A while ago I came across this great blog post by Alex Olivier introducing Google Cloud Run, which lets you deploy your web servers for next to nothing. The timing was perfect, as I'm currently working on a side project of my own and had reached the point where I needed to deploy the first version of it.
SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder
In our new paper on lightweight segmentation we propose a new architecture utilizing an information blocking decoder and spatial squeeze modules. Check it out here.
In our recent paper ExtremeC3Net: Extreme Lightweight Portrait Segmentation Networks using Advanced C3-modules, we introduce an extremely lightweight portrait segmentation model with a two-branched architecture based on the concentrated-comprehensive convolutions block. Our method reduces the number of parameters from 2.08M to 37.9K (around a 98.2% reduction) while keeping accuracy within a 1% margin of the state-of-the-art portrait segmentation method. Check out the full paper here.
I started working at a ML start-up right after graduation and quickly realized that I was not prepared for it. While I had graduated with a bachelor’s degree in engineering physics, a master’s in ML, and had a fair amount of knowledge about algorithms, the skills needed at a company were quite different from what I had learned at university. At the workplace, there was limited value in being able to do complex integrals or prove convergence bounds, while the ability to get things up and running with whatever means possible was crucial.
I find that a surprising number of people in the machine learning field do not track their metrics in a structured and automated way. Some only keep track of their current single best model, some put their faith in storing their whole experiment history in TensorBoard graphs, and some manually log their metrics in a Google Spreadsheet. While these methods might be sufficient in some cases, I find they can be significantly improved in terms of the insight they provide and the resources they consume. In this post I will talk about how to do this, going into depth on the why, what, and how of tracking machine learning project metrics in a structured manner over time. I'll be basing this on the numerous projects I've been involved in, and also on the many mistakes I've made in them. By metrics, I mean the final metrics you generate from an experiment, rather than the per-epoch metrics you get during training.
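As a minimal sketch of what "structured and automated" could mean in practice (this is a hypothetical illustration, not the approach the post goes on to describe): append each experiment's final metrics as one JSON line to a shared log file, so the full history stays queryable. The function names and fields here are my own assumptions.

```python
import json
import time
from pathlib import Path

# Hypothetical helpers for append-only experiment tracking.

def log_metrics(metrics: dict, run_name: str, path: str = "metrics.jsonl") -> None:
    """Append one experiment's final metrics as a JSON line with a timestamp."""
    record = {"run": run_name, "timestamp": time.time(), **metrics}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_history(path: str = "metrics.jsonl") -> list:
    """Read back the full experiment history for comparison over time."""
    lines = Path(path).read_text().splitlines()
    return [json.loads(line) for line in lines]

# Example usage: log a run, then find the best run so far by a chosen metric.
log_metrics({"iou": 0.91, "params": 37_900}, run_name="portrait-seg-v1")
history = load_history()
best = max(history, key=lambda r: r["iou"])
```

Even something this simple beats a spreadsheet in one key way: it is written automatically at the end of every experiment, so no run is ever silently missing from the record.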