When it comes to deep learning models, there are many architectures, variations of those architectures, and then hyperparameters to tune. It is easy to feel overwhelmed by the available options when choosing one to solve the problem at hand. But you shouldn’t feel helpless. Here is some general advice for getting started, whether you are using pre-trained architectures for transfer learning or training a model from scratch-
Read Respective Papers
It is not as hard as it sounds, because all these models are described in detail in their papers. For example, this is the ResNet paper- Deep Residual Learning for Image Recognition. You can learn about ResNet there. And it is very likely that your framework’s documentation links directly to the paper in which each model was introduced. This one is the link to PyTorch’s page-
Study Benchmarks
Benchmarks are a great way to achieve better results and to see whether an architecture fits your needs. For example, the last time I used MobileNet, it was only about 17 MB in size with pre-trained weights and worked great for narrow use cases such as recognizing sign language. That means you can deploy the model to mobile devices, a Raspberry Pi- you name it! On the other hand, you would want something like ResNet-50 when chasing state-of-the-art results.
- You can rely on standard benchmarks such as DAWNBench- An End-to-End Deep Learning Benchmark and Competition, run by Stanford, and
- benchmarks published by researchers, such as cnn-benchmarks by Justin Johnson, who is now at UofM and used to teach Stanford’s reputed CNN course.
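The size trade-off above is easy to estimate yourself: checkpoint size is roughly parameter count times four bytes (float32). A small sketch, using approximate parameter counts for the ImageNet variants (exact numbers vary by library version):

```python
# Back-of-the-envelope model size: parameter count x 4 bytes (float32).
# The counts below are approximate, for illustration only.
APPROX_PARAMS = {
    "MobileNetV2": 3_500_000,   # small enough for mobile / Raspberry Pi
    "ResNet-50": 25_600_000,    # larger, but stronger accuracy
}

def size_mb(n_params: int, bytes_per_param: int = 4) -> float:
    """Rough checkpoint size in megabytes for float32 weights."""
    return n_params * bytes_per_param / 1e6

for name, n in APPROX_PARAMS.items():
    print(f"{name}: ~{size_mb(n):.0f} MB")
```

Quantizing to int8 (one byte per parameter) shrinks these estimates by about 4x, which is how mobile deployments often close the gap further.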
Study Others’ Works
- Look at projects created by others, see what they used, and if they have published blog posts accompanying the projects, read those and look for their justification for choosing one architecture over another. Or just look at their code and their results.
- Look at submissions to open and past Kaggle competitions- see which architectures they used, and read the justification, if any.
Prototype a Lot
Whether it is a school project, a personal project, or a Kaggle competition, prototype a lot with the same data- try different architectures, different variants, and so on, and judge for yourself which is best. Ultimately, choosing a model for a given task is a skill you master through practice. You have to run experiments and build prototypes with different pre-trained models to get a feel for them.
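The prototyping loop above can be sketched as a small harness: run the same train-and-evaluate routine for each candidate architecture and keep the best. Here `train_and_evaluate` is a hypothetical stand-in for your real training code; the lambdas below just return placeholder validation accuracies so the harness is runnable:

```python
# Sketch of a model-selection loop over candidate architectures.
# Each candidate maps a name to a zero-argument function that trains
# the model on the shared data and returns a validation score.
from typing import Callable, Dict, Tuple

def pick_best_model(candidates: Dict[str, Callable[[], float]]) -> Tuple[str, float]:
    """Run every candidate once; return the name and score of the best."""
    results = {name: run() for name, run in candidates.items()}
    best = max(results, key=results.get)
    return best, results[best]

# Placeholder runs standing in for, e.g., fine-tuning each architecture:
candidates = {
    "mobilenet_v2": lambda: 0.91,  # hypothetical validation accuracy
    "resnet50": lambda: 0.94,      # hypothetical validation accuracy
}
best_name, best_score = pick_best_model(candidates)
print(best_name, best_score)  # resnet50 0.94
```

Keeping the data and evaluation fixed while only the architecture varies is what makes the comparison fair.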
(Originally a Kaggle discussion post by the Author.)