[Image: a slide on logistic regression, from one of the first lectures]

These are a few comments on my experience taking the Deep Learning specialization produced by deeplearning.ai and delivered on the Coursera platform. I completed and was certified in all five courses of the specialization during late 2018 and early 2019.

The courses are:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

I am an experienced software engineer, and I had previously taken the Stanford Machine Learning course, also on Coursera.

Overall, the content of the courses is excellent and well presented by Andrew Ng, who is really good at lecturing and explaining the material. I came away with an excellent conceptual foundation for understanding neural networks.

Very helpful prerequisites: writing and troubleshooting code; linear algebra in the form of matrix operations; and a bit of differential calculus.

Videos, slide decks, transcripts of the talks, and the few auxiliary pdf files are all downloadable. I downloaded all of them since I was not sure how long I would continue to have access after I completed the specialization and stopped paying for it. As of two months later, I still have access. But you never know when that might be cut off, so I am glad to have my own copies.

In general I learn best by reading. These courses are video and lecture based, which works well while you are going through them; even for someone like me who prefers text, the format was effective and enjoyable.

For later reference, however, I have missed having professionally written and formatted text, like articles and books, that I can readily skim to find particular points to review and refresh my memory. The transcripts are literal captures of the spoken words and read like one long run-on sentence, with no breaks or formatting.

As for the downloaded pptx slide decks, many of the individual slides do not render correctly in LibreOffice Impress, which I use on my Linux systems, although they display fine in MS PowerPoint. 8-( For me, that is a serious drawback.

Here are sample downloaded files from a lecture in the second week of the first course, which is on the topic of Logistic Regression.

Video: https://tz-earl.github.io//media/week-2-b-logistic-regression.mp4
Slide deck: https://tz-earl.github.io//media/week-2-b-logistic-regression.pptx
Transcript: https://tz-earl.github.io//files/week-2-b-logistic-regression.txt
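To give a sense of what that lecture covers, the gist of logistic regression as taught there – a single "neuron" trained by gradient descent – can be sketched in a few lines of NumPy. This is my own illustrative sketch, not code from the course materials; the data layout (features in rows, examples in columns) follows the convention the course uses.

```python
import numpy as np

def sigmoid(z):
    """Squash a real-valued input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 features, 4 examples (one example per column), shape (n_x, m).
X = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.0]])
Y = np.array([[0, 0, 1, 1]])   # labels, shape (1, m)

w = np.zeros((2, 1))           # weights
b = 0.0                        # bias
m = X.shape[1]
lr = 0.5                       # learning rate

for _ in range(1000):
    A = sigmoid(w.T @ X + b)          # forward pass: predictions, shape (1, m)
    dZ = A - Y                        # gradient of the loss w.r.t. z
    w -= lr * (X @ dZ.T) / m          # gradient descent updates
    b -= lr * float(np.sum(dZ)) / m

preds = (sigmoid(w.T @ X + b) > 0.5).astype(int)
```

On this tiny, linearly separable toy set, the trained model recovers the labels; the course's programming exercises build up essentially this structure, piece by piece.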

Quizzes are provided and automatically graded. The questions are all multiple choice; some have more than one correct answer, and you must select all of them to get credit. Occasionally a question of that kind was worded ambiguously and was really hard to answer correctly, and in a few cases I gave up trying. You can take the same quiz repeatedly, which I would do as a way to better learn the material, including some of the nuances I might have missed otherwise.

Programming exercises in Python are provided and automatically graded. They are done via Jupyter notebooks that are remotely hosted, so you do not need to have anything installed locally. For the most part the exercises were very short with lots of handholding in comments embedded in the code. You write small snippets of code within functions that are already defined.

In addition to the code templates and comments, there were often excellent explanations and graphics that accompanied the notebook code cells.

These exercises felt like just dipping your toes in the water: a taste of what it would be like to actually implement the algorithms and the math discussed in the lectures. Nothing was required at a higher level of design, because all function signatures and the overall software structure were already provided.
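To give a flavor of that format, here is a made-up skeleton in the same spirit; it is not an actual course exercise. The function signature, docstring, and step-by-step comments are given to you, and you fill in the handful of lines marked as student code.

```python
import numpy as np

def propagate(w, b, X, Y):
    """One forward/backward pass of logistic regression.

    A hypothetical exercise skeleton in the style of the course notebooks:
    w is (n_x, 1), b is a scalar, X is (n_x, m), Y is (1, m).
    """
    m = X.shape[1]

    # FORWARD PROPAGATION (the student writes roughly these two lines)
    A = 1.0 / (1.0 + np.exp(-(np.dot(w.T, X) + b)))              # activations
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m  # cross-entropy

    # BACKWARD PROPAGATION (and roughly these two lines)
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m

    return {"dw": dw, "db": db}, float(cost)
```

The autograder then runs your completed functions against fixed inputs; you rarely write anything outside bodies like this one.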

Despite this limitation, I was satisfied with the exercises because they also gave me an introductory exposure to Python and to Jupyter notebooks. You do end up with some complete machine learning models that you can explore and play with further.

Moreover, there are now lots of good frameworks that implement this level of functionality for you – TensorFlow, Keras, PyTorch, scikit-learn, and others – so my guess is that very few of us will ever need to write code at this level. The courses include brief tutorials on Keras and TensorFlow.
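As an illustration of that point, fitting a logistic regression with one of those frameworks takes only a couple of lines. This is my own sketch using scikit-learn, assuming it is installed; note that scikit-learn expects examples in rows, shape (m, n), the opposite of the course's convention.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 4 examples with 2 features each, shape (m, n).
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 2.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)   # the framework handles the optimization
preds = clf.predict(X)
```

Everything the NumPy exercises have you write by hand – the sigmoid, the cost, the gradients, the update loop – is hidden inside that one `fit` call.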

Warning: the course honor code forbids posting your code snippets in the forums, whether to offer or to request help. I noticed that this made things really, really difficult for students who are not used to debugging code.

This is a low-touch series of courses. The user forums are staffed by volunteer mentors who may or may not respond to your questions and problems. Sometimes other students do, but that is pretty hit and miss: each student can start at any time, so there is no synchronization of where people are in the material and no identifiable cohort of which you are part.

If you can learn this kind of material on your own and do the coding with little support and little human contact, this specialization can be a very good learning experience and a good value. Otherwise, it might be more of an exercise in frustration.