Udacity Deep Learning Nanodegree — Part 3

I haven’t written much about the course recently because, other than describing the syllabus, there hasn’t been much to write. As the course progressed, there was a lot of discussion of neural networks and finally a real introduction to TensorFlow. I found TensorFlow initially confusing but then quite comfortable. It’s odd how, after programming in a language like Python for a while, it becomes confusing to have to declare “placeholders” (variables) and constants up front. You then have to run a routine within TensorFlow to initialize the global variables used by the session. It’s all pretty obvious if you learned to program in an older-style language, but over the past few years it has become unfamiliar. I can’t even count how many times my program failed for lack of a sess.run(tf.global_variables_initializer()) line.

Beyond TensorFlow, we’ve moved on to the second project, which I’m still completing: an image classification project using the CIFAR-10 image dataset.
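The project starts with the usual preprocessing steps before any network is built. A rough sketch of what that involves, using NumPy and hypothetical helper names of my own choosing (not the course’s code):

```python
import numpy as np

def normalize(images):
    """Scale pixel values from [0, 255] down to [0, 1]."""
    return images.astype(np.float32) / 255.0

def one_hot_encode(labels, n_classes=10):
    """Convert integer class labels to one-hot vectors."""
    return np.eye(n_classes, dtype=np.float32)[labels]

# A fake batch standing in for CIFAR-10 data: 32x32 RGB images, 10 classes.
batch = np.random.randint(0, 256, size=(4, 32, 32, 3))
labels = np.array([0, 3, 9, 1])

norm = normalize(batch)
encoded = one_hot_encode(labels)
print(norm.min() >= 0.0 and norm.max() <= 1.0)  # → True
print(encoded.shape)                            # → (4, 10)
```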

My home system, running Ubuntu and slightly upgraded since this pic, is why I don’t feel the need to use AWS…

Also, along the way there was an introduction to “cloud computing” that really wasn’t. This is about the third course I’ve taken that has introduced me to “cloud computing” when what they really meant was “we’ll show you how to set up an AWS instance, so you can run things somewhere other than your computer.” In this case, the goal is to get you running on a machine with a good-sized GPU. As I already have a nice workstation with a GTX 1070, I’ll mostly decline. Even with the AWS credit they give you, it’s both cheaper and easier for me to run things at home.

I’m finding the major frustration is that rather than having you write code from scratch, they provide a lot of “background” code and have you fill in the main logic. Unfortunately, the way they set things up is not necessarily the way I would, and the code isn’t particularly well commented, so I’m spending more time than I should trying to figure out why they set up the data structures the way they did rather than in one of a million other ways they might have. If there’s a particular reason or convention behind their choices, it’s not explained. Often, it seems like it would have been easier to write the whole thing from scratch than to figure out what they’re doing.

But even that is turning out to be reasonably good Python practice, and it reflects the reality of the working world: you’ll often work on code that somebody else wrote. In the absence of a strong convention about how things should be set up, you will see lots of variation. It’s obvious that the code examples in this course were prepared by multiple individuals, so I’m seeing a lot of approaches I haven’t seen before.

That said, I am appreciating the message of one of the talks at PyCon, the one that spoke specifically to code readability. It’s something that many people need to do better.

All in all, I continue to get my money’s worth.
