You should be able to write efficient code, and know how to do so in situations where it matters. You should also be able to recognize when it’s more important to optimize for readability, maintainability, and clarity.
It’s possible to simplify the “normal” things for normal users while still providing source code and repositories that allow us strange creatures to continue experimenting, testing, and yes, struggling with inevitable incompatibilities all night long.
I found TensorFlow initially confusing but then quite comfortable. It’s odd how, after programming in a language like Python for a while, it becomes disorienting to declare “placeholders” (graph inputs that get their values later) and constants up-front, and only then initialize and run them inside a session.
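A minimal sketch of that declare-then-run pattern, assuming TensorFlow 1.x graph mode (reachable via `tf.compat.v1` in TensorFlow 2.x):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use the TF1-style deferred-execution graph mode

# Declare the graph up-front: a placeholder (fed at run time) and a constant.
x = tf.placeholder(tf.float32, shape=(), name="x")
c = tf.constant(2.0, name="c")
y = x * c  # nothing is computed yet; this only builds the graph

# Computation happens only inside a session, once the placeholder is fed.
with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: 3.0})
    print(result)
```

Coming from ordinary Python, where `x * c` would evaluate immediately, this two-phase build-then-run flow is exactly the part that takes getting used to.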
I left last week’s PyData Meetup with more questions than answers. Questions like “why does that neural net I just wrote perform the way it does?” So, with a couple of weeks left until the next project is due, I decided to go back and revisit the second half of the neural networks topic before moving forward.
Tonight I joined the first Southern California PyData meetup. It featured two speakers discussing how to better understand the predictions made by machine-learning models, and why it might be important to do so. I was impressed by the capabilities of the packages demonstrated, and by the likely importance of having such capabilities as we move forward with deep-learning-based automation that could cause catastrophic results if it fails in unexpected ways.