Tensorflow v1.x - Lesson #1

Let’s just say that you woke up this morning and you realized:
  1. Ah! I have a lot of smart IoT devices in my house
  2. I can make them smarter and tailor their behavior to my needs
  3. I can build Skynet
And that's when you asked yourself: how do I do that?! Well, my friend, I woke up with the same idea, and we're going to learn it together. As usual, my disclaimer is the same one I give for everything I attempt to learn in DIY mode.

Assumptions

I will be using a Mac (High Sierra); however, given the nature of these tasks, you will very likely be able to adapt almost everything I am going to use to another platform. Bash is my default terminal and Python (Anaconda) will be the language of reference. Luck will be with me too, particularly when I need a corner to hammer my head against in difficult times while trying to figure out why the WTF something won't work.

Recipe of the day

  1. Download the installer for Anaconda. Install.
  2. Create an environment for our projects
  3. Install a Tensorflow build that is compatible with Anaconda
  4. A quick test to make sure that everything runs
STEP 2 - DETAILS
At the time I am writing this article, Tensorflow isn't compatible with Python 3.6, so there are a couple of tricks to make sure we can sail between the obstacles. The main one is to pin the environment to Python 3.5:
conda create -n tensorflow python=3.5
Now we activate the environment we just created using:
source activate tensorflow
You will notice that the prompt changes to show the name of the logical environment you created.
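If you want to double-check that the new environment really picked up Python 3.5 before moving on, a quick sanity check (assuming you are inside the activated environment) is:
python --version
It should report a 3.5.x build; if it says 3.6 instead, the environment was probably created without the python=3.5 flag.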
STEP 3 - DETAILS
Now it is time to install Tensorflow and, according to the documentation, you should be able to do a simple pip install. As it turns out, if you do that, you are going to get a runtime mismatch that has been driving people crazy. I am one of them: it bit me on one of the computers where I tried it.
My solution to that issue has been to install it through conda-forge, a community-maintained package channel, so to speak.
conda install -c conda-forge tensorflow
STEP 4 - DETAILS
Now, let’s run a quick test to make sure that everything runs and runs properly.
Type python and then enter the following code.
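The original screenshot isn't reproduced here, so what follows is my reconstruction of the classic Tensorflow 1.x smoke test; any variation works as long as it creates a session and runs something:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))
If everything is wired up correctly it should print b'Hello, TensorFlow!' (the b simply marks a byte string in Python 3).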
If it is your first time with Python, remember that indentation is crucial, so add spaces at the start of the 4th line and hit enter twice to end the block and execute it.
Hit CTRL+D once to leave the Python prompt, then type
source deactivate
to exit the conda environment we created earlier. You can re-enter it the same way we activated it in the first place.
The code will be executed and it will likely tell you something about your CPU/GPU. Now, that may seem like a small or unimportant detail; however, the difference between merely imitating what someone writes (me) and actually learning (us) lies in paying attention to those details.
Wait, grab a cup of coffee first, this is going to put you to sleep after five words…
Alright… Google, which publishes Tensorflow, decided to stop supporting GPUs on the Mac after release 1.2 of Tensorflow. At the time I am writing this article we are on 1.5 and they are actively baking 1.6. If by any chance you have an urgent need to use a GPU and have a CUDA-compatible video card, then I suggest you take a look at Darien's post. Bless his heart for the clarity and guidance.
You will notice that the message printed by the previously executed program tells you that some instructions your CPU supports are not being used. It is not a big deal once you understand what that means.
Modern CPUs provide a lot of low-level instructions beyond the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.
They are not used because the default Tensorflow distribution is built without CPU extensions such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
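If you are curious which of these extensions your own Mac's CPU actually supports, you can ask the system from the terminal (this is macOS-specific; on an Intel Mac, AVX typically shows up as AVX1.0 in the first list and AVX2 in the second):
sysctl machdep.cpu.features
sysctl machdep.cpu.leaf7_features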
If you have a GPU, you shouldn't care about AVX support, because the most expensive operations will be dispatched to the GPU device unless you explicitly ask otherwise. The warning might get annoying after a while, though, so if you want to get rid of it (silence it for good), you can use:
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
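For context, here is roughly where that line belongs in a script. The usual advice is to set the variable before importing Tensorflow so that the native logger picks it up; a minimal sketch:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # 2 filters out INFO and WARNING messages
import tensorflow as tf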
If you don't have a GPU and want to squeeze as much as possible out of the CPU, you should build Tensorflow from source, optimized for your machine, with AVX, AVX2, and FMA enabled if your CPU supports them.
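I won't walk through a full build from source here, but just to give an idea of what "optimized for your machine" looks like, the build step usually ends up being something along these lines (a sketch based on the common StackOverflow recipes; pass only the flags your CPU actually supports):
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package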
Someone on StackOverflow mentioned that TensorFlow Serving has separate installs for a non-optimized CPU and an optimized CPU (AVX, SSE4.1, etc.); the details are here. I had underestimated the value of that comment until I started running into some problems that required some of those crafty builds. So, don't be afraid of making a mess, and see what works for you.
Speaking of making a mess, you might want to consider using Docker if you are afraid of destroying the planet from a terminal. If you have never used Docker, that's no crime, at least for a few more years, but you can read this and watch this to start redeeming yourself. And if that isn't enough, the dude below makes a bunch of very good cases too.
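If you go down that road, the Tensorflow project publishes official images on Docker Hub; assuming Docker is already installed, something along these lines drops you into a shell inside a container where Tensorflow is preinstalled and nothing touches your machine:
docker run -it tensorflow/tensorflow bash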
I hope this helps you start this new quest. My objective is to learn the basics and then take on one specific project that I can use around the house for some day-to-day Skynet experience.
Happy end of the world, from the terminal.
