Monday, February 26, 2018

Object Detection Tensorflow

To get started with object detection, have a look at the following Jupyter notebook:

Assuming that you have already set up your environment with TensorFlow (in my case it is a Docker container), you still need to execute the following instructions.

One issue I kept running into was that the Jupyter notebook failed at the following line despite my having followed all the instructions:

from object_detection.utils import ops as utils_ops

I discovered that this was due to the Python libraries not being available in PYTHONPATH:

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

If the problem still persists in your Jupyter notebook despite having executed the above in your container or TensorFlow environment, consider adding the paths directly, as can be seen below:
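If you re-run notebook cells, the same directory can end up on the search path several times. A small helper along these lines guards against that (a sketch; the paths are examples and should point at your own tensorflow/models checkout):

```python
import sys

def ensure_on_path(path, search_path=None):
    """Append `path` to the module search path only if it is not already there."""
    search_path = sys.path if search_path is None else search_path
    if path not in search_path:
        search_path.append(path)
    return search_path

# Example paths -- adjust to wherever you cloned tensorflow/models.
ensure_on_path('/notebooks/models/research')
ensure_on_path('/notebooks/models/research/slim')
```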

====Extract Jupyter Notebook=============================
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from timeit import default_timer as timer
import cv2

sys.path.append('/notebooks/models/research') # point to your tensorflow models research dir
sys.path.append('/notebooks/models/research/slim') # point to your slim dir

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from object_detection.utils import ops as utils_ops

if tf.__version__ < '1.4.0':
  raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
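One caveat with the version check in the extract above: comparing version strings lexicographically breaks for versions like 1.10.0, which sorts before 1.4.0 as a string. A numeric comparison is safer; here is a minimal sketch:

```python
def version_at_least(version, minimum):
    """Compare dotted version strings numerically, not lexicographically."""
    def parts(v):
        # Keep only the numeric components, so '1.4.*' degrades gracefully.
        return tuple(int(p) for p in v.split('.') if p.isdigit())
    return parts(version) >= parts(minimum)

# As strings, '1.10.0' < '1.4.0' is True, which would wrongly trigger the
# ImportError above; the numeric comparison gets it right.
```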


Note that I have also changed the default method to use OpenCV for faster image I/O and added a timer to measure performance.

You can install OpenCV using:
sudo apt-get install python-opencv

==============Extract ========================
for image_path in TEST_IMAGE_PATHS:
  start = timer()

  # Read the image with OpenCV instead of PIL for faster I/O.
  image = cv2.imread(image_path)
  # The array based representation of the image will be used later in order
  # to prepare the result image with boxes and labels on it.
  # OpenCV loads images in BGR order, so convert to the RGB order the model expects.
  image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)

  # Actual detection.
  output_dict = run_inference_for_single_image(image_np, detection_graph)
  end = timer()
  duration = (end - start)

  print('Image: {0} took {1:.3f}s to be processed'.format(image_path, duration))
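A note on the channel order in the extract above: cv2.imread returns pixels in BGR order, while the detection model expects RGB. Reversing the last axis of the array is the pure-NumPy equivalent of cv2.cvtColor(image, cv2.COLOR_BGR2RGB); here is a small sketch that needs no OpenCV:

```python
import numpy as np

# A 2x2 "image" that is pure blue in BGR layout (channel 0 = blue).
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Reversing the channel axis converts BGR to RGB: the blue value moves
# from channel 0 to channel 2.
rgb = bgr[..., ::-1]
```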


Tuesday, February 20, 2018

Installing an SSL certificate for NGinx on Ubuntu

I never thought it would be that easy, but with a tool called Let's Encrypt, all you have to do to add an SSL certificate to your Nginx server is follow the instructions here, that is:

- Update / Install the following packages:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx 
Then you ask certbot to install the certificate:
sudo certbot --nginx
Note: you will be asked for a domain name; IP addresses are not allowed.

Certificates also need to be renewed periodically; certbot can take care of that according to the documentation.
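According to the certbot documentation, renewal is done with the certbot renew command, which only renews certificates that are close to expiry. A cron entry along the following lines is one common approach (the schedule is just an example; test with a dry run first):

```shell
# Dry-run first to confirm renewal works without touching real certificates:
#   sudo certbot renew --dry-run
# Example crontab entry: attempt renewal twice a day.
0 0,12 * * * certbot renew --quiet
```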

Friday, February 16, 2018

TensorFlow running in Docker deployed on Ubuntu

This entry walks through the steps required to set up TensorFlow on an Ubuntu environment by running it within a Docker container.

Whether you're starting off on machine learning with TensorFlow or you're a veteran who wants to set up an infrastructure with a Docker container running TensorFlow, this article is for you.

Update (17/3/2018): For doing the setup using GPU instances on AWS check my article here 

Why do we want to use Docker?

  • We do not want our TensorFlow configuration to be messed up by other Python versions and configs for other applications, so we isolate it
  • Installing from a Docker image is very practical and saves us a lot of time, so we can focus directly on our coding

Prerequisites :

1. An Ubuntu VM; I am using one with 8 GB RAM and a 100 GB SSD
2. Docker CE installed; follow this link
3. Make sure you have TCP access to ports 8888 and 9000 if running on AWS (or another cloud platform)

Once Docker has been installed, make sure that non-root users can also execute the docker command by following the instructions on the Docker site.

Running Tensorflow Container

At the time of writing the current version of TensorFlow is 1.5, so to kick-start just execute the following command, which is documented here:

docker run -it -p 8888:8888 tensorflow/tensorflow

Note that when the Docker container runs it prints some output including a URL with a token; please copy and paste it somewhere safe.

Using Tensorflow

When you launch the run command for the TensorFlow Docker container above, a URL with a token will be shown on your console.

Use this URL directly to log in to your Jupyter notebook.
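The login URL typically has the form http://localhost:8888/?token=<long hex string>. If you only kept the raw console output, the part you need is just the token query parameter; a throwaway sketch for pulling it out (the URL below is a made-up example):

```python
from urllib.parse import urlparse, parse_qs

def extract_token(url):
    """Return the `token` query parameter of a Jupyter login URL, or None."""
    query = parse_qs(urlparse(url).query)
    return query.get('token', [None])[0]

# Hypothetical example:
# extract_token('http://localhost:8888/?token=abc123') returns 'abc123'
```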

Now this is great, but we also need to create a volume so that we can easily access files, for example ones pulled from a Git repository.

So you might want to remove the Docker container that you just started in the last step and use the docker-compose file below instead.

Docker compose

Docker compose needs to be installed using the following instructions .

Here is a simple docker-compose file to run a TensorFlow container:


version: '3'
services:
  tensorflow:
    build: .
    container_name: tensorflow
    volumes:
      - notebooks:/notebooks
    ports:
      - "8888:8888"
      - "6006:6006"
    environment:
      - IMAGE_SIZE=224
      - ARCHITECTURE=mobilenet_0.50_224
volumes:
  notebooks:


The command to start the container is then simply:

docker-compose -f  docker-compose-tensorflow up 

and to stop the container :

docker-compose -f  docker-compose-tensorflow down

Note that the docker-compose file contains a port mapping for 6006, which is used for TensorBoard, and a volume mapping to notebooks.

The notebooks volume ensures that your notebooks persist across subsequent up/down cycles of the container; otherwise you would lose all your content each time the container shuts down.

Managing containers 

The best way I have found to manage my containers in a practical manner is through Portainer, and you can install it on Docker using the following commands (site here):

$docker volume create portainer_data
$docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Now log in to http://your_ip_address:9000/ to set a password for admin.

Container shell access 

If you want to connect to your running container's shell, and assuming it is called tensorflow as in our case, just do:

docker exec -it tensorflow /bin/bash 

Else you can also use Portainer as explained below .

Accessing Tensorflow

You should be able to log onto your Docker containers directly through Portainer by clicking on the container name, then on Console, and then on Connect with bash.

By default this will give you access to the /notebooks directory.

I found this feature of Portainer particularly useful, as it means you can access the bash shell of your running container directly from the Portainer web app.

Also, keeping files under /notebooks allows you to view them through your Jupyter notebook instance.

Reverse Proxy (optional step)

Although not absolutely required, I find it useful to be able to access all the tools directly from port 80. You can install a reverse proxy in front of Portainer and the TensorFlow Jupyter notebook by installing nginx.

Installation is pretty straightforward; please check the instructions here:

  • sudo apt-get update
  • sudo apt-get install nginx

Now, assuming you are using the default ports as mentioned above (else modify as required), you need to create a file with a *.conf extension, e.g. myreverseproxysettings.conf (or whatever suits you), then sudo cp (copy) this file to the directory /etc/nginx/conf.d.

Note that the main nginx config includes configuration files with a *.conf extension within the /etc/nginx/conf.d directory.

Remember to change My_IP_ADDRESS_OR_DOMAIN_NAME with your IP Address or Domain name .


server {
    listen       80;
    listen       [::]:80;
    server_name  My_IP_ADDRESS_OR_DOMAIN_NAME;
    # root         /usr/share/nginx/html;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location /portainer/ {
        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_pass "http://localhost:9000/";
    }

    location / {
        proxy_pass "http://localhost:8888/";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

Following commands are useful:

1. Start nginx

sudo systemctl start nginx

2. Stop nginx

sudo systemctl stop nginx

3. Check status nginx

systemctl status nginx

To get out of the status pager, just press "q".