Wednesday, April 25, 2018

Docker with Jenkins Plugins

At times you might want to have a version of Jenkins that automatically comes with a predefined set of plugins installed.

You might want to start from an existing Jenkins instance that already has all the plugins installed; assuming its URL is http://myjenkinshostname/jenkins, you can get the list of plugins you need as follows.

Navigate to http://myjenkinshostname/jenkins/script, paste the following code and execute it:

def plugins = jenkins.model.Jenkins.instance.getPluginManager().getPlugins()
plugins.each { println "${it.getShortName()}: ${it.getVersion()}" }


There are other ways to get the list of plugins as well; check this site.
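For instance, if you prefer not to use the script console, one option is to query the Jenkins plugin manager REST API. Below is a minimal sketch only, assuming you have a user and API token with read access (the credentials and host are placeholders):

====list_plugins.py (sketch)====
import requests

# hypothetical host and credentials - adjust to your own Jenkins instance
url = 'http://myjenkinshostname/jenkins/pluginManager/api/json?depth=1'
response = requests.get(url, auth=('your_user', 'your_api_token'))
response.raise_for_status()

# print each installed plugin with its version
for plugin in response.json()['plugins']:
    print('{0}: {1}'.format(plugin['shortName'], plugin['version']))
====================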

You would then create a Dockerfile where you define all the necessary plugins, e.g.:

===Dockerfile==================
FROM jenkins/jenkins:lts

ENV JENKINS_OPTS --prefix=/jenkins
RUN /usr/local/bin/install-plugins.sh  aws-credentials amazon-ecr amazon-ecs pipeline-maven

==========================

When the Docker container is initialised, this will automatically install all the plugins you defined within it.
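For example, assuming the Dockerfile above is saved in the current directory, you could build and run the image with something along these lines (the image and container names are just placeholders):

docker build -t my-jenkins .
docker run -d --name my-jenkins -p 8080:8080 -p 50000:50000 my-jenkins

Because of the --prefix option set in JENKINS_OPTS, Jenkins would then be reachable under http://localhost:8080/jenkins.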

Apparently there is also a way to install tools directly through a Groovy script, as mentioned on the official GitHub repo for Jenkins, but I have yet to try it.

This list of scripts might then be interesting to try out for automatic installation of tools, e.g. to automatically install a specific version of Maven.

Saturday, March 17, 2018

Object Detection DL training with Tensorflow on GPU AWS

It turns out that if you want to train a model with, say, 5 different categories of images, you need to make use of an EC2 instance on AWS that has GPU capabilities.

Otherwise, what happens with EC2 CPU instances is that they quickly run out of memory within the first dozen steps and the process gets killed.

For that you would need at the very least a p2.xlarge, and this is billed at around $0.9/hr (at the time of writing of this article), so it is still quite expensive. Make sure that this VM is turned off the moment it is not in use.

I tried setting up a vanilla p2.xlarge but ended up having issues with NVIDIA drivers, so when you do launch an EC2 instance from AWS, try to do so with an already configured AMI, e.g. the AWS Deep Learning AMI.

Follow these steps:
  1. Ensure the NVIDIA drivers are properly installed - if you're using the AWS Deep Learning AMI then chances are you don't need to worry about that
  2. Then install Docker CE for Ubuntu
  3. Ensure that the following post-installation instructions are also covered
  4. Then install NVIDIA Docker using the instructions on the page.
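A quick sanity check that the drivers and the NVIDIA runtime are wired up correctly (assuming the stock nvidia/cuda image) is to run:

nvidia-docker run --rm nvidia/cuda nvidia-smi

If this prints the GPU details of the p2.xlarge, you are good to go.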

You should then be able to launch your Docker container using the following commands:


docker volume create notebooks

nvidia-docker run -it --name tensorflow -v notebooks:/notebooks -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu


Once you have the container running, it's just a question of following my other articles to continue with the training:

1. Setup of environment , in my case using Docker
2. Labeling and creation of tfRecord
3. Training Custom Object Detection

So typically you would use GPU instances to train your models, and CPU instances only to run tests against your frozen inference graph (for example using Jupyter), as they are less expensive.

Monday, March 12, 2018

Mauritius Heat Map for Real Estate Prices per m2 using Python

Yesterday I thought I would give it a try and see whether it is possible to create a heat map of real estate prices per m2 in Mauritius; this covers the prices of apartments, houses and villas.

Given that there is no readily available dataset to work against, I collected information from popular online sources that list real estate prices, in an attempt to then plot them on the Mauritian map.

I used only Python and Jupyter notebook for the whole process of collecting data, analysing it and plotting.

The process of gathering data from web pages is called web scraping, and I will not give out code for this: if you are not careful enough you might cause DoS attacks against the websites, and you don't want that.

I used beautifulsoup4 to parse the retrieved HTML and extract exactly the data I was interested in, in order to create some pandas dataframes.
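Purely as an illustration (the markup below is completely hypothetical and does not correspond to any real website), parsing already-retrieved HTML into a dataframe looks roughly like this:

====parsing sketch====
from bs4 import BeautifulSoup
import pandas as pd

# HTML assumed to have been retrieved and saved locally beforehand
html = open('listings_page.html').read()
soup = BeautifulSoup(html, 'html.parser')

rows = []
for ad in soup.find_all('div', class_='listing'):  # hypothetical tag/class names
    region = ad.find('span', class_='region').get_text(strip=True)
    price = float(ad.find('span', class_='price').get_text(strip=True))
    surface = float(ad.find('span', class_='surface').get_text(strip=True))
    rows.append({'region': region, 'price_m2': price / surface})

df = pd.DataFrame(rows)
====================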

The first analysis based on the data showed the following :



The price in Rs per m2 is lower in the centre than in the North, and it is much more expensive on the East of the island. Note, however, that 4000+ records were used, on which an average was computed for each area.

Below are the regions where the prices are the lowest per m2 in Mauritius :

And here are the regions which have the highest prices per m2 in Mauritius:




Note that there are over 150 regions, so I cannot show the full list, but this gives a good enough indication of prices, with the coastal regions obviously being more expensive.

Then I used the Google Maps API to get the coordinates based on the region name. The code for this can be seen below; the function name is geodataMapper. It also handles the problem that if you make a number of successive calls to the Google Maps API (in this case 150+), at some point you will get an exception saying something like "too many retries":
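The original code only appears as a screenshot in this post; a minimal sketch of the idea, assuming the googlemaps Python client and a hypothetical API key, would look something like this:

====geodataMapper (sketch)====
import time
import googlemaps

gmaps_client = googlemaps.Client(key='YOUR_GOOGLE_MAPS_API_KEY')  # hypothetical key

def geodataMapper(region, retries=3, pause=2):
    """Return (lat, lng) for a region name, backing off when the API complains."""
    for attempt in range(retries):
        try:
            results = gmaps_client.geocode('{0}, Mauritius'.format(region))
            if results:
                location = results[0]['geometry']['location']
                return location['lat'], location['lng']
            return None, None
        except Exception:
            # e.g. the "too many retries" error: wait a bit and try again
            time.sleep(pause)
    return None, None
====================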


To retrieve latitude and longitude data for all the regions, I wrote the following code, which iterates through a pandas dataframe containing the region and price per m2 and, for each region, calls the above geodataMapper. At the end it creates a CSV file containing region, price per m2, latitude and longitude:
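Again the original code is a screenshot; the gist of it, assuming a dataframe df with hypothetical columns region and price_m2, is roughly:

====coordinates to csv (sketch)====
latitudes, longitudes = [], []
for region in df['region']:
    lat, lng = geodataMapper(region)   # geocode each region in turn
    latitudes.append(lat)
    longitudes.append(lng)

df['latitude'] = latitudes
df['longitude'] = longitudes
df.to_csv('region_prices_coordinates.csv', index=False)
====================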


Now once you have this data you can install gmaps, which is awesome for creating some basic heat maps and even has an extension for Jupyter notebook.

Below is the code for it; all you need is a pandas dataframe from the CSV you saved:
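Roughly, and assuming the same hypothetical CSV and column names as above, the gmaps part boils down to:

====gmaps heatmap (sketch)====
import gmaps
import pandas as pd

gmaps.configure(api_key='YOUR_GOOGLE_MAPS_API_KEY')  # hypothetical key

df = pd.read_csv('region_prices_coordinates.csv').dropna()
locations = df[['latitude', 'longitude']]
weights = df['price_m2']

fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
fig  # displays the heat map inline in the Jupyter notebook
====================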


And you should be able to see something like this appearing on your jupyter notebook:


Which is pretty neat. I still need to work out how to create the heat map based on Google Maps boundaries instead of just one point per region, but that I will leave for another time.


Sunday, March 04, 2018

Training Custom Object

The following activities have been done:

1. Setup of environment , in my case using Docker
2. Labeling and creation of tfRecord

Now we need to launch the actual TensorFlow training on the custom object. I have been following the tutorial from pythonprogramming to do that.

Docker 

1. Start by downloading a copy of ssd_mobilenet_v1_coco_11_06_2017.tar.gz:

wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz

2. Copy this to your object_detection folder:

docker cp ssd_mobilenet_v1_coco_11_06_2017.tar.gz tensorflow:/notebooks/models/research/object_detection/

3. Then untar the model:
tar -xvzf ssd_mobilenet_v1_coco_11_06_2017.tar.gz

4. Now take the ssd_mobilenet_v1_pets.config below and copy it to the training directory:
docker cp ssd_mobilenet_v1_pets.config tensorflow:/notebooks/models/research/object_detection/training

======ssd_mobilenet_v1_pets.config =====
# SSD with Mobilenet v1, configured for the mac-n-cheese dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "${YOUR_GCS_BUCKET}" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 1
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
          anchorwise_output: true
        }
      }
      localization_loss {
        weighted_smooth_l1 {
          anchorwise_output: true
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 10
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/object-detection.pbtxt"
}

eval_config: {
  num_examples: 40
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "training/object-detection.pbtxt"
  shuffle: false
  num_readers: 1
}
====================================

5.  Copy the following object-detection.pbtxt to the object_detection/data directory:

docker cp object-detection.pbtxt tensorflow:/notebooks/models/research/object_detection/data

========object-detection.pbtxt============

item {
  id: 1
  name: 'object_label_name'
}

====================================

In my case I am identifying only one object, so there is only 1 item; change object_label_name to the name of the label you defined when annotating your images.
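If you were detecting more than one class, the file would simply contain additional items with incrementing ids (the label names below are hypothetical), and num_classes in the config above would need to be updated to match:

item {
  id: 1
  name: 'object_label_one'
}
item {
  id: 2
  name: 'object_label_two'
}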

6. Launch the training using the following command:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

Note: if there are any errors, try doing the following before executing train.py:
cd /notebooks/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd object_detection


You should start seeing steps being executed as below :



Tensorboard

To monitor progress on TensorBoard use the following command; this can be run in another docker exec (PuTTY) window:

tensorboard --logdir='training'


You should be able to see the board on http://your_DOMAIN_OR_IP:6006/ ; this took a few seconds for me before it actually showed up:








Object Detection Labelling image and generating tfRecord

I made use of the tutorial from jackyle to label my images. Note that pythonprogramming also has the exact same tutorial :) !

Mind you, the hardest part is really finding the images; the rest goes pretty fast.

Basically you use the tool labelImg to help with the labelling, which creates an XML file for each of the images that you label.

I used the Windows binary, which can be found here, and did all the labelling from Windows itself.

Your directory structure should be like this under ROOT_DIR/models/research/object_detection:

|-xml_to_csv.py
|-data
|-images
   |- train
   |- test


Once you have labelled all your images you need to do the following:

1. Place 70 % of your images + xml in a folder images/train
2. Place 30% of your images + xml in a folder images/test
3. Create a xml_to_csv.py file that looks like below:


==========xml_to_csv.py====================
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text)
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    for directory in ['train','test']:
        image_path = os.path.join(os.getcwd(), 'images/{}'.format(directory))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv('data/{}_labels.csv'.format(directory), index=None)
        print('Successfully converted xml to csv.')



main()


========================================

4. Execute python xml_to_csv.py; this will read all the XML files and create 2 CSV files in the data directory: train_labels.csv and test_labels.csv

Docker Container

If you installed TensorFlow using a Docker container (check my tutorial) and cloned the following repository (install git if you don't already have it):

git clone https://github.com/tensorflow/models.git 

You can copy a zip of the images folder, images.zip, and the Python script xml_to_csv.py into the container, tensorflow, using:

docker cp xml_to_csv.py tensorflow:/notebooks/models/research/object_detection/

docker cp images.zip tensorflow:/notebooks/models/research/object_detection/

Now all you need to do is unzip the images (install unzip if you don't already have it):

unzip images.zip


Then you connect to the running instance of the container using :

docker exec -it tensorflow /bin/bash

and execute :


python xml_to_csv.py


Generating TfRecord

Now, for the next step, based on the generated test_labels.csv and train_labels.csv we are going to create TensorFlow record files for each.

1. Copy the following generate_tfrecord.py file into your /notebooks/models/research/object_detection/   directory:

=========generate_tfrecord.py=========================================

"""
Usage:
  # From tensorflow/models/
  # Create train data:
  python generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=data/train.record --images_path=images/train

  # Create test data:
  python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=data/test.record --images_path=images/test
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import

import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('images_path', '', 'Path to Images')
FLAGS = flags.FLAGS


# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'cocacola':
        return 1
    else:
        # any label not listed above is ignored
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(os.getcwd(), FLAGS.images_path)
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':

    tf.app.run()


===================================================================


Note that it is the same file that is mentioned in the jackyle tutorial; however, I kept getting file-not-found exceptions because it was trying to load the images from the images directory directly instead of images/test or images/train. So I made some modifications so that the images directory for train and test can be passed as a flag.

2. Execute the following commands to make sure the object detection modules are on your PYTHONPATH:

cd /notebooks/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd object_detection


3. Then create the train record:

python generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=data/train.record --images_path=images/train

4. Create the test record :

python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=data/test.record --images_path=images/test


You should now have 2 files train.record and test.record under the /notebooks/models/research/object_detection/data   directory.
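As an optional sanity check, and assuming the TensorFlow 1.x API used elsewhere in this post, you can count the examples in each record file:

====count_records.py (sketch)====
import tensorflow as tf

# iterate over each TFRecord file and count the serialized examples
for record_file in ['data/train.record', 'data/test.record']:
    count = sum(1 for _ in tf.python_io.tf_record_iterator(record_file))
    print('{0}: {1} examples'.format(record_file, count))
====================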

Monday, February 26, 2018

Object Detection Tensorflow

To get started with object detection have a look at the following jupyter notebook:


Assuming that you have already set up your environment with TensorFlow (in my case it's a Docker container), you still need to execute the following instructions.

One issue I was getting was that the Jupyter notebook kept failing at the following line despite having followed all the instructions:

from object_detection.utils import ops as utils_ops


I discovered that this was due to the Python libraries not being available on the PYTHONPATH:

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

If, despite having executed the above in your container or your TensorFlow environment, the problem still persists in your Jupyter notebook, consider adding the paths directly, as can be seen below:


====Extract Jupyter Notebook=============================
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from timeit import default_timer as timer
import cv2

sys.path.append('/notebooks/models/research') # point to your tensorflow dir
sys.path.append('/notebooks/models/research/slim') # point to your slim dir

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from object_detection.utils import ops as utils_ops

if tf.__version__ < '1.4.0':
  raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')

============================================================




Note that I have also changed the default method to use OpenCV for faster image I/O, and added a timer to measure performance.

You can install OpenCV using:
sudo apt-get install python-opencv


==============Extract ========================
for image_path in TEST_IMAGE_PATHS:
  start = timer()

  #image = Image.open(image_path)
  image = cv2.imread(image_path)
 
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.

  #image_np = load_image_into_numpy_array(image)
  # cv2.imread returns BGR, so convert to RGB for the detection model and matplotlib
  image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)

  # Actual detection.
  output_dict = run_inference_for_single_image(image_np, detection_graph)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
  plt.figure(figsize=IMAGE_SIZE)
  plt.imshow(image_np)
  end = timer()
  duration = (end - start)

  print('Image: {0} took {1} to be processed'.format(image_path,duration))


===========================================





Tuesday, February 20, 2018

Installing an SSL certificate for NGinx on Ubuntu

I never thought that it would be this easy with a tool called Let's Encrypt, but basically if you want to add an SSL certificate to your Nginx server, all you have to do is follow the instructions here, that is:



- Update / Install the following packages:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx 
Then you ask certbot to install the certificate:
sudo certbot --nginx
Note: you will be asked for a domain name; IP addresses are not allowed.

Also, certificates need to be renewed; certbot can take care of that according to the documentation.
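For example, you can test the renewal process without actually renewing anything by running:

sudo certbot renew --dry-run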

Friday, February 16, 2018

TensorFlow running in Docker deployed on Ubuntu

This entry provides a view of the different steps required to set up TensorFlow on an Ubuntu environment by running it within a Docker container.

Whether you're starting off on machine learning with TensorFlow, or you're a veteran who wants to set up an infrastructure with a Docker container running TensorFlow, this article is for you.


Update (17/3/2018): For doing the setup using GPU instances on AWS check my article here 




Why do we want to use Docker?

  • We do not want our TensorFlow configuration to be messed up by other Python versions and configs from other applications, so we are isolating it
  • Installing from a Docker image is very practical and saves us a lot of time, so we can focus directly on our coding


Prerequisites :

1. Ubuntu VM; I am using one with 8 GB RAM and a 100 GB SSD
2. Docker CE installed; follow this link
3. Make sure you have TCP access to ports 8888 and 9000 if running on AWS (or other cloud platforms)

Once Docker has been installed, make sure that non-root users can also execute the docker command by following the instructions on the Docker site.
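In short, this typically boils down to adding your user to the docker group and logging out and back in:

sudo groupadd docker          # the group may already exist
sudo usermod -aG docker $USER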

Running Tensorflow Container

At the time of writing the current version of TensorFlow is 1.5, so to kick-start things just execute the following command, which is documented here.


docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow


Note that when the Docker container runs there will be some output containing a token URL; please copy it and keep it somewhere.

Using Tensorflow

When launching the run command for the TensorFlow Docker container above, a URL with a token looking something like this will be shown on your console:
http://your_ip_address:8888/?token=eXXXXXXXXXXXXXXXXXXXXXXXXX

Use this URL directly to log in to your Jupyter notebook.
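If you lose the token, you can usually recover it from the container logs, e.g.:

docker ps                    # find the container id
docker logs <container_id>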

Now this is great, but we also need to create a volume so that we can easily access files, for example ones pulled from a git repository.

So you might want to remove the Docker container that you just started in the last step and use the docker-compose file below.

Docker compose

Docker compose needs to be installed using the following instructions .

Here is a simple docker-compose file for running TensorFlow:

------------docker-compose-tensorflow.yml--------------------------------------

version: '3'
services:
  tensorflow:
    build: .
    image: gcr.io/tensorflow/tensorflow
    container_name: tensorflow
    volumes:
    - notebooks:/notebooks
    ports:
    - "8888:8888"
    - "6006:6006"
    environment:
    - IMAGE_SIZE=224
    - ARCHITECTURE=mobilenet_0.50_224
 
volumes:
  notebooks:


-----------------------------------------------------------------------------------------

The command to start the container is then simply:

docker-compose -f docker-compose-tensorflow.yml up

and to stop the container :

docker-compose -f docker-compose-tensorflow.yml down

Note that the docker-compose file contains a port mapping for 6006, which is used for TensorBoard, and a volume mapping to notebooks.

The notebooks volume ensures that you persist your notebooks across subsequent up/down cycles of your container. Otherwise you would lose all your content on each shutdown of the container.


Managing containers 

The best way I have found to manage my containers in a practical manner is through Portainer, and you can install it on Docker using the following commands (site here):

$docker volume create portainer_data
$docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Now log in to http://your_ip_address:9000/ to set a password for admin.


Container shell access 

If you want to connect to your running container's shell, and assuming it is called tensorflow as in our case, just do:

docker exec -it tensorflow /bin/bash 

Else you can also use Portainer as explained below .

Accessing Tensorflow

You should be able to log onto your Docker containers directly through Portainer by clicking on the container name, then on Console, and then clicking Connect with bash.



By default this will give you access to the /notebooks directory:



I found this feature of Portainer particularly useful, as it means that directly from the Portainer web app you can access the bash shell of your running container.

Also, keeping files under /notebooks allows you to view them through your Jupyter notebook instance.



Reverse Proxy (optional step)

Although not absolutely required, I find it useful to be able to access all the tools directly from port 80. You can install a reverse proxy in front of Portainer and the TensorFlow Jupyter notebook by installing Nginx.

Installation is pretty straightforward; please check the instructions here:

  • sudo apt-get update
  • sudo apt-get install nginx

Now, assuming you are using the default ports as mentioned above (else modify as required), you need to create a file with a *.conf extension, e.g. myreveseproxysettings.conf (or whatever suits you), then sudo cp (copy) this file to the /etc/nginx/conf.d directory.

Note that the nginx main config includes configuration files which have a *.conf extension within the /etc/nginx/conf.d directory.

Remember to replace My_IP_ADDRESS_OR_DOMAIN_NAME with your IP address or domain name.

==================myreveseproxysettings.conf=============================


server {
    listen       80;
    listen       [::]:80;
    server_name  My_IP_ADDRESS_OR_DOMAIN_NAME;
    # root         /usr/share/nginx/html;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location /portainer/ {
        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_pass "http://localhost:9000/";
    }

    location / {
        proxy_pass "http://localhost:8888/";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    error_page 404 /404.html;
        location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }
}


=================================================

The following commands are useful:

1. Start nginx

sudo systemctl start nginx

2. Stop nginx

sudo systemctl stop nginx

3. Check status nginx

systemctl status nginx


To get out of the status output, just type ":" followed by "q".