Saturday, March 17, 2018

Object Detection DL training with Tensorflow on GPU AWS

It turns out that if you want to train a model on, say, 5 different categories of images, you will need to make use of an EC2 instance on AWS that has GPU capabilities.

Otherwise, what happens with CPU-only EC2 instances is that they quickly run out of memory within the first dozen steps and the process gets killed.

For that you need at the very least a p2.xlarge, which is billed at around $0.9/hr (at the time of writing), so still quite expensive. So make sure this VM is turned off the moment it is not in use.

I tried setting up a vanilla p2.xlarge but ended up having issues with the NVIDIA drivers, so when you launch an EC2 instance from AWS, try to do so with an already configured AMI, e.g. the AWS Deep Learning AMI.

Follow these steps:
  1. Make sure the NVIDIA drivers are properly installed - if you're using the AWS Deep Learning AMI then chances are you don't need to worry about that
  2. Install Docker CE for Ubuntu
  3. Ensure that the post-installation instructions are also covered
  4. Install NVIDIA Docker using the instructions on its page.

You should then be able to launch your Docker container using the following commands:


docker volume create notebooks

nvidia-docker run -it --name tensorflow -v notebooks:/notebooks -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu


Once you have the container running, it's just a question of following my other articles to continue with the training:

1. Setup of environment , in my case using Docker
2. Labeling and creation of tfRecord
3. Training Custom Object Detection

So typically you would use GPU instances to train your models, and the less expensive CPU instances only to run tests against your frozen inference graph, for example from Jupyter.

Monday, March 12, 2018

Mauritius Heat Map for Real Estate Prices per m2 using Python

Yesterday I thought I would give it a try and see whether it is possible to create a heatmap of real estate prices per m2 in Mauritius, covering the prices of apartments, houses and villas.

Given that there is no readily available dataset to work against, I collected information from popular online sources that list real estate prices, in order to then plot them on a map of Mauritius.

I used only Python and a Jupyter notebook for the whole process of collecting, analysing and plotting the data.

The process of gathering data from web pages is called web scraping, and I will not give out the code for that part: if you are not careful enough you might cause a DoS attack against the websites, and you don't want that.

I used beautifulsoup4 to parse the retrieved HTML and extract specifically the data I was interested in into some pandas dataframes.
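To illustrate just the parsing step (not the scraping itself), here is a minimal sketch, assuming you already have an HTML page saved locally and that listings are marked up with hypothetical 'listing', 'price' and 'region' CSS classes - real sites will differ:

import pandas as pd
from bs4 import BeautifulSoup

# Parse a locally saved page; the class names below are made up for illustration
with open('listings_page.html') as f:
    soup = BeautifulSoup(f.read(), 'html.parser')

rows = []
for listing in soup.find_all('div', class_='listing'):
    price = listing.find('span', class_='price').get_text(strip=True)
    region = listing.find('span', class_='region').get_text(strip=True)
    rows.append({'region': region, 'price': price})

df = pd.DataFrame(rows)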

The first analysis based on the data showed the following:



The price in Rs per m2 is lower in the centre than in the North, and much more expensive than in the East of the island. Note, however, that 4000+ records were used, from which an average was computed for each area.

Below are the regions with the lowest prices per m2 in Mauritius:

And here are the regions with the highest prices per m2 in Mauritius:




Note that there are over 150 regions so I cannot show the full list, but this gives a good enough indication of prices, with the coastal regions obviously being more expensive.

Then I used the Google Maps API to get the coordinates for each region name; the function, geodataMapper, also handles the problem that if you make many successive calls to the Google Maps API (in this case 150+), at some point you will get an exception saying something like "too many retries":
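The original snippet was posted as a screenshot; below is a minimal sketch of what geodataMapper could look like, assuming the googlemaps client library and a placeholder API key - the retry handling here is a simple exponential back-off, not necessarily the exact logic from the post:

import time
import googlemaps

gmaps_client = googlemaps.Client(key='YOUR_API_KEY')  # placeholder key

def geodataMapper(region, max_attempts=5):
    # Return (lat, lng) for a region name, backing off when the API complains
    for attempt in range(max_attempts):
        try:
            results = gmaps_client.geocode('{}, Mauritius'.format(region))
            if results:
                location = results[0]['geometry']['location']
                return location['lat'], location['lng']
            return None, None
        except Exception:
            # e.g. quota errors / "too many retries": wait and try again
            time.sleep(2 ** attempt)
    return None, None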


To retrieve latitude and longitude data for all the regions, I used the following code, which iterates through a pandas dataframe containing the region and price per m2 and, for each region, calls the geodataMapper above; at the end it creates a CSV file containing region, price per m2, latitude and longitude:
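Again the original code was an image; a sketch of that loop, assuming a dataframe df with hypothetical 'region' and 'price_m2' columns, would be:

# df holds one row per region with columns 'region' and 'price_m2'
latitudes, longitudes = [], []
for region in df['region']:
    lat, lng = geodataMapper(region)
    latitudes.append(lat)
    longitudes.append(lng)

df['latitude'] = latitudes
df['longitude'] = longitudes
df.to_csv('region_prices_geocoded.csv', index=False)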


Once you have this data you can install gmaps, which is awesome for creating basic heatmaps and even has an extension for Jupyter notebooks.

Below is the code for it; all you need is a pandas dataframe loaded from the CSV you saved.
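(The original cell was a screenshot; this is a minimal reconstruction sketch using the jupyter-gmaps API and the hypothetical file and column names from above.)

import gmaps
import pandas as pd

gmaps.configure(api_key='YOUR_API_KEY')  # placeholder key

df = pd.read_csv('region_prices_geocoded.csv')
df = df.dropna(subset=['latitude', 'longitude'])

locations = df[['latitude', 'longitude']]
weights = df['price_m2']  # weight each point by its price per m2

fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
fig  # rendered inline by the jupyter-gmaps notebook extension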


And you should be able to see something like this appearing in your Jupyter notebook:


Which is pretty neat. I still need to work out how to create the heatmap based on Google Maps region boundaries instead of just one point per region, but I will leave that for another time.


Sunday, March 04, 2018

Training Custom Object Detection

The following activities have been done:

1. Setup of environment , in my case using Docker
2. Labeling and creation of tfRecord

Now we need to launch the actual training of TensorFlow on the custom object. I have been following the tutorial from pythonprogramming to do that.

Docker 

1. Start by downloading a copy of ssd_mobilenet_v1_coco_11_06_2017.tar.gz:

wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz

2. Copy this to your object_detection folder:

docker cp ssd_mobilenet_v1_coco_11_06_2017.tar.gz tensorflow:/notebooks/models/research/object_detection/

3. Then untar the model:
tar -xvzf ssd_mobilenet_v1_coco_11_06_2017.tar.gz

4. Now take the ssd_mobilenet_v1_pets.config below and copy it to the training directory:
docker cp ssd_mobilenet_v1_pets.config tensorflow:/notebooks/models/research/object_detection/training

======ssd_mobilenet_v1_pets.config =====
# SSD with Mobilenet v1, configured for the mac-n-cheese dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "${YOUR_GCS_BUCKET}" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 1
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
          anchorwise_output: true
        }
      }
      localization_loss {
        weighted_smooth_l1 {
          anchorwise_output: true
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 10
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/object-detection.pbtxt"
}

eval_config: {
  num_examples: 40
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "training/object-detection.pbtxt"
  shuffle: false
  num_readers: 1
}
====================================

5.  Copy the following object-detection.pbtxt to the object_detection/data directory:

docker cp object-detection.pbtxt tensorflow:/notebooks/models/research/object_detection/data

========object-detection.pbtxt============

item {
  id: 1
  name: 'object_label_name'
}

====================================

In my case I am identifying only one object, so there is only one item. Change object_label_name to the name of the label you defined when annotating your images.
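For example, if you were detecting two objects, the file would simply have two items with incrementing ids (the label names here are hypothetical and must match your annotations):

item {
  id: 1
  name: 'cocacola'
}
item {
  id: 2
  name: 'pepsi'
}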

6. Launch the training using the following command:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

Note: if there are any errors, try doing the following before executing train.py:
cd /notebooks/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd object_detection


You should start seeing steps being executed as below :



Tensorboard

To monitor progress on TensorBoard, use the following command; it can be run in another docker exec session (e.g. a separate PuTTY window):

tensorboard --logdir='training'


You should be able to see the board at http://your_DOMAIN_OR_IP:6006/ ; it took a few seconds before it actually showed up for me:








Object Detection: Labelling Images and Generating tfRecord

I made use of the tutorial from jackyle to label my images. Note that pythonprogramming also has the exact same tutorial :) !

Mind you, the hardest part is really finding the images; the rest goes more or less pretty fast.

Basically you use the labelImg tool to help with the labelling; it creates an XML annotation file for each image that you label.

I used the Windows binary, which can be found here, and did all the labelling from Windows itself.
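For reference, each annotation labelImg saves is a Pascal VOC style XML file along these lines (values are illustrative); note the order of the children of object - name first, bndbox fifth - which the xml_to_csv.py script below relies on:

<annotation>
  <folder>train</folder>
  <filename>coke_001.jpg</filename>
  <size>
    <width>800</width>
    <height>600</height>
    <depth>3</depth>
  </size>
  <object>
    <name>cocacola</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>120</xmin>
      <ymin>80</ymin>
      <xmax>360</xmax>
      <ymax>420</ymax>
    </bndbox>
  </object>
</annotation>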

Your directory structure under ROOT_DIR/models/research/object_detection should look like this:

|-xml_to_csv.py
|-data
|-images
   |- train
   |- test


Once you have labelled all your images you need to do the following :

1. Place 70% of your images + XML files in the images/train folder
2. Place 30% of your images + XML files in the images/test folder
3. Create an xml_to_csv.py file that looks like the one below:


==========xml_to_csv.py====================
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    """Collect every labelImg XML annotation under `path` into one dataframe."""
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),  # width
                     int(root.find('size')[1].text),  # height
                     member[0].text,                  # class label
                     int(member[4][0].text),          # xmin
                     int(member[4][1].text),          # ymin
                     int(member[4][2].text),          # xmax
                     int(member[4][3].text))          # ymax
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    # Produce data/train_labels.csv and data/test_labels.csv (data/ must exist)
    for directory in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), 'images/{}'.format(directory))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv('data/{}_labels.csv'.format(directory), index=None)
        print('Successfully converted xml to csv.')


if __name__ == '__main__':
    main()


========================================

4. Execute python xml_to_csv.py ; this will read all the XML files and create 2 CSV files in the data directory: train_labels.csv and test_labels.csv.

Docker Container

If you installed TensorFlow using a Docker container (check my tutorial) and cloned the following repository (install git if you don't already have it):

git clone https://github.com/tensorflow/models.git 

You can copy a zip of the images folder, images.zip, and the xml_to_csv.py script into the container, tensorflow, using:

docker cp xml_to_csv.py tensorflow:/notebooks/models/research/object_detection/

docker cp images.zip tensorflow:/notebooks/models/research/object_detection/

Then connect to the running instance of the container using:

docker exec -it tensorflow /bin/bash

From /notebooks/models/research/object_detection, unzip the images (install unzip if you don't already have it):

unzip images.zip

and execute:

python xml_to_csv.py


Generating TfRecord

Now the next step: based on the generated test_labels.csv and train_labels.csv, we are going to create a TensorFlow record file for each.

1. Copy the following generate_tfrecord.py file into your /notebooks/models/research/object_detection/   directory:

=========generate_tfrecord.py=========================================

"""
Usage:
  # From tensorflow/models/
  # Create train data:
  python generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=data/train.record --images_path=images/train

  # Create test data:
  python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=data/test.record --images_path=images/test
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import

import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('images_path', '', 'Path to Images')
FLAGS = flags.FLAGS


# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'cocacola':
        return 1
    else:
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(os.getcwd(), FLAGS.images_path)
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':

    tf.app.run()


===================================================================


Note that it's the same file as mentioned in the jackyle tutorial; however, I kept getting file-not-found exceptions because it was trying to load the images from the images directory directly instead of images/test or images/train. So I made some modifications so that the train and test images directories can be passed via the --images_path flag.

2. Execute the following commands to make sure the object detection modules are on your PYTHONPATH:

cd /notebooks/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd object_detection


3. Then create the train record:

python generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=data/train.record --images_path=images/train

4. Create the test record :

python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=data/test.record --images_path=images/test


You should now have 2 files, train.record and test.record, under the /notebooks/models/research/object_detection/data directory.
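As a quick sanity check (my own addition, not part of the original tutorial), you can count the examples in each record file using TensorFlow 1.x's record iterator:

import tensorflow as tf

for name in ['data/train.record', 'data/test.record']:
    count = sum(1 for _ in tf.python_io.tf_record_iterator(name))
    print('{}: {} examples'.format(name, count))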