Detect Oil Storage Tankers with PyTorch


In this post we will use PyTorch to detect oil storage tankers on satellite images. I briefly explain the concepts behind the tools used and show the most important code snippets, while omitting supporting code (like extracting the zipped training data). For the full code and a live demo, visit the Google Colab notebook (Open Public Notebook) created for this article.

Note: In order to follow along with this tutorial, you will need to download the Kaggle dataset. For that you need a Kaggle account and a generated API key (kaggle.json), which is used to authenticate the download requests from Python.

Object Detection with PyTorch

Object detection is a computer vision technique for locating instances of objects in images or videos. Object detection algorithms typically leverage machine learning or deep learning to produce meaningful results. As an example of object detection we are working on Oil Storage Tank detection using Airbus SPOT satellite imagery.

We use Megvii’s recent YOLOX architecture and implementation in this Tech Article. [YOLOX on Github, YOLOX on Arxiv]

Why YOLOX

YOLOX is the high-performance detector of the YOLO series. It is a single-stage object detector with a DarkNet53 backbone that modifies YOLOv3 in numerous ways. The coupled YOLO head is replaced by decoupled heads for the classification and regression tasks. This matters less in our case because we only have one class, but it can be valuable in multi-class detection.

In addition, the anchor mechanism used from YOLOv3 to YOLOv5 has been eliminated, making YOLOX anchor-free; anchors cause a number of problems in practice. The anchor-free design significantly reduces the number of design parameters that need heuristic tuning and removes the need for a prior dataset analysis to select typical anchors (although identifying typical anchors would probably not be difficult for this single-class task). The accompanying figure shows the differences between the YOLO series (v3 to v5) and YOLOX clearly. For details you can check out this article.

In the YOLOX head, for each level of the FPN feature we first adopt a 1 × 1 conv layer to reduce the feature channels to 256 and then add two parallel branches, each with two 3 × 3 conv layers, for the classification and regression tasks respectively. An IoU branch is added on the regression branch.
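To make this concrete, a decoupled head for a single feature level can be sketched in PyTorch roughly as follows. This is a simplified illustration rather than Megvii's actual implementation; only the channel sizes and branch layout follow the description above.

import torch.nn as nn

class DecoupledHead(nn.Module):
    """Simplified YOLOX-style decoupled head for one FPN level."""
    def __init__(self, in_channels, num_classes=1, hidden=256):
        super().__init__()
        # 1 x 1 conv reduces the feature channels to 256
        self.stem = nn.Conv2d(in_channels, hidden, kernel_size=1)
        # classification branch: two 3 x 3 convs
        self.cls_convs = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
        )
        # regression branch: two 3 x 3 convs
        self.reg_convs = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
        )
        self.cls_pred = nn.Conv2d(hidden, num_classes, 1)  # class scores
        self.reg_pred = nn.Conv2d(hidden, 4, 1)            # box regression
        self.obj_pred = nn.Conv2d(hidden, 1, 1)            # IoU / objectness branch

    def forward(self, x):
        x = self.stem(x)
        cls_feat = self.cls_convs(x)
        reg_feat = self.reg_convs(x)
        return self.cls_pred(cls_feat), self.reg_pred(reg_feat), self.obj_pred(reg_feat)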

The YOLOX source can be installed as a Python package, and its tools can then be called from the command line (or from a notebook cell with the ! magic character). The main reasons to train, test and run inference from the command line rather than inside the notebook are explained in this article.

Detect Oil Storage Tankers

First of all, we have to import all the necessary python modules and packages.

import os
import pandas as pd
import ast
from PIL import Image
import shutil as sh
from pathlib import Path
import json
from sklearn.model_selection import train_test_split

First we install the Kaggle package and put the API key in place so that the dataset can be downloaded. After that we clone the YOLOX repository from Megvii's GitHub and install its requirements; the dev dependencies are skipped, as they are not needed here.

# Install the Kaggle CLI and put the (previously uploaded) kaggle.json in place
! pip install kaggle
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
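With the credentials in place, the dataset can be downloaded and YOLOX installed. The commands below sketch this step; the Kaggle dataset slug is an assumption, so check the dataset page for the exact identifier.

# Download and unzip the Airbus Oil Storage Detection dataset
# (dataset slug assumed; verify it on the Kaggle dataset page)
! kaggle datasets download -d airbusgeo/airbus-oil-storage-detection-dataset
! unzip -q airbus-oil-storage-detection-dataset.zip -d dataset

# Clone YOLOX and install its requirements (dev dependencies skipped)
! git clone https://github.com/Megvii-BaseDetection/YOLOX.git
! cd YOLOX && pip install -r requirements.txt && pip install -v -e .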

Load Data (Source satellite images)

As described in the Airbus Oil Storage Detection Dataset page, the images folder contains 98 extracts of SPOT imagery at roughly 1.2 meters resolution. Each image is stored as a JPEG file of size 2560 x 2560 pixels (i.e. 3 kilometres on ground). The locations are selected worldwide.

# Location of dataset
DATASET_DIR = Path('dataset/')
 
# Read list of JPEG images
img_list = list(DATASET_DIR.glob('images/*.jpg'))
 
# Check number of JPEG images in source dataset folder
print("Found {} JPEG image files in {}".format(len(img_list), DATASET_DIR))

Our dataset path is kept in DATASET_DIR and the paths of the JPEG images are collected in img_list. We can see that the dataset contains a total of 98 JPEG image files.

Data Preparation

Adding bounding box to dataframe

We need to add the bounding box information to the dataframe. A bounding box is a rectangle around the detected object. The annotation file describes each bounding box by the coordinates of two points (the top-left and bottom-right corners). To analyse the dataset, we also compute the width and height of each bounding box and its aspect ratio.

def literal_eval(x):
    # Parse the string representation of the bounds into a Python list
    return ast.literal_eval(x.rstrip('\r\n'))

# Read the annotations; the 'bounds' column is parsed with the converter above
df = pd.read_csv(DATASET_DIR / "annotations.csv",
                 converters={'bounds': literal_eval})
df.head(10)
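The box dimensions mentioned above can be derived directly from the parsed bounds column. The snippet below is a small sketch that assumes each entry holds the corner coordinates in the order (xmin, ymin, xmax, ymax):

# Derive box width, height and aspect ratio from the corner coordinates
df['box_width'] = df['bounds'].apply(lambda b: b[2] - b[0])
df['box_height'] = df['bounds'].apply(lambda b: b[3] - b[1])
df['aspect_ratio'] = df['box_width'] / df['box_height']
df[['box_width', 'box_height', 'aspect_ratio']].describe()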

Removing duplicate data

The annotation file contains one row per bounding box, so the same image_id appears several times. We take the unique image ids and split them into 80% training and 20% validation images.
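If the annotation table should also contain exact duplicate rows, they can be dropped before the split. This one-liner is a suggestion and not taken from the original notebook; the bounds lists are compared via their string form because pandas cannot hash lists:

# Drop exact duplicate annotation rows, if any, before splitting
df = df[~df.astype(str).duplicated()].reset_index(drop=True)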

image_ids = df['image_id'].unique()
train_image_ids, validation_image_ids = train_test_split(image_ids, test_size=0.2, random_state=0)
print(validation_image_ids)

Resize Data

As a preprocessing step we resize all images to 640 x 640 pixels. To train YOLOX on our dataset we also have to convert it into the expected format: since YOLOX is trained on the COCO dataset, our dataset needs to follow the COCO dataset layout as well.

# YOLOX needs COCO dataset format
HOME_DIR = '../' 
DATASET_PATH = 'dataset/images'
!mkdir {HOME_DIR}dataset
!mkdir {HOME_DIR}{DATASET_PATH}
!mkdir {HOME_DIR}{DATASET_PATH}/train2017
!mkdir {HOME_DIR}{DATASET_PATH}/val2017
!mkdir {HOME_DIR}{DATASET_PATH}/annotations

We put the resized images into the folders train2017/ and val2017/.

# Source images are 2560 x 2560 pixels (see the dataset description above)
IMAGE_WIDTH = 2560
IMAGE_HEIGHT = 2560

# YOLOX-s default input (640, 640)
RESIZE_WIDTH = 640
RESIZE_WIDTH_RATE = RESIZE_WIDTH / IMAGE_WIDTH
RESIZE_HEIGHT = 640
RESIZE_HEIGHT_RATE = RESIZE_HEIGHT / IMAGE_HEIGHT
 
for img_path in img_list:
    folder = 'val' if img_path.stem in validation_image_ids else 'train'
    pil_img = Image.open(img_path, mode='r')
    resized_img = pil_img.resize((RESIZE_WIDTH, RESIZE_HEIGHT))
    resized_img.save(f'{HOME_DIR}{DATASET_PATH}/{folder}2017/{img_path.stem}.jpg')
print(f'Number of training files: {len(os.listdir(f"{HOME_DIR}{DATASET_PATH}/train2017/"))}')
print(f'Number of validation files: {len(os.listdir(f"{HOME_DIR}{DATASET_PATH}/val2017/"))}')

Now we convert the annotations.csv file into COCO JSON files: the corner coordinates (xmin, ymin, xmax, ymax) of each bounding box are rewritten in COCO's [x, y, width, height] format and saved as train.json and valid.json in the annotations folder. After the conversion, train.json contains 11223 bounding box annotations and valid.json contains 2369.
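A simplified version of that conversion step is sketched below (the helper name and the category name are illustrative). It assumes the bounds hold (xmin, ymin, xmax, ymax) in the original 2560 x 2560 pixel coordinates and rescales them to the resized 640 x 640 images:

# Simplified annotations.csv -> COCO JSON conversion (illustrative sketch)
def to_coco_json(ids, out_file):
    images, annotations = [], []
    ann_id = 1
    for img_idx, image_id in enumerate(ids, start=1):
        images.append({
            'id': img_idx,
            'file_name': f'{image_id}.jpg',
            'width': RESIZE_WIDTH,
            'height': RESIZE_HEIGHT,
        })
        for bounds in df.loc[df['image_id'] == image_id, 'bounds']:
            # Scale the corners to the resized image, then convert to
            # COCO's [x, y, width, height] layout
            x1, y1 = bounds[0] * RESIZE_WIDTH_RATE, bounds[1] * RESIZE_HEIGHT_RATE
            x2, y2 = bounds[2] * RESIZE_WIDTH_RATE, bounds[3] * RESIZE_HEIGHT_RATE
            annotations.append({
                'id': ann_id,
                'image_id': img_idx,
                'category_id': 1,                   # single class
                'bbox': [x1, y1, x2 - x1, y2 - y1],
                'area': (x2 - x1) * (y2 - y1),
                'iscrowd': 0,
            })
            ann_id += 1
    coco = {'images': images,
            'annotations': annotations,
            'categories': [{'id': 1, 'name': 'oil-storage-tank'}]}
    with open(out_file, 'w') as f:
        json.dump(coco, f)

to_coco_json(train_image_ids, f'{HOME_DIR}{DATASET_PATH}/annotations/train.json')
to_coco_json(validation_image_ids, f'{HOME_DIR}{DATASET_PATH}/annotations/valid.json')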

Training Model

Before training, YOLOX bundles everything involved in an experiment into a single Exp file: the model settings, the training settings and the testing settings. We have created a custom configuration file for our dataset and replaced the COCO classes with our single custom class. Except for special cases, it is recommended to initialise the model from the COCO pretrained weights.
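A minimal version of such an Exp file could look roughly like this. It is a sketch: apart from num_classes = 1, which follows from our single-class dataset, the attribute values are illustrative.

# airbus_config.py: minimal custom Exp file (values are illustrative)
import os
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.depth = 0.33                     # YOLOX-s depth multiplier
        self.width = 0.50                     # YOLOX-s width multiplier
        self.num_classes = 1                  # only one class: oil storage tank
        self.data_dir = '../dataset/images'   # folder with train2017/, val2017/, annotations/
        self.train_ann = 'train.json'
        self.val_ann = 'valid.json'
        self.max_epoch = 30
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split('.')[0]

MODEL_FILE in the command below points to the downloaded COCO pretrained checkpoint (for YOLOX-s, the yolox_s.pth file from the YOLOX releases page). With the Exp file and the pretrained weights in place, we can start training our model: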

!python tools/train.py \
    -f airbus_config.py \
    -d 1 \
    -b 8 \
    --fp16 \
    -o \
    -c {MODEL_FILE}

After training, the checkpoint with the best validation performance is saved as best_ckpt.pth in the experiment's output folder.

Testing Model

Now that the training phase is complete, we want to test the model on an image. We can visually check the predictions and conclude that our detector is already pretty good. A formal evaluation on a held-out test dataset should still be done, though.

#TEST_IMAGE_PATH = f"{DATASET_DIR}/extras/df5ec618-c1f3-4cfe-88b1-86799d23c22d.jpg"
TEST_IMAGE_PATH = f"{DATASET_DIR}/extras/b8c0e212-3669-4ff8-81a5-32191d456f86.jpg"
# Resize the test image to the model input size and save it for the demo script
pil_test_img = Image.open(TEST_IMAGE_PATH, mode='r')
resized_test_img = pil_test_img.resize((RESIZE_WIDTH, RESIZE_HEIGHT))
resized_test_img.save('test.jpg')

MODEL_PATH = "./YOLOX_outputs/airbus_config/best_ckpt.pth"
!python ./tools/demo.py image \
    -f airbus_config.py \
    -c {MODEL_PATH} \
    --path ./test.jpg \
    --save_result \
    --device gpu

With the --save_result flag set, the images with the predicted bounding boxes are written to the experiment's output folder. If you have a steady feed of satellite images, you could now use this model to detect oil storage tankers in your images and get their precise locations.

Do you want to work on similar tasks? Come and join Ginkgo Analytics!

