This code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path.

Level | Topic | Type
100 | Introduction to computer vision | Article
101 | Introduction to IBM Maximo Visual Inspection | Article
201 | Build and deploy an IBM Maximo Visual Inspection model and use it in an iOS app | Tutorial
202 | Locate and count items with object detection | Code pattern
203 | Object tracking in video with OpenCV and Deep Learning | Code pattern
301 | Validate computer vision deep learning models | Code pattern
302 | Develop analytical dashboards for AI projects with IBM Maximo Visual Inspection | Code pattern
303 | Automate visual recognition model training | Code pattern
304 | Load IBM Maximo Visual Inspection inference results in a dashboard | Code pattern
305 | Build an object detection model to identify license plates from images of cars | Code pattern
306 | Glean insights with AI on live camera streams and videos | Code pattern

Summary

Whether you are counting cars on a road or products on a conveyor belt, there are many use cases for computer vision with video. With video as input, you can use automatic labeling to create a better classifier with less manual effort. This code pattern shows you how to create and use a classifier to identify objects in motion and then track and count the objects as they enter designated regions of interest.

Description

Whether it is car traffic, foot traffic, or products on a conveyor belt, there are many applications for keeping track of potential customers, actual customers, products, or other assets. With video cameras nearly everywhere, a business can extract useful information from their footage by applying a little computer vision. Analyzing video this way is far more practical than older methods, such as installing dedicated counting hardware or having a person tally vehicles by hand.

This code pattern explains how to create a video car counter using the IBM Maximo Visual Inspection Video Data Platform, OpenCV, and a Jupyter Notebook. You’ll use a little manual labeling and a lot of automatic labeling to train an object classifier to recognize cars on a highway. You’ll load another car video into a Jupyter Notebook where you’ll process the individual frames and annotate the video.

You’ll use the deployed model for inference to detect cars on a sample of frames taken at a regular interval, and you’ll use OpenCV to track the cars from frame to frame between inference calls. In addition to counting the cars as they are detected, you’ll also count them as they cross a “finish line” for each lane, and you’ll show cars per second.
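To make that detect-then-track loop concrete, here is a minimal sketch of the idea: run inference on every Nth frame and let OpenCV trackers carry the bounding boxes in between. The video file name, the sampling interval, the KCF tracker choice, and the detect_cars() helper (standing in for a call to the deployed IBM Maximo Visual Inspection model) are all illustrative assumptions; the notebook in the README is more complete.

import cv2

DETECT_EVERY = 10  # assumption: run inference on every 10th frame; tune for your video

def detect_cars(frame):
    """Placeholder for a call to the deployed IBM Maximo Visual Inspection model.
    It should return a list of (x, y, w, h) bounding boxes for the cars it finds."""
    raise NotImplementedError

cap = cv2.VideoCapture("cars_sample.mp4")   # hypothetical input video
trackers = []
frame_no = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % DETECT_EVERY == 0:
        # Refresh the trackers from a fresh round of inference
        trackers = []
        boxes = detect_cars(frame)
        for (x, y, w, h) in boxes:
            t = cv2.TrackerKCF_create()     # KCF tracker; requires opencv-contrib-python
            t.init(frame, (x, y, w, h))
            trackers.append(t)
    else:
        # Between inference calls, let each tracker move its box to the new frame
        boxes = [box for ok, box in (t.update(frame) for t in trackers) if ok]
    # ... annotate and count using boxes here ...
    frame_no += 1

cap.release()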

When you’ve completed this code pattern, you will understand how to:

Use automatic labeling to create an object detection classifier from a video
Process frames of a video using a Jupyter Notebook, OpenCV, and IBM Maximo Visual Inspection
Detect objects in video frames with IBM Maximo Visual Inspection
Track objects from frame to frame with OpenCV
Count objects in motion as they enter a region of interest
Annotate a video with bounding boxes, labels, and statistics (see the sketch after this list)
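The annotation and counting items come down to a few OpenCV drawing calls plus a line-crossing test. Here is a minimal sketch, assuming a single lane with a made-up finish-line coordinate and tracker-index identities; the real notebook keeps more careful per-car state and per-lane counts.

import cv2

FINISH_LINE_Y = 400   # assumption: y coordinate of the lane's "finish line" in pixels
count = 0             # running count of cars that have crossed the line

def annotate(frame, boxes, prev_cy):
    """Draw boxes, labels, and stats; count a car when its centre crosses the line.
    prev_cy maps tracker index to the previous centre y, a simplification that
    assumes the box order is stable between frames."""
    global count
    for i, (x, y, w, h) in enumerate(boxes):
        cy = y + h // 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "car", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        if i in prev_cy and prev_cy[i] < FINISH_LINE_Y <= cy:
            count += 1
        prev_cy[i] = cy
    cv2.line(frame, (0, FINISH_LINE_Y), (frame.shape[1], FINISH_LINE_Y), (0, 0, 255), 2)
    cv2.putText(frame, "count: %d" % count, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    return frame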

Flow

Upload a video using the IBM Maximo Visual Inspection web UI.
Use automatic labeling and train a model.
Deploy the model to create an IBM Maximo Visual Inspection inference API (a request sketch follows this list).
Use a Jupyter Notebook to detect, track, and count cars in a video.
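Deploying the model exposes it behind an HTTP endpoint that the notebook calls for each sampled frame. A minimal sketch of such a call, which could back the detect_cars() placeholder in the earlier snippet, is below. The host name, endpoint path, and response fields are assumptions, so check the API documentation for your IBM Maximo Visual Inspection instance.

import requests

# Hypothetical values: substitute your instance host and the deployed model's API ID
MVI_INFERENCE_URL = "https://mvi.example.com/api/dlapis/<deployed-model-id>"

def detect_cars_via_api(frame_path):
    """Send one frame image to the deployed model and return its detections.
    The "files" form field and the "classified" response key are assumptions based
    on typical deployments; verify them against your instance's API documentation."""
    with open(frame_path, "rb") as f:
        # verify=False is common with self-signed certificates; prefer a proper CA bundle
        resp = requests.post(MVI_INFERENCE_URL, files={"files": f}, verify=False)
    resp.raise_for_status()
    return resp.json().get("classified", [])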

Instructions

Find the detailed steps for this pattern in the README. The steps will show you how to:

Create a data set in Video Data Platform.
Train and deploy the model.
Automatically label objects.
Run the notebook.
Create the annotated video (a minimal OpenCV sketch follows this list).
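For the last step, writing the annotated frames back out as a video is a standard OpenCV operation. A minimal sketch, assuming hypothetical input and output file names and the mp4v codec (codec availability varies by platform):

import cv2

cap = cv2.VideoCapture("cars_sample.mp4")                 # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")                  # assumption: mp4v is available
out = cv2.VideoWriter("cars_annotated.mp4", fourcc, fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... detect, track, and annotate the frame here (see the earlier sketches) ...
    out.write(frame)

cap.release()
out.release()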

Conclusion

This code pattern showed how to create and use a classifier to identify objects in motion and then track and count the objects as they enter designated regions of interest. The code pattern is part of the Getting started with IBM Maximo Visual Inspection learning path. To continue the series and learn about more IBM Maximo Visual Inspection features, look at the next code pattern, Validate computer vision deep learning models.
