Squats detector with OpenCV and Tensorflow


Artificial intelligence in SportTech

During the quarantine, we had limited opportunities for physical activity, and that was not good, especially for children.

But when I made my kid exercise, I met resistance and had to supervise the whole process closely.

It was fun, and it also gave me the idea to automate the process. Although it was overkill for the situation, the inspiration turned out to be irresistible.

Looking for a starting point, I picked squats: a basic movement with distinct stages and a large amplitude looked like the best candidate.

Data Collection

A Raspberry Pi with a camera is very handy for taking pictures at home with minimal effort.

OpenCV gets the images and writes them into the filesystem.
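
A minimal capture loop of this kind, as a sketch (the device index and output folder are my assumptions, not the author's code):

import cv2 as cv

cap = cv.VideoCapture(0)  # the Pi camera exposed as a video device
frame_no = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv.imwrite(f'frames/{frame_no:05d}.jpg', frame)  # 'frames/' must exist
    frame_no += 1
cap.release()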

Movement recognition

Initially, I was going to locate the person in the picture with image segmentation. But segmentation is a pretty heavy operation, especially with the Raspberry Pi's limited resources.

Segmentation also ignores the fact that we have a sequence of frames, not a single picture. The sequence has obvious regularities, and we should use them.

So I proceeded with background subtraction algorithms from OpenCV. Combining this approach with some heuristics eventually provided a reliable result.

Background subtraction

First, create a background subtractor:

backSub = cv.createBackgroundSubtractorMOG2()

And feed it with frames:

mask = backSub.apply(frame)

Finally, we get a picture with the body outline:

[Image: foreground mask showing the body outline]

Then dilate the image to highlight the contours.

mask = cv.dilate(mask, None, iterations=3)  # None means the default 3x3 kernel

Applying this algorithm to all frames gives pose masks. Then we are going to classify them as a stand, a squat, or nothing.

The next step is to cut a figure from the picture. OpenCV can find contours:

cnts, _ = cv.findContours(img, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x returns (contours, hierarchy)

The idea is that the biggest contour corresponds to the figure, more or less.

Unfortunately, the results are not stable: the biggest contour could wrap only the torso and miss the legs, for example.

Anyway, having a sequence of images helps a lot. Squats happen on the same spot, so we can assume that all the action goes on inside some area and that the area is stable.

The bounding rect can then be built iteratively, growing with the biggest contour when needed, as sketched below.
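
A minimal sketch of this accumulation, assuming the stable area is just the running union of per-frame bounding boxes (the names here are mine):

import cv2 as cv

figure_rect = None  # (x1, y1, x2, y2), accumulated over the whole sequence

def update_figure_rect(mask):
    # Grow the stable figure rect with this frame's biggest contour.
    global figure_rect
    cnts, _ = cv.findContours(mask, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return figure_rect
    x, y, w, h = cv.boundingRect(max(cnts, key=cv.contourArea))
    if figure_rect is None:
        figure_rect = (x, y, x + w, y + h)
    else:
        x1, y1, x2, y2 = figure_rect
        figure_rect = (min(x1, x), min(y1, y),
                       max(x2, x + w), max(y2, y + h))
    return figure_rect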

Here is an example:

  • the biggest contour is red
  • the contour bounding rect is blue
  • the figure bounding rect is green

[Image: a frame with the biggest contour in red, its bounding rect in blue, and the figure bounding rect in green]

Using this approach we can get a pose for further processing.

Classification

The bounding rectangle is then cut out of the image, padded to a square, and resized to 64x64.
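
A sketch of that preprocessing; centering the crop in a zero-padded square is my assumption about the details:

import cv2 as cv
import numpy as np

def to_square_64(mask, rect):
    # Crop the figure rect, pad it to a square, and resize to 64x64.
    x1, y1, x2, y2 = rect
    crop = mask[y1:y2, x1:x2]
    h, w = crop.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=crop.dtype)
    y0, x0 = (side - h) // 2, (side - w) // 2
    square[y0:y0 + h, x0:x0 + w] = crop
    return cv.resize(square, (64, 64))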

These are the masks that serve as the classifier input:

For stands:

[Image: sample stand masks]

For squats:

[Image: sample squat masks]

I used Keras + Tensorflow for the classification.

Initially, I started with the classic LeNet-5 model. It worked well, and after reading an article about LeNet-5 variations, I decided to play around with simplifying the architecture.

It turned out that a very simple CNN shows pretty much the same accuracy:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import SGD

model = Sequential([
    Convolution2D(8, (5, 5), activation='relu', input_shape=input_shape),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax')
])
model.compile(loss="categorical_crossentropy", optimizer=SGD(lr=0.01),
              metrics=["accuracy"])
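
Training is then the usual Keras call; the dataset variables and the validation split are placeholders here, not the author's code:

model.fit(x_train, y_train, epochs=30, validation_data=(x_val, y_val))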

Accuracy was 86% after 10 epochs, 94% after 20, and 96% after 30.

Longer training could cause overfitting, so it was time to try the model in real life.

Raspberry Pi

I am a big fan of the OpenCV DNN module and intended to use it in order to avoid the heavy Tensorflow setup.

Unfortunately, when I converted the Keras model to TF and ran it on the Raspberry, I got:

cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\dnn\src\dnn.cpp:562: error: (-2:Unspecified error) Can't create layer "flatten_1/Shape" of type "Shape" in function 'cv::dnn::dnn4_v20191202::LayerData::getLayerInstance'

This is a known issue on Stack Overflow, but the fix has not been released yet.

So there was no way but Tensorflow.

Google has supported TF on Raspberry for a couple of years already, so there are no tricks to getting it working.

TF contains adapters for Keras models, so no conversion is necessary.

Load the model:

import tensorflow as tf

with open(MODEL_JSON, 'r') as f:
    model_data = f.read()

model = tf.keras.models.model_from_json(model_data)
model.load_weights(MODEL_H5)
graph = tf.get_default_graph()  # TF 1.x graph API

And classify squat masks with it:

import cv2 as cv
import numpy as np

def classify_mask(path, f):
    img = cv.imread(path + f, cv.IMREAD_GRAYSCALE)
    img = np.reshape(img, [1, 64, 64, 1])
    with graph.as_default():
        c = model.predict_classes(img)
        # use len(), not truthiness: class 0 would make the array falsy
        return c[0] if len(c) else None

A classification call with a 64x64 input takes about 60–70 ms on the Raspberry, which is close enough to realtime for this purpose.

Raspberry app

Bringing all the parts above together into a single app:

Let's make a service using Flask with the following endpoints (a skeleton sketch follows the list):

  • GET / — an app page (more info below)
  • GET /status — get current status, squats and frames number
  • POST /start — start an exercise
  • POST /stop — finish the exercise
  • GET /stream — a video stream from the camera
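
A minimal Flask skeleton of such a service; the handler bodies and the gen_frames() helper are placeholders, not the author's implementation:

from flask import Flask, Response, jsonify

app = Flask(__name__)
state = {'active': False, 'squats': 0, 'frames': 0}

@app.route('/')
def index():
    return app.send_static_file('index.html')  # assumed app page

@app.route('/status')
def status():
    return jsonify(squats=state['squats'], frames=state['frames'])

@app.route('/start', methods=['POST'])
def start():
    state.update(active=True, squats=0, frames=0)
    return '', 204

@app.route('/stop', methods=['POST'])
def stop():
    state['active'] = False
    return '', 204

@app.route('/stream')
def stream():
    # MJPEG stream; gen_frames() would yield multipart JPEG chunks
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')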

I initialized Tensorflow on the service start. That is generally a bad idea, especially on a Raspberry: TF consumes a lot of resources, the service becomes slow to respond, and it could die on hitting the limits.

So normally I would start TF in a separate process and provide a channel for interprocess communication, but I used the simple way for this prototype.
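
One possible shape of that separate-process variant, purely as a sketch (classify_mask refers to the helper above; everything else here is my assumption):

import multiprocessing as mp

def tf_worker(inbox, outbox):
    import tensorflow as tf  # keep the heavy import inside the worker
    # ... load the model here, as in the loading snippet above ...
    for path, f in iter(inbox.get, None):  # None acts as the shutdown signal
        outbox.put(classify_mask(path, f))

inbox, outbox = mp.Queue(), mp.Queue()
mp.Process(target=tf_worker, args=(inbox, outbox), daemon=True).start()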

And there is the already mentioned web app to control the squats activity. The app can:

  • show a live video from the camera
  • start/stop an exercise
  • count squats and frames

[Image: the web app page with live video and counters]

When an exercise is started, the service writes the pictures into the filesystem.

It is convenient to use them to train the neural network, but normally they are not needed.

The service handles the sequence of pictures, classifies them with TF, and increases the squat counter whenever a Stand → Squat → Stand pattern is met, as sketched below.
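
A tiny state machine implementing that counter; the numeric class labels are my assumption:

STAND, SQUAT = 0, 1  # assumed label values; a third class would be 'nothing'

def count_squats(labels):
    squats = 0
    saw_stand = saw_squat = False
    for label in labels:
        if label == STAND:
            if saw_stand and saw_squat:  # completed Stand → Squat → Stand
                squats += 1
                saw_squat = False
            saw_stand = True
        elif label == SQUAT and saw_stand:
            saw_squat = True
    return squats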

Labeling tool

There is a simple labeling tool for manual classification: a GUI app built with Python + OpenCV.

The tool shows the pictures with the main contour and bounding rectangles and waits for a key press: S (Stand), Q (sQuat), or N (Nothing), then automatically moves the picture into the corresponding subfolder.

The labeled subfolders should then be copied into the Keras model input folder and the training process repeated.
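
The core of such a tool fits in a few lines; this sketch assumes an 'unlabeled' source folder and per-class target subfolders:

import os
import shutil
import cv2 as cv

KEYS = {ord('s'): 'stand', ord('q'): 'squat', ord('n'): 'nothing'}

for name in os.listdir('unlabeled'):
    img = cv.imread(os.path.join('unlabeled', name))
    cv.imshow('label me', img)
    key = cv.waitKey(0) & 0xFF
    if key in KEYS:
        shutil.move(os.path.join('unlabeled', name),
                    os.path.join(KEYS[key], name))
    elif key == 27:  # Esc quits
        break
cv.destroyAllWindows()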

Platforms

I run the app on a Raspberry Pi, but nothing prevents using any Linux environment with Python, OpenCV, and a camera.

Problems

As is, it could be accepted as an MVP, but there are a lot of things to improve.

  • Refine background removal. Shadows generate noisy blobs that confuse the classifier.
  • Collect more data for the neural network.
  • Review the classifier architecture. The simplest one shows satisfying results now but has its own limits.
