If you’ve been following along, you know I’ve been working with AWS Rekognition to detect people in security camera footage.
I have previous posts that discuss the results.
I’m now running the images through OpenCV using its pre-trained HOG + Linear SVM person detector. The picture in this post is an example of the output from OpenCV with a person detected and a bounding box drawn.
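If you want to try it yourself, a minimal sketch of that detector with OpenCV’s Python bindings looks like this; the file names and detection parameters are illustrative defaults rather than the exact values from my pipeline.

```python
import cv2

# Initialize the HOG descriptor with OpenCV's built-in people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("camera_frame.jpg")  # placeholder file name

# Detect people; these winStride/padding/scale values are common
# starting points, not tuned values
rects, weights = hog.detectMultiScale(
    image, winStride=(4, 4), padding=(8, 8), scale=1.05)

# Draw a bounding box around each detection
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("camera_frame_annotated.jpg", image)
```

Smaller winStride and scale values catch more detections at the cost of speed, so there’s a tuning trade-off here.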
Over the next day or two I’ll start processing all the images with both Rekognition and OpenCV. I’ll also be capturing the results in Neo4j (where I’m already capturing the Rekognition object labels) to allow for comparative analysis.
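For illustration, the writes to Neo4j might look something like the sketch below, using the official Python driver; the graph model (Image and Label nodes joined by a HAS_LABEL relationship) and the connection details are assumptions, not the actual schema.

```python
from neo4j import GraphDatabase

# Connection details are placeholders
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # Record that a given detector found a Person in a given image
    session.run(
        "MERGE (i:Image {name: $image}) "
        "MERGE (l:Label {name: 'Person'}) "
        "MERGE (i)-[r:HAS_LABEL {source: $source}]->(l) "
        "SET r.confidence = $confidence",
        image="camera_frame.jpg", source="OpenCV", confidence=92.5)

driver.close()
```

Using MERGE keeps the image and label nodes unique, so re-processing the same image just updates the relationship rather than duplicating nodes.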
Last week I wrote a post evaluating AWS Rekognition’s accuracy at finding people in images. The analysis was performed using the Neo4j graph database.
As I noted in the original post, Rekognition is either very confident it has identified a person or not confident at all, which leads to an enormous number of false negatives. Today I looked at the distribution of confidence for the Person label over the last 48 hours.
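For anyone curious how that distribution was pulled, a bucketed Cypher query along these lines would do it; this sketch assumes the illustrative Image/Label model from the earlier sketch, plus a capturedAt timestamp on each image node, none of which is the exact schema from the posts.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # Count Rekognition Person detections from the last 48 hours,
    # grouped into 10-point confidence buckets
    result = session.run(
        "MATCH (i:Image)-[r:HAS_LABEL {source: 'Rekognition'}]->"
        "(:Label {name: 'Person'}) "
        "WHERE i.capturedAt > datetime() - duration('PT48H') "
        "RETURN toInteger(r.confidence / 10) * 10 AS bucket, "
        "count(*) AS detections ORDER BY bucket")
    for record in result:
        print(f"{record['bucket']}-{record['bucket'] + 9}%: "
              f"{record['detections']}")

driver.close()
```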
You be the judge:
As an extension of my series of posts on handling IoT security camera images with a Serverless architecture, I’ve integrated AWS Rekognition into the pipeline.
Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, and faces in images. You can also search and compare faces. Rekognition’s API enables you to quickly add sophisticated deep learning-based visual search and image classification to your applications.
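Calling Rekognition from Python takes only a few lines with boto3. Here’s a minimal sketch; the bucket and key names are placeholders, and MinConfidence simply filters out low-scoring labels.

```python
import boto3

rekognition = boto3.client("rekognition")

# DetectLabels on an image already stored in S3; bucket and key
# are placeholders
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "security-camera-images",
                        "Name": "frame.jpg"}},
    MinConfidence=50)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```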
My goal is to identify images that contain a person, to limit the number of images someone has to browse when reviewing the security camera alarms (the cameras detect motion, so you often get images that are just wind moving bushes or headlights on a wall).
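Putting it together, the person filter can be a small predicate like this sketch; the helper name and the 50% threshold are illustrative assumptions, not tuned values.

```python
import boto3

rekognition = boto3.client("rekognition")

def has_person(bucket, key, min_confidence=50.0):
    """Return True if Rekognition reports a Person label above the
    given confidence; the threshold here is an arbitrary example."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence)
    return any(label["Name"] == "Person" for label in response["Labels"])

# Only surface images worth a human's attention
if has_person("security-camera-images", "frame.jpg"):
    print("flag for review")
```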