Have a spare Raspberry Pi or Jetson Nano (or an old laptop or Mac Mini) lying around? Have WiFi-connected security cams in your house (or a RasPi camera)? Want to get notified when someone exits or enters your main door? When someone waters your plants (or forgets to)? When your dog hasn't been fed in a while, or hasn't eaten? When someone left the fridge door open and forgot? Left the gas stove running? When birds are drinking from your dog's water bowl? Well, you're not alone, and you're at the right place. Here's what argos does:
- Take a video input (a Raspberry Pi camera if running on an RPi, an RTMP stream from a security cam, or a video file)
- Run a simple motion detection algorithm on the stream, applying minimum bounding-box size thresholds, masks, and negative masks (a minimal sketch of the idea follows this list)
- Run object detection on either the cropped frame where motion was detected, or the whole frame if needed, using the TensorFlow object detection API. Both TensorFlow 1 and 2 are supported, as well as TensorFlow Lite and custom models
- Serve a Flask web server that lets you watch the motion detection and object detection in action, and an MJPEG stream which can be configured as a camera in Home Assistant
- Object detection is also highly configurable to threshold or mask out false positives
- Object detection features an optional "detection buffer", which computes the average detection over a moving window of frames before reporting the maximum cumulative average detection (see the sketch after this list)
- Supports sending notifications to Home Assistant via MQTT or webhooks. Webhook notifications send the frame on which the detection was triggered, letting you create rich media notifications from it via the HA Android or iOS apps
- Pattern detection: both the motion detector and object detector send events to a queue which is monitored and analyzed by a pattern detector. You can configure your own "movement patterns", e.g. a person exiting or entering a door, or your dog going to the kitchen. It keeps a configurable history of states (motion detected inside a mask, outside a mask, object detected (e.g. person), etc.), and your movement patterns are pre-configured sequences of states which identify that movement (see the sketch after this list)
  - `door_detect.py` provides a movement pattern detector to detect whether someone is entering or exiting a door
- All of the above functionality is provided by running `stream.py`. There's also `serve.py`, which runs an object detection service that can be called remotely from a low-powered CPU device like a Raspberry Pi Zero W, which cannot run TensorFlow Lite on its own. The motion detector can still run on the Pi Zero, with only the object detection done remotely by calling this service, making for a distributed setup
- Architected to be highly concurrent and asynchronous (threads and queues connect all the components: the Flask server, motion detector, object detector, pattern detector, notifier, MQTT client, etc.)
- Has tools to help you generate masks, test and tune the detectors, etc.
- Every aspect of every detector can be tuned in the config files (which are purposefully kept as Python classes, not YAML), and every aspect is logged with colored output on the console so you can debug what is going on
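To make the detectors concrete, here's a minimal sketch of the kind of motion detection described above: frame differencing with a minimum contour-area threshold and a mask. This is an illustration, not argos's actual implementation, and all names and values are made up:

```
import cv2

MIN_AREA = 500  # illustrative minimum bounding-box area threshold

def detect_motion(prev_gray, gray, mask=None):
    """Return bounding boxes of regions that changed between two frames."""
    # absolute difference between consecutive grayscale frames
    delta = cv2.absdiff(prev_gray, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    if mask is not None:
        # zero out regions we don't care about (a "negative mask")
        thresh = cv2.bitwise_and(thresh, mask)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only boxes above the minimum area threshold
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_AREA]
```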
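The "detection buffer" can be sketched the same way: keep a moving window of per-label confidences and report the label with the highest cumulative average, so a single noisy frame doesn't trigger a notification. Again a hypothetical illustration, not the project's actual class:

```
from collections import defaultdict, deque

class DetectionBuffer:
    """Moving-window average of per-label detection confidences."""

    def __init__(self, window=10):
        self.window = window
        self.scores = defaultdict(lambda: deque(maxlen=window))

    def add(self, label, confidence):
        self.scores[label].append(confidence)

    def best(self):
        """Return (label, avg) for the label with the highest average."""
        averages = {label: sum(buf) / len(buf)
                    for label, buf in self.scores.items() if buf}
        return max(averages.items(), key=lambda kv: kv[1],
                   default=(None, 0.0))
```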
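And a sketch of the pattern detector idea: keep a bounded history of states and check whether a configured movement pattern (a sequence of states) occurs in order. The state names and matching rule here are invented for illustration; in argos the states come from the motion and object detectors via a queue, and `door_detect.py` wires up the door entry/exit patterns:

```
from collections import deque

# illustrative states; argos's actual state names will differ
MOTION_OUTSIDE_MASK = "motion_outside_mask"
MOTION_INSIDE_MASK = "motion_inside_mask"
PERSON_DETECTED = "person_detected"

# a "movement pattern" is a pre-configured sequence of states
EXITING_DOOR = [PERSON_DETECTED, MOTION_INSIDE_MASK, MOTION_OUTSIDE_MASK]

class PatternDetector:
    def __init__(self, history=20):
        self.history = deque(maxlen=history)

    def add_state(self, state):
        self.history.append(state)

    def matches(self, pattern):
        """True if `pattern` occurs in order (not necessarily contiguously)."""
        it = iter(self.history)
        return all(any(state == step for state in it) for step in pattern)
```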
On a Pi, as a systemd service
```
cd ~
git clone https://github.com/angadsingh/argos
sudo apt-get install python3-pip
sudo apt-get install python3-venv
pip3 install --upgrade pip
python3 -m venv argos-venv/
source argos-venv/bin/activate
pip install https://github.com/bitsy-ai/tensorflow-arm-bin/releases/download/v2.4.0/tensorflow-2.4.0-cp37-none-linux_armv7l.whl
pip install wheel
pip install -r argos/requirements.txt

# only required for tf2
git clone https://github.com/tensorflow/models.git
cd models/research/object_detection/packages/tf2
python -m pip install . --no-deps
```
Make a systemd service to run it automatically:
```
cd ~/argos
sudo cp resources/systemd/argos_serve.service /etc/systemd/system/
sudo cp resources/systemd/argos_stream.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable argos_serve.service
sudo systemctl enable argos_stream.service
sudo systemctl start argos_serve
sudo systemctl start argos_stream
```
See the logs:
```
journalctl --unit argos_stream.service -f
```
As a Docker container
You can use the following instructions to run argos as a Docker container (e.g. if you already use Docker on your RPi for hassio-supervised, you intend to run it on a Synology NAS which has Docker, or you just like Docker).
Install Docker (optional)
```
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
Run argos as a Docker container
Note: replace the Docker tag name below with the one for your CPU architecture:

| Docker tag | Device | Notes |
|---|---|---|
| `angadsingh/argos:armv7` | Raspberry Pi 2/3/4+ | |
| `angadsingh/argos:x86_64_gpu` | PC, Mac | TensorFlow with GPU support; run with Docker's GPU flag (typically `--gpus all`) |
Run `stream.py`:

```
docker run --rm -p 8081:8081 -v configs:/configs \
    -v /home/pi/detections:/output_detections \
    -v /home/pi/argos-ssh:/root/.ssh angadsingh/argos:armv7 \
    /usr/src/argos/stream.py --ip 0.0.0.0 --port 8081 \
    --config configs.your_config
```
Run `serve.py`:

```
docker run --rm -p 8080:8080 -v configs:/configs \
    -v /home/pi/upload:/upload angadsingh/argos:armv7 \
    /usr/src/argos/serve.py --ip 0.0.0.0 --port 8080 \
    --config configs.your_config --uploadfolder "/upload"
```
Make a systemd service to run it automatically. These services automatically download the latest Docker image and run it for you (note: you'll have to change the Docker tag inside the service file for your CPU architecture):
```
sudo wget https://raw.githubusercontent.com/angadsingh/argos/main/resources/systemd/argos_serve_docker.service -P /etc/systemd/system/
sudo wget https://raw.githubusercontent.com/angadsingh/argos/main/resources/systemd/argos_stream_docker.service -P /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable argos_serve_docker.service
sudo systemctl enable argos_stream_docker.service
sudo systemctl start argos_serve_docker
sudo systemctl start argos_stream_docker
```
See the logs:
```
journalctl --unit argos_serve_docker.service -f
journalctl --unit argos_stream_docker.service -f
```
`stream.py` – runs the motion detector, object detector (with detection buffer) and pattern detector:
```
stream.py --ip 0.0.0.0 --port 8081 --config configs.config_tflite_ssd_example
```
It serves a web API whose endpoints:

- show a web page with the real-time processing of the input video stream, and a separate video stream showing the object detector output
- show the current load on the system (the `status` endpoint)
- show the config
- let you edit any config parameter without restarting the service
- return the latest frame as a JPEG image (useful in the HA generic camera platform)
- stream an MJPEG video stream of the motion detector (useful in the HA generic camera platform)
- stream an MJPEG video stream of the object detector
`serve.py` – runs the remote object detection service:

```
serve.py --ip 0.0.0.0 --port 8080 --config configs.config_tflite_ssd_example --uploadfolder upload
```
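In the distributed setup, a client on the weaker device runs the motion detector locally and ships frames to this service for object detection. A rough sketch of such a client follows; the `/detect` route and the response shape are assumptions for illustration, so check `serve.py` for the actual API:

```
import cv2
import requests

SERVE_URL = "http://192.168.1.10:8080/detect"  # hypothetical endpoint

def detect_remotely(frame):
    """Send a frame to the remote object detection service."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    resp = requests.post(
        SERVE_URL,
        files={"file": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed to contain labels and confidences
```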
Home Assistant automations
- `ha_automations/notify_door_movement_at_entrance.yaml` – triggered by the pattern detector
- `ha_automations/notify_person_is_at_entrance.yaml` – triggered by the object detector
Both of these use HA webhooks. I used MQTT earlier, but it was too delayed and unreliable for my taste. The project still supports MQTT, though you'll have to create MQTT sensors in HA for the topics you're sending the notifications to.
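For reference, Home Assistant receives webhook triggers at `/api/webhook/<webhook_id>`, so a notifier can fire these automations with a plain HTTP POST along these lines (a hypothetical sketch, not argos's actual notifier code; the host and webhook id are made up):

```
import cv2
import requests

# illustrative host and webhook id
HA_WEBHOOK = "http://homeassistant.local:8123/api/webhook/argos_entrance"

def notify(frame, label):
    """POST the triggering frame to a Home Assistant webhook."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if ok:
        requests.post(
            HA_WEBHOOK,
            data={"label": label},
            files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
            timeout=5)
```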
`serve.py` and `stream.py` share some configuration for the object detection, but `stream.py` builds on top of that with a lot more configuration for the motion detector, object detection buffer, pattern detector, stream input, etc. The example configs document the meaning of all the parameters.
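Since the configs are plain Python, a config module is essentially a class whose attributes are the tuning parameters, roughly like the following. The attribute names below are invented for illustration; the example configs in the repo document the real parameters:

```
# configs/my_config.py — a hypothetical config module
class Config:
    # input: an RTMP stream, a Pi camera, or a video file
    input_mode = "rtmp"
    rtmp_url = "rtmp://192.168.1.20/live/cam1"

    # motion detector tuning
    min_box_area = 500                  # minimum bounding-box area
    negative_mask = [(0, 0, 640, 100)]  # region in which to ignore motion

    # object detector
    tf_model_path = "models/ssd_mobilenet.tflite"
    detection_threshold = 0.6
    detection_buffer_window = 10
```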
This runs at the following FPS with every component enabled:
| Device | Component | FPS |
|---|---|---|
| Raspberry Pi 4B | motion detector | 18 fps |
| Raspberry Pi 4B | object detector (tflite) | 5 fps |
I actually run multiple of these for different RTMP cameras, each at 1 fps (which is more than enough for all real-time home automation use cases).
This is my own personal project. It is not really written in a readable way with friendly abstractions, as that wasn't the goal. The goal was to solve my home automation problem quickly so that I could get back to real work. So feel free to pick and choose snippets of code as you like, or the whole solution if it fits your use case. No compromises were made in performance or accuracy, only in 'coding best practices'. I usually keep such projects private, but thought this is now meaty enough to be usable to someone else in ways I cannot imagine, so don't judge this project on its maturity or reuse-readiness level. Feel free to fork this project and make it an extendable framework if you have the time.
If you have any questions, feel free to raise a GitHub issue and I'll respond as soon as possible.
Special thanks to these resources on the web for helping me build this.