
What makes us special?

Change recognition on the edge


Argos triggers only when a new object of interest enters the camera's field of view. We have found that this approach produces far fewer false-positive clips than the motion-based triggers used by Amazon's Ring and Google Nest, which fire on any movement in frame.
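The idea can be sketched in a few lines. This is an illustrative helper, not Argos's actual implementation: track which object classes are already in view and fire only when a class that was absent in the previous frame appears.

```python
# Hypothetical change-recognition trigger: compare the set of detected
# object classes between consecutive frames and fire only on new arrivals.

def new_objects(prev_frame_labels, curr_frame_labels):
    """Return the object classes that entered the field of view this frame."""
    return set(curr_frame_labels) - set(prev_frame_labels)

def should_trigger(prev_frame_labels, curr_frame_labels):
    """True only if something new appeared.

    Ongoing motion from objects already in view (a parked car, a swaying
    branch) re-detects the same classes and triggers nothing; a newly
    arrived person or vehicle does.
    """
    return bool(new_objects(prev_frame_labels, curr_frame_labels))
```

For example, `should_trigger(["car"], ["car", "person"])` fires because a person entered the scene, while `should_trigger(["car"], ["car"])` does not. A real system would also track instance counts and detection stability, which this class-level sketch omits.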

Customized YOLOv5

Using the YOLOv5 framework, we created a customized computer vision model that detects only objects of interest. Instead of using the pre-trained YOLOv5 model, which detects 80 different object classes, we trained our own model on a smaller list of objects. This keeps Argos from triggering on objects that are of no use to the user.

Object List

  • Ambulance

  • Backpack

  • Box

  • Briefcase

  • Car

  • Cat

  • Dog

  • Envelope

  • Handgun

  • Knife

  • Person
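The effect of the narrowed class list can be shown with a small post-filter. This is a hypothetical sketch for illustration: Argos trains a custom model rather than filtering a general-purpose one, but the outcome is the same, since detections outside the list above are simply never reported.

```python
# Illustrative post-filter: keep only detections whose class appears on
# the object-of-interest list. (Argos bakes this into a custom model;
# filtering a general detector's output has the same visible effect.)

OBJECTS_OF_INTEREST = {
    "ambulance", "backpack", "box", "briefcase", "car",
    "cat", "dog", "envelope", "handgun", "knife", "person",
}

def filter_detections(detections):
    """detections: list of (class_name, confidence) pairs from any detector."""
    return [
        (name, conf)
        for name, conf in detections
        if name in OBJECTS_OF_INTEREST
    ]
```

Given raw detections like `[("person", 0.9), ("chair", 0.8), ("dog", 0.7)]`, the filter keeps only the person and the dog.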

Sentence Lookup vs. Auto-Caption

Natural Language Processing is not perfect. We have a plan for that.

We have created a custom lookup function based on the objects detected in a video clip. The lookup maps the set of detected objects to a human-written sentence describing that combination. That sentence is then compared against the NLP auto-caption, and whichever caption describes the clip more accurately is shown to the user.
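A minimal sketch of the lookup-versus-caption comparison, using an illustrative table and a simple coverage score (both hypothetical, not Argos's actual data or metric):

```python
# Hypothetical sentence lookup: keys are frozensets of detected objects,
# values are human-written sentences for that combination.
SENTENCE_LOOKUP = {
    frozenset({"person", "box"}): "A person is carrying a box.",
    frozenset({"dog"}): "A dog is in view.",
    frozenset({"person", "dog"}): "A person is walking a dog.",
}

def lookup_sentence(objects):
    """Return the human-written sentence for this object set, if any."""
    return SENTENCE_LOOKUP.get(frozenset(objects))

def coverage(sentence, objects):
    """Toy accuracy metric: fraction of detected objects the sentence mentions."""
    text = sentence.lower()
    return sum(obj in text for obj in objects) / len(objects)

def choose_caption(objects, auto_caption, score=coverage):
    """Pick whichever caption better matches the detected objects."""
    looked_up = lookup_sentence(objects)
    if looked_up is None:
        return auto_caption  # no human-written sentence for this combination
    if score(looked_up, objects) >= score(auto_caption, objects):
        return looked_up
    return auto_caption
```

For a clip containing a person and a box, the lookup sentence mentions both objects and wins over a vaguer auto-caption; for combinations absent from the table, the auto-caption is used as-is.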


Device Independent

BYOD (Bring Your Own Device)

No matter the hardware, we've got the software. Argos is a containerized software package that supports many different devices:

  • Nvidia Jetson

  • Ring Doorbell

  • Raspberry Pi
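Because the package is containerized, one build can target both x86 machines and ARM boards such as the Jetson and Raspberry Pi. A minimal sketch using Docker's standard multi-platform tooling follows; `argos/edge` is a placeholder image name, not a real repository.

```shell
# Multi-architecture build sketch (hypothetical image name).
# buildx produces a single manifest covering x86_64 and ARM64 devices.
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t argos/edge:latest .
```

The same image tag can then be pulled on any supported device, and the container runtime selects the matching architecture automatically.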