TASS Autonomous Sight System

01.08.2017

  

IBM Global Mobile Innovators Tournament Smart Homes Semi Finalists 2016
AT&T Foundry Winners 2016
Intel / Microsoft / IoT Solutions World Congress Hackathon Intel Experts Award Winners

  

TASS Autonomous Sight System on Intel DevMesh

  

DESCRIPTION:

The TASS Hub is a local server that houses an IoT-connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects whether a face, or faces, is present in the frames; if so, it passes the frames through the trained model to determine whether each face belongs to a known person or an intruder. When a known person or an intruder is identified, the server communicates with the IoT JumpWay, which executes the relevant commands set by rules, for instance controlling other devices on the network or raising alarms in applications.
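
For illustration, the loop below is a minimal sketch of that detect-then-classify flow, using OpenCV's bundled Haar cascade and a paho-mqtt 1.x style client. The classify_face() helper, camera URL, broker address and topic name are placeholders for the example, not the actual TASS or IoT JumpWay code.

    # Minimal sketch of the hub's detect-then-classify loop.
    # classify_face(), the camera URL, broker address and topic are placeholders.
    import cv2
    import json
    import paho.mqtt.client as mqtt

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    client = mqtt.Client()                       # paho-mqtt 1.x style client
    client.tls_set()                             # the IoT JumpWay uses secure MQTT
    client.username_pw_set("DEVICE_ID", "DEVICE_PASSWORD")
    client.connect("iot.example.com", 8883)

    def classify_face(face_img):
        """Placeholder: run the trained CNN and return (label, confidence)."""
        return "unknown", 0.0

    capture = cv2.VideoCapture("rtsp://192.168.1.20/stream")   # one IP camera
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            label, confidence = classify_face(frame[y:y + h, x:x + w])
            payload = {"label": label, "confidence": confidence}
            # Known person or intruder: let the IoT JumpWay rules take over.
            client.publish("hub/camera1/identification", json.dumps(payload))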

  

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, the TechBubble Technologies IoT PaaS, which at this point primarily uses the secure MQTT protocol. Rules can be set up that are triggered by sensor values, warning messages, device status messages, and identified known person or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.
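
As a rough illustration of how such a rule behaves, the sketch below subscribes to identification messages and publishes a command to another device when an intruder alert arrives. The topic names and payload fields are invented for the example and are not the real IoT JumpWay schema.

    # Sketch of a rule: on an intruder alert, command another connected device.
    # Topic names and payload fields are placeholders, not the IoT JumpWay schema.
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        alert = json.loads(msg.payload)
        if alert.get("label") == "intruder":
            # Rule: raise an alarm on another device on the network.
            command = {"action": "alarm", "state": "on"}
            client.publish("application/alarm-device/commands", json.dumps(command))

    client = mqtt.Client()                            # paho-mqtt 1.x style client
    client.tls_set()                                  # secure MQTT
    client.username_pw_set("APP_ID", "APP_PASSWORD")
    client.on_message = on_message
    client.connect("iot.example.com", 8883)
    client.subscribe("hub/+/identification")
    client.loop_forever()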

  

ARTIFICIAL INTELLIGENCE:

During the development phase, five A.I. solutions were built and tested before settling on the current solution.

  

  • The first solution used OpenCV Haar cascades with an Eigenfaces model; users could upload their training data, which was sent to the device via MQTT for training. This solution worked well as a proof of concept, but identification was not accurate enough. It has since been opened up as an example for the IoT JumpWay Developer Program.
  • The second solution was developed at the IoT Solutions World Congress Hackathon in Barcelona and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. It used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but we had a great time working on the project and were honoured to win the Intel Experts Award.
  • The third solution used OpenCV to detect faces and passed them through a custom-trained Inception V3 model using TensorFlow (see the classification sketch after this list). We added the ability to carry out transfer learning directly on the device (a Raspberry Pi); users could upload their training data, which was sent to the device via MQTT for training. This was a massive improvement, and accuracy for detecting trained people was almost 100%. Unfortunately, I identified an issue, which I now know to be a common one, where the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.
  • The fourth solution is a system we developed on the foundations of OpenFace. We moved the A.I. to a local Ubuntu server rather than doing the identification onboard, as onboard identification on a Raspberry Pi was quite poor; this means that training is only required on the server rather than on every device. As with the TensorFlow implementation, we came across the issue of unknown people being identified as known people. So far we have resolved this through the use of an unknown class (see the threshold sketch after this list), although this solution may not work across the board, so we are working with the OpenFace GitHub community on additional solutions that incorporate multiple models to verify each identification.
  • For the fifth and current solution, the A.I. server has been re-homed on an Intel NUC, and the structure of the network has changed. The program that handles facial recognition and identification now connects directly to multiple IP cameras (see the multi-camera sketch after this list); previously the camera devices would send their frames to the broker over MQTT. With this change the identification process is more efficient, the camera devices only need to stream rather than connect to the communication broker, and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process frames from a RealSense camera and classify the image.
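
The classification step in the third solution looked roughly like the sketch below, which loads a retrained Inception V3 graph in TensorFlow 1.x style, as produced by TensorFlow's image-retraining example. The file names are placeholders.

    # Sketch of classifying a detected face with a custom-trained Inception V3
    # graph (TensorFlow 1.x style, assuming the graph and label files produced
    # by TensorFlow's image-retraining example). File names are placeholders.
    import tensorflow as tf

    with tf.gfile.FastGFile("retrained_graph.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    labels = [line.strip() for line in tf.gfile.GFile("retrained_labels.txt")]

    def identify(jpeg_bytes):
        """Return (label, score) for a JPEG-encoded face crop."""
        with tf.Session() as sess:
            softmax = sess.graph.get_tensor_by_name("final_result:0")
            scores = sess.run(softmax, {"DecodeJpeg/contents:0": jpeg_bytes})[0]
            best = scores.argmax()
            return labels[best], float(scores[best])

    label, score = identify(open("face.jpg", "rb").read())
    print(label, score)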
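
One simple way to implement an unknown class of the kind described in the fourth solution is a distance threshold on the face embeddings, for example OpenFace's 128-dimensional vectors. The sketch below is illustrative only; the threshold value and helper names are not taken from the project and would need tuning on real data.

    # Sketch of guarding against unknown faces being identified as known people:
    # compare a new embedding against stored embeddings and fall back to
    # "unknown" above a distance threshold. Values are illustrative only.
    import numpy as np

    UNKNOWN_THRESHOLD = 0.75   # illustrative; tune on validation data

    def identify(embedding, known_embeddings):
        """known_embeddings: dict of name -> list of 128-D numpy vectors."""
        best_name, best_dist = "unknown", float("inf")
        for name, vectors in known_embeddings.items():
            for vec in vectors:
                dist = np.linalg.norm(embedding - vec)
                if dist < best_dist:
                    best_name, best_dist = name, dist
        if best_dist > UNKNOWN_THRESHOLD:
            return "unknown", best_dist
        return best_name, best_dist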
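
The multi-camera change in the fifth solution can be pictured as the hub pulling each camera's stream itself, roughly as in the sketch below. The stream URLs are placeholders and the per-frame processing is left as a stub standing in for the detect/identify pipeline shown earlier.

    # Sketch of the hub pulling frames from several IP cameras at once, so the
    # cameras only need to stream (no MQTT client on the camera itself).
    import threading
    import cv2

    CAMERAS = {
        "camera1": "rtsp://192.168.1.20/stream",
        "camera2": "rtsp://192.168.1.21/stream",
    }

    def process_frame(name, frame):
        pass   # stub for the detect/identify/publish pipeline

    def watch(name, url):
        capture = cv2.VideoCapture(url)
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            process_frame(name, frame)

    threads = [threading.Thread(target=watch, args=(n, u), daemon=True)
               for n, u in CAMERAS.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()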

  

INTELLILAN MANAGEMENT:

The IntelliLan Management Console/Applications are essentially IoT JumpWay applications, capable of controlling all IntelliLan devices on their network and communicating with the IoT JumpWay. Users can manage their devices through the console using their voice, powered by TOA, an A.I. agent developed to assist home and business owners in using TechBubble web and IoT systems.

  

TASS Autonomous Sight System on GitHub

  

PROJECT VIDEOS

  
