Team Guardian

United States, University of Illinois at Urbana-Champaign | University of California, Berkeley | Stanford University

Members

Abhi Upadhyay
United States

Project Overview

According to the NIH, falls are the second leading cause of accidental or unintentional injury deaths worldwide; an estimated 646,000 individuals die from falls each year. Even when nonlethal, falls can lead to severe injury, loss of independence, and disability, especially for older adults. A 2010 study estimated that falls cost the US 23.3 billion dollars each year in healthcare costs, a figure that has since risen with the population. Many severe injuries and deaths occur when people fall and are left unattended; medical outcomes are highly correlated with response and rescue time. Much of the human loss related to falling is therefore preventable. The simplest solution is for caretakers to keep a constant eye on at-risk populations like senior citizens. Today there is a slightly easier path, since cameras are already widely deployed for surveillance, especially in public areas like airports, transit stations, malls, and streets, where they offer wide coverage. Most notably, elderly care centers often have cameras installed. However, it remains infeasible for caretakers to constantly monitor numerous video feeds so that fallen residents can be flagged and rapidly assisted.

That's why we've created Guardian, a vision-based fall detection system that monitors live video feeds and reports dangerous falls. Guardian consists of multiple cameras and an edge computer that runs an AI model. To use Guardian, an elderly care center simply installs the cameras and designates a point of contact; Guardian then continuously checks the live streams for falls and notifies the point of contact if a dangerous fall is detected. Our solution consists of four cameras, designed to be placed around the home, connected wirelessly to an Nvidia Jetson Xavier NX, an edge computer built to run AI models.

Under the hood, Guardian is powered by I3D, a convolutional neural network developed and trained by Google DeepMind to perform human action recognition on the Kinetics dataset, which contains 400 classes of common human activities (e.g. jumping and clapping) with at least 400 videos per class, for a total of over 200,000 videos. Our team then used Microsoft Cognitive Toolkit (CNTK), a deep learning framework for building neural-network-based models, and Azure Machine Learning, a service for training and evaluating models, to develop an Extract, Transform, Load (ETL) pipeline for processing videos on a low-power, low-cost edge computer, along with a custom AI model for detecting human falls in videos. Because the Kinetics dataset does not contain examples of humans falling and therefore cannot be used to classify falls directly, our team applied a technique called transfer learning to teach the network to detect a new class of actions (falls) on video clips that are 30 frames long. Transfer learning lets a machine learning model reuse the knowledge it acquired on one set of tasks to classify a new set of tasks, given data representing those new tasks. To generate a diverse set of video data covering multiple camera angles, environments, and lighting conditions, our team used both the Multiple Camera Fall Dataset (MCFD) and the Fall Detection Dataset (FDD), which together contain 262 videos. Our team then built an inference engine using TensorRT, designed to run the model on stacks of 30 frames, 30 times per second, enabling falls to be detected in real time.
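For illustration only, the minimal sketch below shows the two key pieces described above: attaching a small fall/no-fall head to a pretrained I3D backbone (transfer learning) and sliding a 30-frame window over a live camera stream. It uses PyTorch and OpenCV as stand-ins for our actual CNTK, Azure Machine Learning, and TensorRT pipeline; the function names, the 224x224 input size, and the assumption that class index 1 means "fall" are hypothetical placeholders rather than our production code.

```python
# Illustrative sketch only (PyTorch + OpenCV); the real pipeline is built on
# CNTK, Azure Machine Learning, and TensorRT. Names and sizes here are
# hypothetical placeholders, not Guardian's actual code.
from collections import deque

import cv2
import numpy as np
import torch
import torch.nn as nn

WINDOW = 30  # clip length the classifier sees (30 frames)


def build_fall_classifier(backbone: nn.Module, feature_dim: int) -> nn.Module:
    """Transfer learning: freeze a Kinetics-pretrained I3D backbone (assumed to
    output pooled features of size feature_dim) and add a binary fall/no-fall
    head, which is then trained on MCFD + FDD clips."""
    for p in backbone.parameters():
        p.requires_grad = False  # keep the pretrained action-recognition features
    return nn.Sequential(backbone, nn.Linear(feature_dim, 2))


@torch.no_grad()
def run_live_detection(model: nn.Module, on_fall, camera_index: int = 0) -> None:
    """Slide a 30-frame window over the live stream and classify each window."""
    model.eval()
    frames = deque(maxlen=WINDOW)  # ring buffer of the most recent frames
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (224, 224)))  # assumed input size
        if len(frames) < WINDOW:
            continue  # wait for the first full 30-frame stack
        clip = np.stack(frames).astype(np.float32) / 255.0           # (T, H, W, C)
        clip = torch.from_numpy(clip).permute(3, 0, 1, 2).unsqueeze(0)  # (1, C, T, H, W)
        if model(clip).argmax(dim=1).item() == 1:  # class 1 = "fall" (assumed)
            on_fall()  # alert hook, e.g. the caretaker notification sketched below
    cap.release()
```

In the real system, the stacked frames are handed to a TensorRT engine on the Jetson Xavier NX rather than a framework-level forward pass, so the model can run 30 times per second at the edge.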
Finally, we use Azure Machine Learning to push the output of our model to a web service that texts the designated caretaker that a fall has occurred. With 94% accuracy, Guardian can both preserve peace of mind and save lives. Because Guardian is a vision-based system, its capabilities can easily be extended beyond falls: it can also detect whether a person has remained stationary for an extended period of time, and it can be modified to detect other kinds of threats, including home intrusions and aggression. In future iterations, the core AI model could be used for more widespread public benefit. Since cameras are already installed in many public places, we are working on modifying Guardian to interface with existing CCTV cameras and surveillance systems to achieve near-universal fall detection.
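As a rough sketch of the alerting hook referenced above (the `on_fall` callback), the detection event could simply be posted to the web service, which then texts the caretaker. The endpoint URL, payload fields, and function name below are hypothetical placeholders, not our actual Azure-hosted service.

```python
# Hypothetical alert hook: the endpoint URL and payload schema are placeholders
# standing in for the Azure-hosted web service that texts the caretaker.
import datetime

import requests

ALERT_ENDPOINT = "https://example.azurewebsites.net/api/fall-alert"  # placeholder


def notify_caretaker(camera_id: str = "camera-1", confidence: float = 1.0) -> None:
    """Report a detected fall so the web service can text the designated caretaker."""
    event = {
        "event": "fall_detected",
        "camera": camera_id,
        "confidence": confidence,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    # A short timeout keeps the inference loop from stalling on network hiccups.
    requests.post(ALERT_ENDPOINT, json=event, timeout=5)
```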

About Team

We have a team of three: Pranshu Chaturvedi, Abhi Upadhyay, and Jonathan Ko. We met in middle school through STEM projects and competitions, including science fairs, Science Bowl, and the American Mathematics Competitions. Since then, we've continued to work together on computer science ideas throughout high school and college, including participating in the USA Computing Olympiad and starting a computer club at our high school. Currently, we are college sophomores interested in exploring novel applications of artificial intelligence and using software engineering to turn our ideas into reality, which led us to participate in this competition.

Pranshu is a sophomore at the University of Illinois Urbana-Champaign studying Computer Science and Statistics. His interests lie in High Performance Computing (HPC) and AI/deep learning. He recently attended the Supercomputing 2020 (SC20) conference, where he represented his school as a member of the University of Illinois Student Cluster Competition team; the competition requires students to assemble the fastest possible HPC cluster within a given power limit and run a series of simulations and benchmarks during the 3-day competition window. Pranshu also recently won 2nd place at the National Center for Supercomputing Applications Deep Learning Hackathon. At last year's Microsoft Imagine Cup, Pranshu and Abhi placed in the top 5 at the Americas Regional semifinals for developing a novel deep learning platform aimed at automating tumor prediction in microscope images of breast biopsies. This semester, Pranshu is working at the National Center for Supercomputing Applications with a group developing deep learning methods for estimating attributes of black holes and neutron stars from LIGO gravitational wave data.

Abhi is a sophomore at the University of California, Berkeley, pursuing a dual degree in Electrical Engineering & Computer Science and Business Administration as part of the M.E.T. (Management, Entrepreneurship, & Technology) program. He has worked in areas including cloud infrastructure, full-stack development, embedded systems, computer vision, and cryptocurrency. His full-stack development work includes projects for Saks Fifth Avenue affiliates, an app for his high school that amassed over 1.2K active users, and more; he is well-versed in Golang, Node.js, React, PHP, and Ruby on Rails. Together with Jonathan, Abhi researched and prototyped an adapter that could cut household standby power use by 61%, and he has also researched methods to compress standard video by converting objects to vectors using computer vision. Abhi shared in the top-5 Americas Regional semifinal finish with Pranshu at last year's Imagine Cup, and he is currently interning at Nvidia on the internal cloud infrastructure team, developing build pipelines and frameworks for use at scale.

Jonathan is a sophomore at Stanford University studying Computer Science and Mathematics. He has a passion for applying problem-solving principles from his technical background to create real-world impact, particularly with AI/deep learning. He founded and led his high school Technology Student Association chapter to dozens of international awards, including seven personal top-ten finishes, and he has won national awards for his research on the impostor phenomenon and academic performance. Together with Abhi, Jonathan researched and prototyped the standby-power adapter mentioned above. Last summer, he worked as a data engineering intern at Whiterabbit.ai, a startup applying machine learning to real-time radiology for potential breast cancer patients. Currently, he is interning in a software role at Facebook, developing internal tools for categorizing and recommending Facebook groups; this summer, he will intern at Adobe on a data science/machine learning team.

Our team has grown up together and shared a love of computer science and STEM for almost a decade; we're so excited to continue through the Imagine Cup competition.

Technologies we are looking to use in our projects

Artificial Neural Networks
Azure
Cognitive Services or other AI
Machine Learning
Python
