2018 Big Idea Challenge Winners
XVision is a system designed to automatically detect anomalies and diseases anywhere in the human body with radiologist-level accuracy, simply by analyzing common medical X-ray images with the help of the latest Azure technologies such as Machine Learning and Azure Functions. Our product will provide a much-needed solution for people in areas of the world that lack access to radiology diagnostics, while also acting as an assistant tool for the medical experts examining radiographs.
Existing Problem - Lack of Data: One major key to building successful AI is the ability to gather data. The lack of data is hindering the R&D of machine learning (ML) and the adoption of the technology in business. There are limited existing channels to access and crowdsource data, especially data that fits specific business needs. Furthermore, traditional solutions are expensive, and the centralized server-client model is not suitable for data sharing and exchange because of the ambiguity in the ownership of data. Data sharers have to bear middleman risks when using centralized solutions.

Our Solution - Datax Ecosystem: Datax is a data exchange platform built on blockchain that ensures data ownership and upholds the integrity of data and participants' reputations. It comprises two functions: Datax Workforce and Datax Marketplace. Datax Workforce is a data crowdsourcing platform that provides a channel for data requesters to gather data. Requesters design and post tasks onto the platform at a specific price, while incentivized workers complete the work and await payment. Requesters include researchers, consumer product groups, universities, MNCs, and small businesses, whereas workers are individuals. Thanks to blockchain technology, data flows directly from workers to requesters, so middleman risk is eliminated. Task submission and acceptance history are also tracked on the blockchain with integrity, ensuring the reputation of participants. Datax Marketplace empowers data owners to commercialize their datasets without risking their ownership of the data, and allows academic researchers and companies to source data for R&D with confidence in data quality. With smart contracts, data ownership can be ensured because transactions are settled without a third party. In addition, the blockchain upholds the integrity of the reputation of data buyers and sellers. We believe that lowered risks and assured data ownership encourage data owners to share or sell their data.
Datax Workforce & Marketplace Synergy - We encourage data requesters who crowdsource data from Workforce to resell their data on Marketplace. This synergy increases the monetary value of data crowdsourced from Workforce, thereby increasing the reward requesters are willing to pay workers. It fully utilizes the data resources contributed by workers and speeds up the growth of the whole industry by avoiding the duplicate effort, time, and resources spent collecting or generating similar data.

Business Model - For Datax Workforce, we charge a fixed percentage of commission on every task posted on the platform. We also provide pay-on-demand tools that help requesters collect more accurate data. Requesters can reach out to specific groups of the workforce through our filtering tools, which limit a task to only relevant and preferred workers. Every filter deployed is charged separately. For Datax Marketplace, we charge a fixed percentage of commission on every transaction made on our platform. We also provide pay-on-demand tools that help sellers promote and boost sales of their datasets, such as search appearance optimization.

Marketing and Sales Plan - At the initial stage, Datax will target academic researchers and startups by leveraging our connections with the academic and startup communities. The user base established at this earlier stage will facilitate our penetration into the business sector, where Datax will pursue collaboration on a greater scale by approaching SMEs and big companies.

Social and Community Impact - Facilitate Cooperation: Cooperative R&D synthesizes findings and accelerates research progress, but cooperation is often obstructed by trust issues. Decentralization encourages data sharing because no single party controls the infrastructure that holds the data. Better AI Models Benefit Society: Datax is establishing a global commons for datasets. People can make good use of others' data while submitting their own.
A global commons for datasets will ultimately realize an open data community, bringing AI technology to the next level. In a thorough data sharing and crowdsourcing environment, the diversity of data reaches a new scale, and diverse data results in qualitatively new datasets. Since data are the foundation of AI, training on more qualitatively new data will yield qualitatively better AI models, enabling, for example, more accurate cancer diagnosis and more reliable self-driving vehicles.
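The escrow-and-reputation flow that a Datax-style smart contract would encode can be illustrated with a plain-Python sketch. This is not Datax's actual contract code; the class, method names, and reward values are all hypothetical stand-ins that show how payment settles directly between worker and requester while both reputations are updated.

```python
# Illustrative sketch only (not Datax's contract code): the escrow flow a
# data-crowdsourcing smart contract might encode. All names are hypothetical.

class TaskEscrow:
    """Holds a requester's payment until the submitted data is accepted."""

    def __init__(self, requester, reward):
        self.requester = requester
        self.reward = reward
        self.submissions = {}   # worker -> submitted data
        self.reputation = {}    # participant -> accepted-task count

    def submit(self, worker, data):
        # Data flows directly from worker to requester; the chain records it.
        self.submissions[worker] = data

    def accept(self, worker):
        # On acceptance, payment settles without a middleman, and both
        # parties' reputation records are updated.
        if worker not in self.submissions:
            raise ValueError("no submission from this worker")
        self.reputation[worker] = self.reputation.get(worker, 0) + 1
        self.reputation[self.requester] = self.reputation.get(self.requester, 0) + 1
        return {"to": worker, "amount": self.reward}

escrow = TaskEscrow(requester="lab-42", reward=10)
escrow.submit("worker-7", data={"image": "sample.png", "label": "cat"})
payment = escrow.accept("worker-7")
print(payment)  # {'to': 'worker-7', 'amount': 10}
```

On a real chain this logic would live in a smart contract so that neither party can alter the submission or reputation history after the fact.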
Our solution - rientiQ - detects breast cancer tendency in young adult women using gene sequencing and analysis with quantum machine learning. The service includes a test tube for a saliva sample sent by mail, which we later analyze. Test results can be viewed from anywhere on a user-friendly interface. Our quantum machine learning based approach offers faster analysis and response time, lowering the worrying period from a month to a few days. Quantum machine learning also provides new opportunities to find unusual correlations between known genes and mutational tendency. The sent-by-mail model provides a comfortable service in a premium-feeling package. Besides being an accessible and easy-to-use service, rientiQ raises awareness for the noble cause of breast cancer prevention through feasible lifestyle changes.
Education is not solely the transfer of knowledge but also the nurturing of character and the process of learning beyond the classroom, yet the latter two are often given far less attention. Tragedies like student suicide become prevalent as academic stress intensifies without proper mental-health assistance, notably in Hong Kong and other Asian regions where attaining top grades is all that mainstream culture upholds. Our team believes this problem deserves a fix, starting with closing the gap between students in need and the support staff on post. Counter-intuitively, those deeply rooted in psychological distress are reluctant to actively seek help; such reach-outs are themselves already signs of improvement. We aim to break this vicious cycle by introducing intelligent emotion-tracking agents across campus to log emotional trends and produce both general statistical insights and individual sentiment reports. Emotions may not be the most precise measure of psychological state, but they are reasonably accurate in practice, as one is unlikely to pretend to smile while feeling sad for an extended period of time. This information would enable educators to pinpoint students in need and provide help. While this product cannot replace long-term observation of student behaviour, which teachers are also responsible for, we strive to offer breadth and scale from the massive amount of data harvested, complementing dedicated help by the staff. Furthermore, the system can provide a general landscape of student emotions. For example, happiness measured by smiles can be gauged, which may serve as a reference indicator of mental well-being at the school level. Beyond the scope of education, in every organisation the mental state of its people is often left behind, even though it can be pivotal to how well they perform.
An illustration of this concept is using the product as a source of employee feedback for a given company, because bottom-up communication can be unreliable and biased.
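The school-level "happiness measured by smiles" indicator described above can be sketched as a simple aggregation: the fraction of logged face observations classified as smiling, per day. The data layout below is an assumption for illustration, not the system's actual schema.

```python
# Hypothetical sketch of a smile-based happiness index: aggregate per-day
# smile detections into a 0..1 score. Field layout is illustrative.

from collections import defaultdict

def happiness_index(observations):
    """observations: iterable of (date, is_smiling) tuples."""
    totals = defaultdict(lambda: [0, 0])   # date -> [smile count, total count]
    for date, is_smiling in observations:
        totals[date][0] += int(is_smiling)
        totals[date][1] += 1
    return {date: smiles / total for date, (smiles, total) in totals.items()}

logs = [("2018-03-01", True), ("2018-03-01", False),
        ("2018-03-01", True), ("2018-03-02", False)]
print(happiness_index(logs))
```

A real deployment would feed this from the emotion-tracking agents' classification output and report the trend over weeks rather than single days.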
Everyone faces negative times in their lives, but there are several barriers to effective care, including a lack of resources, a dearth of trained health-care providers, and the social stigma associated with mental disorders. India, for instance, has fewer than 4,000 psychiatrists to treat its mentally ill. This project is being developed to help every such person. The idea is to counter most of the factors leading to depression and to provide positive advice to those who suffer from it. It provides a 3-D virtual therapist which listens to the user's feelings and gives positive advice following Cognitive Behavioral Therapy. It also maintains a record of the user's emotions to help the user further analyze their state.
DUBG is a crowd-sourced mixed reality app for efficient post-disaster management. Rescue teams are very busy during an operation, which is why we built our app on Mixed Reality, so that rescuers don't waste time operating the app. Our solution provides unique features like AR messages, AR navigation, and AR live tracking, which would be really helpful in a time-critical situation like a disaster.
I. Summary Our goal is two-way translation between American Sign Language (ASL) and English that a user can take on the go. The system has three main components: a pair of motion capture gloves, a mobile app, and machine translation tools that run on a remote backend. The Deaf user wears the motion capture gloves, which connect wirelessly to the app on their phone. The app sends the signing data to the backend, which runs the machine translation algorithms, then sends the English translation back to the phone, which plays it out loud using a text-to-speech tool. Going the other way, the mobile app uses the phone's microphone to capture the hearing user's speech and relays it to the backend, where it is transcribed by a speech-to-text tool and put through another machine translation tool. This tool takes English text as input and returns a sequence of glosses. An ASL gloss is an English word used as a representation of a particular sign [1]; gloss is a convenient way to encode the identity of a particular sign. In the case of our mobile app, the sequence of glosses returned by the backend is used to stitch together an animated avatar that signs the interpretation of what the hearing user said. Google's new Neural Machine Translation system improved on the accuracy of their production machine translation tool by 60%, but still only achieved 95% of the accuracy of human translators [2]. For formal occasions that demand high accuracy, such as interpreting a speech, a professional interpreter would still likely be the better option. However, for informal situations that are more forgiving of the occasional faults of machine translation (business meetings, shopping, and casual social situations), our project is ideal. With current technology, a Deaf employee at a small company where no one else knows ASL would have difficulty engaging in staff meetings and other face-to-face conversation.
We've spoken to business owners in our community who said they would be reluctant to hire Deaf individuals for this reason. With our finished product, a Deaf employee can use their phone to interpret what their colleagues say and to speak out the interpretation of what they sign back. Now all the employees can communicate with each other through the medium each is most comfortable with. This has the potential to lower the barrier that employers may see in hiring Deaf individuals. II. Technology Overview The motion capture gloves have three types of sensors. Custom-made flexible bend sensors measure finger flex. Conductive pads placed on the fingertips register finger contacts. Paired 3-axis accelerometers and 3-axis gyroscopes mounted at the base of the wrists track motion. The flex sensors are read by a custom-designed PCB with two TI FDC2114 capacitive reader chips. The touch sensors are read by a commercially available capacitive reader board. All devices use an I2C bus to communicate their data back to the controller, an Adafruit Feather nRF52. The mobile app was made using the Ionic framework, which lets us develop the app like a website using HTML, CSS, and AngularJS, then automatically generate code for iOS and Android apps. There are two main workflows in the app. In the first, a signal from the motion capture gloves initiates the ASL recording phase. Once the Deaf user ends the phase, the data is sent to the backend. When the interpretation is received, the transcript is displayed on screen, and the user can push a play button to play it out loud for the hearing user. In the second workflow, the Deaf user hits a record button to initiate audio recording. When the recording is complete, the data is sent to the backend. When the interpretation is received, the animation is stitched together and played back to the Deaf user. The backend is built on the MEAN stack.
It manages the connection between the users' mobile phones and the machine learning applications that run on the Azure Functions server. At the current stage of development, Azure Functions allows us to use limited server time for testing without paying for excess resources. The machine learning tools that run on the backend are currently under development. The goal is to use an existing database of gesture data with gloss transcriptions to quickly build a tool capable of parsing gestures to gloss. We will then build our own data set with our motion capture gloves for final training, and move on to the tools that translate between ASL gloss and English. References: [1] M. E. Bonham, "English to ASL Gloss Machine Translation," Master's Thesis, Dept. of Ling. and Eng. Lang., Brigham Young Univ., Provo, UT, 2015. [2] Y. Wu et al., "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation," Tech. Rep., arXiv:1609.08144.
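The speech-to-sign half of the pipeline described above can be sketched end to end. The real system uses a speech-to-text tool plus a trained machine translation model; in this sketch both are replaced by hypothetical stand-ins (a tiny gloss lexicon and clip-file names) purely to show the data flow from English text to a stitched sequence of sign-animation clips.

```python
# Minimal, hypothetical sketch of the English-to-gloss-to-avatar flow.
# The lexicon and clip paths are invented; the real system uses a trained
# machine translation model instead of a lookup table.

def english_to_gloss(text):
    # Stand-in for the machine translation tool: returns ASL glosses
    # (English words used as labels for particular signs).
    gloss_lexicon = {"where": "WHERE", "is": None, "the": None,
                     "meeting": "MEET", "room": "ROOM"}
    glosses = []
    for word in text.lower().strip("?!.").split():
        gloss = gloss_lexicon.get(word, word.upper())
        if gloss is not None:   # ASL drops many English function words
            glosses.append(gloss)
    return glosses

def stitch_avatar_animation(glosses):
    # Stand-in for the avatar player: maps each gloss to an animation clip.
    return [f"clips/{g}.anim" for g in glosses]

glosses = english_to_gloss("Where is the meeting room?")
print(glosses)                          # ['WHERE', 'MEET', 'ROOM']
print(stitch_avatar_animation(glosses))
```

The backend would run the translation step and return only the gloss sequence; the phone app maps glosses to clips locally so the animation can be stitched and played without streaming video.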
High achieving students pay for supplementary courses to get ahead in their schoolwork, something that low-income students simply can't afford. OpenEDU was created with the mission of bridging the gap in opportunity for low-income students. On OpenEDU, students can enroll in online classes in advanced topics. They are paired with volunteer student teachers, who assign material to watch and machine-graded problem sets to complete. Teachers hold weekly office hours and keep students accountable to complete their work and succeed. A student forum allows students to interact with one another. OpenEDU provides volunteers with an opportunity to enhance their resume while giving back to the community, and gives students the resources to succeed regardless of their income level.
Shoplifting is a plague on the retail industry. Veesion aims to detect all thefts automatically and in real time using Artificial Intelligence. Our technology is based on a deep learning algorithm that performs gesture recognition.
Our project, 'Akashvani', is a news-prediction system that forecasts the stability of a country from a month of newsfeeds from independent news agencies using a recurrent neural network, advising the UN and other international policy-makers to take rapid action and thus prevent bloodshed and loss of property.
This project is our initiative to take computing to a whole new level of user experience. There was once a time when nobody thought we could interact with a machine with just a touch, but today everything is possible with a touch. In this era, if we say we can make your device work with just a thought, people will call you crazy. But yes, it is true: we have come up with an idea that lets you interact with your device with just a thought. That is our project, named THINK TO INTERACT. We make this possible using existing devices and advanced concepts such as deep learning. This next generation of computing works as follows:
-> First of all, we use a device already on the market named Emotiv. This headset captures your thought signals; we take a particular part of its output, the motor imagery electroencephalogram (EEG) waves.
-> These waves are collected by the headset, and its output is sent to the cloud over a wireless network.
-> In the cloud we have the Microsoft Cognitive Toolkit pre-installed, which we use to create a trained neural network. This is done by taking a dataset and repeatedly passing it through deep learning algorithms until the model is trained.
-> The trained neural network is deployed in the cloud so that every time a command is triggered, the network comes into action and the command is executed.
-> This is just a common, outlined framework of the project. On the core technical side, the trained neural network uses several core deep learning concepts:
- Logistic regression
- Convolutional neural networks
- Recurrent neural networks
- Long short-term memory (LSTM)
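The logistic regression step named above can be illustrated with a toy from-scratch classifier. The feature vectors below are synthetic stand-ins for motor-imagery EEG features (two invented classes, "think left" vs "think right"); the layout and separability are assumptions for the sketch, not real EEG data.

```python
# Toy logistic regression on synthetic motor-imagery-like features.
# NOT trained on real EEG; the data distribution is invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic features: class 0 ("think left") vs class 1 ("think right")
X0 = rng.normal(-1.0, 1.0, size=(100, 4))
X1 = rng.normal(+1.0, 1.0, size=(100, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):                             # plain full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

In the actual project this classifier would be replaced by the CNN/RNN/LSTM models trained in the cloud, with the predicted class mapped to a device command.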
PROS AND CONS
-> This device not only takes computing to the next level but also creates opportunities for the blind to interact with devices without anyone's assistance.
-> People do not have to follow any procedure. Just think! For example, to increase the brightness of the device you do not have to go to Settings, then Display, and then change the brightness; you simply have to think it.
-> Initially not all actions will be available for execution, but gradually, after the product is released and as its performance improves, more and more functionality will be added to the device.
We are making a phone application to help visually impaired people.
Scanit is an app that enables users to take or upload pictures of an outfit; the app then provides links for the user to purchase the product or a similar one.
The purpose of the application is to promote recycling in society and, in turn, create social awareness of how much can be achieved if everyone contributes a little.
Our Artificial Intelligence will process the data using Java programming and computational algorithms. Once our system has learnt the basics of human behavior and social rules, it will be more than ready to do the job on its own, helping us create the future of teaching and allowing teachers to improve their performance on a daily basis.
Our project is to develop an app that determines whether a species is an invasive species based on its image and geographic location.
Pakistan is ranked highest in childbirth fatalities. Intrauterine deaths and stillbirths are very frequent in rural areas of Pakistan, where there is a lack of proper medical facilities. There is an urgent need for a solution that captures the vitals and fetal condition of a pregnancy and communicates them to remote consultants who can take immediate action to avoid stillbirths. It is quite difficult for parents in rural areas to travel to the cities to see doctors or have an ultrasound many times a month to check on their unborn child, since they have to wait for appointments. Complications may arise between appointments that the parents are unaware of, because the fetal condition is not being monitored on a regular basis. Fe Amaan is a wearable belt that regularly monitors the fetus's health through an IoT sensor device placed on the mother's abdomen. The device captures the fetal heart rate and movement and sends them via Bluetooth to the mobile application. The data received is analyzed and displayed, letting doctors know the health of the fetus. The main feature of this system is automated analysis of fetal health on a regular basis, without any harm to either the mother or the child. In case of anomalies in heart rate or movement patterns, the system generates timely alerts so that precautionary measures can be taken before it is too late. By making remote monitoring of the fetus possible, it aims to reduce the high rate of intrauterine deaths and stillbirths in Pakistan. After mass production, it will be used by lady health workers to monitor the fetal condition of expecting mothers residing in rural areas.
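The heart-rate anomaly alert described above can be sketched as a simple range check. This is an illustrative rule only, not Fe Amaan's actual algorithm: the 110-160 bpm band is a commonly cited normal fetal heart rate range, used here as an assumed threshold, and nothing below is medical guidance.

```python
# Illustrative alert rule only (not the product's actual algorithm).
# Thresholds are assumptions based on a commonly cited normal FHR band.

NORMAL_FHR_RANGE = (110, 160)   # beats per minute (assumed band)

def check_fhr(readings_bpm, normal=NORMAL_FHR_RANGE):
    """Return timestamped alerts for out-of-range fetal heart rate readings."""
    low, high = normal
    alerts = []
    for t, bpm in readings_bpm:
        if bpm < low:
            alerts.append((t, bpm, "bradycardia suspected"))
        elif bpm > high:
            alerts.append((t, bpm, "tachycardia suspected"))
    return alerts

stream = [("09:00", 140), ("09:05", 95), ("09:10", 172)]
for alert in check_fhr(stream):
    print(alert)
```

In the described system this check would run on the mobile app over the Bluetooth-streamed readings, with alerts forwarded to the remote consultant.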
Abstract: The type 1 diabetes community is growing in Pakistan and across the whole world. A national survey indicates that 26% of the Pakistani population is diabetic, and the type 1 community is estimated to be 7-10% of the whole diabetic community. People with type 1 diabetes require insulin (the hormone that helps the body absorb glucose from the blood) to be injected externally because their pancreas fails to produce it. In the modern world, the diabetic community, primarily type 1, has adopted automatic insulin injection systems (AIIS) to inject insulin into the body, but in Pakistan and other third-world countries people still rely on uncomfortable syringe injections, as AIIS are not easily accessible in Pakistan and are too expensive for an ordinary person. So, we aim to develop an AIIS comprising a wearable insulin injection system for precise delivery of dosage and a hand-held device that acts as the controller for the wearable system. Our system would interface with off-the-shelf glucose sensors. Readings from the glucose sensor would be sent to the hand-held device, and the user would decide the insulin requirement and direct the wearable pump to provide the required dosage. This precise and well-organized delivery will reduce complications, i.e. hyperglycaemia (high blood glucose), and improve the lifestyle of the diabetic community.

Existing Solutions: Medtronic and Insulet are the two most prominent companies for AIIS. The Medtronic MiniMed AIIS consists of a wearable Medtronic Connect glucose sensor for continuous glucose monitoring and an insulin pump with attached tubing to deliver insulin into the body. The major problem with this device is the disposable tubing and insertion sets (replaceable after 3 days), which result in high recurring costs; with an insulin pump that already costs more than 7,000 USD, this makes insulin pump therapy very expensive. Moreover, display screens are embedded on the pump, which increases the weight and size of the overall device.
The system operates with a AAA alkaline battery that normally lasts 3 weeks, and the system has a life span of four years. On the other hand, Insulet provides the OmniPod (pump) with a separate hand-held device as its controller for dosage delivery. Although the OmniPod is a lightweight insulin pump, its pumping mechanism, built from a miniaturized gear structure, is not reliable and lasts only 3 days, so the device is disposable in itself. Moreover, this system lacks automatic continuous glucose monitoring, so a drop of blood has to be provided manually to the test strip of the built-in glucose sensor on the hand-held device to obtain a reading. As for recurring costs, its cannula (needle) is inserted directly into the body, so it works without tubing sets, but since it is disposable (lasts 3 days) at 30 USD per unit, that comes to 300 USD per month and 3,600 USD per year, making it an expensive insulin delivery solution.

Our Solution: Our proposed solution merges the strengths of the current state of the art and mitigates the weaknesses in a single device. It provides direct insertion of the cannula (no need for tubing and insertion sets), long life, and a separate hand-held device as a controller on a single platform. Motor technology is used in the wearable insulin system, but the major part of the complex visualization circuitry powering display screens is shifted to the separate hand-held device, reducing weight, size, and battery consumption at the same time. The hand-held device would store a database of past records and include visualization circuitry to display trends, such as glucose readings, using prior records. Moreover, because the display circuitry has been moved off the pump, improved battery life would be an added benefit for the user experience. For the sensing system, a wearable continuous glucose monitor (CGM) would communicate glucose readings directly to the hand-held device, and the user would direct the amount of insulin to be delivered into the body per the dosage already prescribed by his or her physician.
These directions would be transferred wirelessly to the wearable pump as an actuation signal by the hand-held device; the microcontroller would then drive the highly precise pumping mechanism according to the received signal. If any blockage is sensed during delivery, an alarm would be generated for the user. Malfunctioning of any component, or readings that are unexpected per the general reference stored in the microcontroller, would also cause an alarm to be generated for the user.
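The controller-to-pump exchange described above, including the blockage alarm, can be sketched in a few lines. This is a hypothetical simulation: the flow-sensing model, the minimum-flow threshold, and all names are invented for illustration and do not reflect the device's real firmware.

```python
# Hypothetical simulation of dose delivery with a blockage alarm.
# Flow model and thresholds are invented for illustration only.

class PumpError(Exception):
    pass

def deliver_dose(units_requested, flow_readings, min_flow=0.05):
    """Simulate delivery; raise an alarm if measured flow stalls."""
    delivered = 0.0
    for flow in flow_readings:              # units delivered per tick
        if flow < min_flow:                 # proxy for the blockage check
            raise PumpError(f"possible blockage: flow {flow} < {min_flow}")
        delivered += flow
        if delivered >= units_requested:
            return round(delivered, 3)
    raise PumpError("reservoir exhausted before dose completed")

print(deliver_dose(0.5, [0.2, 0.2, 0.2]))   # dose completes on the third tick

try:
    deliver_dose(0.5, [0.2, 0.01])          # flow stalls: alarm raised
except PumpError as e:
    print("ALARM:", e)
```

On the real device, the alarm path would notify the user via the hand-held controller rather than raising an exception.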
BeAR is a cross-platform AR mobile app that brings a new dimension to the real world, where users can easily change the world the way they want, together. From their devices, people can place content in augmented reality; its GPS location is saved, and others can see the content from their own devices. The content can be a picture, a drawing, a simple text, or a 3D model. Users can set up interactions with these objects and link them into a single structure, while others can rank them, take pictures of a view containing these objects, and share them on social networks.
CAE Fidesys is an easy-to-use and effective tool for performing a full cycle of engineering strength analysis, including loading a CAD model, meshing, setting loads and material mechanical properties, selecting and configuring a FEM solver, running the calculation, and visualizing the results.
Applied Neural Diagnosis is a system of software and hardware components aimed at collecting and analyzing neuropsychological data. The final solution will consist of the following: 1) Wireless gyro sensors. These are placed on the patient's joints and used to capture streams of motion data. For prototyping, the sensors are available at major electronics stores. 2) Client application. Doctors use this to connect to sensors, run tests, capture readings, send data to the cloud, receive diagnosis details from the cloud neural nets, and visualize and compare results against other similar records. 3) Cloud. This provides central storage of anonymized patient records, runs neural net training tasks, and responds to client requests for diagnosis. Wireless sensors are placed on a patient's body, one sensor per joint, partially or fully covering all the limbs. The heuristic part consists of mixing data overlays from certain historic periods, filtering raw data, and training neural nets to recognize and diagnose certain conditions. With this system, doctors can make more robust medical decisions, augment their traditional analysis, and incorporate artificial intelligence into their tedious and monotonous review process. We do not claim that this system is mission critical; it cannot be relied on alone. But we believe it can be certified in the future after passing trials, just like any other medical system.
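The raw-data filtering step mentioned above can be illustrated with a simple moving average over one gyro axis before the stream is fed to the neural nets. The window size and sample values below are assumptions for the sketch; the real system may use a different filter entirely.

```python
# Illustrative smoothing of a raw gyro stream with a moving average.
# Window size and readings are invented for the sketch.

def moving_average(samples, window=3):
    """Smooth a 1-D stream of sensor readings."""
    if window < 1 or window > len(samples):
        raise ValueError("bad window size")
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

gyro_x = [0.0, 0.9, 1.1, 4.8, 1.0, 0.9]   # deg/s, with one noise spike
print(moving_average(gyro_x))
```

Smoothing suppresses single-sample spikes so that the downstream nets learn movement patterns rather than sensor noise; the trade-off is a slight lag relative to the raw signal.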
Company Mission: To use data-driven technologies to capture the regulatory industry from the ground up. Proposition: To use artificial intelligence and data science to automate the FDA regulatory submission process and turn decades of FDA submission history into a comprehensive guide for success. We intend to simultaneously streamline the process for regulatory affairs (RA) consultants and give medical device companies the power to achieve regulatory success. Problem: Massive amounts of funding and time are needed for companies to employ RA consultants and secure market approval. $4.6 billion was spent outsourcing regulatory affairs for medical devices in 2016 alone, and the amount is expected to triple by 2026. In addition, the process is extremely inefficient, with the average submission taking anywhere from 6 months to 3 years to compile. Our solution: To use intelligent, data-driven systems to capture and automate the regulatory process. Using our solution, we believe we can remove wasteful spending by halving the amount of time and money required to take products to market. We will empower companies by allowing them to automatically generate application materials, access customized analytics, and receive expert feedback on their progress. Applications can even be translated to international markets with the click of a button. On the other end, RA consultants will have access to a streamlined version of our services and will be incentivized to provide valuable feedback to our medtech customers. Current stage: Prototyping. We have received funding from the Stanford BIOME group and Cardinal Ventures, and have been invited to pitch to multiple venture capital firms. We have created a data analytics engine for visualizations and guidance, along with predictions for device classification, regulatory pathway, and total costs.
The core technologies implemented include React.js for a responsive, interactive front-end user interface and MongoDB for powerful database storage. The back end of our website is continuously synced with the openFDA database so users can make custom queries that are more sophisticated and informative than those provided by the openFDA API. Implementation: The client (an RA professional or a medical device company) uses knowledge of their new medical device to interact with our service and identify the way forward. First, predicates are identified; then interactive visualizations of competitive markets, application timeline and cost, device recalls, etc. are available at the click of a button. Depending on the results of this stage, the client is led through a decision-based process that reflects the least burdensome, best-fit regulatory pathway for each unique device and its predicate. The client uses our application generation and cloud storage tools to begin developing the perfect application content. At any stage, a medtech user can tap into a community of avid regulatory affairs experts looking to network with medical device companies. Payments for these interactions are made on a per-case basis, and Nova Approval takes a portion of these earnings.
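The predicate-identification step described above can be sketched as a filter-and-summarize pass over cached device records. The records, field names, and values below are invented for illustration; they only mimic the shape of data that might be synced from an openFDA endpoint and are not real openFDA output.

```python
# Hypothetical sketch of a predicate search over cached device records.
# Records and field names are invented, not real openFDA data.

records = [
    {"product_code": "DQY", "decision": "SESE", "review_days": 142},
    {"product_code": "DQY", "decision": "SESE", "review_days": 98},
    {"product_code": "LLZ", "decision": "SESE", "review_days": 210},
]

def predicate_summary(records, product_code):
    """Filter candidate predicates by product code and summarize review time."""
    matches = [r for r in records if r["product_code"] == product_code]
    if not matches:
        return None
    days = [r["review_days"] for r in matches]
    return {"count": len(matches), "avg_review_days": sum(days) / len(days)}

print(predicate_summary(records, "DQY"))  # {'count': 2, 'avg_review_days': 120.0}
```

In the described system, this kind of aggregation would run server-side against the MongoDB cache and feed the timeline and cost visualizations.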
MindBuddy is a mobile/web application for teenagers and young adults to keep tabs on their personal mental health. Our goal is to provide a more objective measure of psychological well-being that is easily accessible and integrates well into a constantly changing schedule. We accomplish this by giving users a platform to write privately about their day and destress without needing to worry about the judgement of others. In doing so, they also gain valuable insights that they otherwise would not have had access to.
Third Eye is your third eye for fire safety. Third Eye is a mobile app and IoT product solution for fire hazards. The California wildfires caused an estimated 90 million in losses, and many people lost their homes. We aim to build a solution that prevents such tragic events from happening. There are many avenues for fire hazards, such as kitchen fires from unattended cooking, overloaded electrical systems, and combustible storage. In 2014 alone there were 1.2 million fires, roughly 3,000 deaths, and roughly 12 billion in losses. We can do more to reduce human errors like these. What we need is a solution whose parts interact with each other and reduce the amount of human error: something that uses temperature and smoke to detect a potential fire and warns the user, so he can prevent it himself or let the device prevent it. Third Eye has many applications for cooking equipment manufacturers, homeowners, forest departments, and government entities. Cooking equipment manufacturers can make safer products by integrating Third Eye. Homeowners can protect their houses from fire damage: instead of waiting for the sprinklers to go off, homeowners can stop the fire themselves once it exceeds the threshold, or alert the fire department before it becomes a problem. Forest departments can locate the alerts being sent to provide better mapping of large-scale fires. Government entities can ensure greater public safety by integrating Third Eye into fire-hazard areas, reducing human error. Our first implementation of Third Eye is a mobile app and IoT product for sensing fire hazards and alerting customers. You install the Arduino temperature and smoke sensing unit in your home. You can use the mobile app to specify your alert threshold, which is saved with the unit ID on the server. Then take it easy: the unit detects temperature increases beyond the threshold and sends alerts to the specified phone numbers, alerting the household or property manager to the potential problem.
We used an Arduino to integrate the temperature sensor with an ESP8266 WiFi module, temperature sensors for detecting heat differences, Python for sending automated messages through Twilio, and Swift for letting users set their temperature-alert preferences for the product. Third Eye 2.0 will integrate sprinklers and fire suppressors so we realize the vision of automated systems that communicate without humans. The future of Third Eye is limitless: install the unit in forested areas and let drones with video integration react to fire alerts, bringing the detection-to-resolution time down to split seconds. Let's remove the need for a phone call to the fire station and the long drives fire trucks make through traffic by installing Third Eye as part of our streetlights and electrical/heating equipment. Third Eye is your third eye for fire safety. Let's make fire safety smarter with a third smart and highly responsive IoT eye.
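The alert flow described above can be sketched in Python. This is an illustrative sketch only: the function names, threshold store, and message format are assumptions, not the actual Third Eye codebase, and the real Twilio call is indicated in a comment rather than executed.

```python
# Hypothetical sketch of the server-side alert logic: the mobile app
# saves a threshold per unit ID, and each incoming sensor reading is
# checked against it. All names and values here are illustrative.

ALERT_THRESHOLDS = {}  # unit ID -> temperature threshold in Celsius, set via the app

def set_threshold(unit_id, threshold_c):
    """Save the alert threshold the user picked in the mobile app."""
    ALERT_THRESHOLDS[unit_id] = threshold_c

def check_reading(unit_id, temperature_c, phone_numbers):
    """Return the SMS alerts to send for one sensor reading, if any."""
    threshold = ALERT_THRESHOLDS.get(unit_id)
    if threshold is None or temperature_c <= threshold:
        return []  # no threshold configured, or reading is within range
    body = (f"Third Eye alert: unit {unit_id} reads {temperature_c:.1f} C, "
            f"above your {threshold:.1f} C threshold.")
    # In deployment each message would go out via Twilio, e.g.:
    #   Client(account_sid, auth_token).messages.create(
    #       body=body, from_=TWILIO_NUMBER, to=number)
    return [(number, body) for number in phone_numbers]

set_threshold("unit-42", 60.0)
print(check_reading("unit-42", 25.0, ["+15551234567"]))  # normal reading -> []
for number, body in check_reading("unit-42", 85.5, ["+15551234567"]):
    print(number, body)
```

The threshold comparison lives on the server so the Arduino unit only has to stream raw readings over WiFi.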
Home security is now moving from traditional methods toward automation with the help of the Internet. The Internet of Things is a network of interconnected devices (physical devices, vehicles, home appliances, etc.) embedded with electronics and sensors to exchange data. Nowadays home systems are equipped with computing and information technology that gives them smartness and intellect. Since doors are the gateway to our homes, it is necessary to make them more secure. Currently available mechanisms for providing secure access to doors include bare-metal locks and some smart locking systems, whose performance can be evaluated on the basis of identification accuracy, intrusiveness, and cost. Here we introduce an approach to providing secure access to the home: a smart doorbell that is a cost-effective alternative to its current counterparts. Our system connects WiFi-enabled Android devices to a Firebase server through a Raspberry Pi, enabling the user to answer the door when the doorbell is pressed. It learns to identify new users by using face recognition as a unique identity to authenticate each individual.
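The face-recognition step above typically works by encoding each face as a numeric vector and matching visitors against enrolled users by distance. Below is a minimal sketch of that matching logic under common assumptions: the 0.6 distance threshold is a typical default for dlib-based encodings (as used by the `face_recognition` library), and the 3-D vectors are toy stand-ins for real 128-D encodings.

```python
# Sketch of face matching by embedding distance. The enrolled
# dictionary, threshold, and toy 3-D vectors are illustrative;
# real encodings would come from a face-encoding model.
import math

MATCH_THRESHOLD = 0.6  # typical default for dlib-style face encodings

def distance(a, b):
    """Euclidean distance between two face-encoding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(known_faces, encoding, threshold=MATCH_THRESHOLD):
    """Return the name of the closest enrolled user, or None if no match."""
    best_name, best_dist = None, threshold
    for name, known in known_faces.items():
        d = distance(known, encoding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

enrolled = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.1, 0.5]}
print(identify(enrolled, [0.12, 0.21, 0.29]))  # close to alice's encoding -> alice
print(identify(enrolled, [5.0, 5.0, 5.0]))     # stranger, no match -> None
```

Enrolling a new user then amounts to storing one more name-to-encoding entry, which matches the system's ability to "learn to identify new users."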
If your plant is happy, you are happy, so you are productive. Research has shown that interior landscaping has substantial effects on reducing stress, making a more productive environment, and producing a feeling of well-being. In addition, indoor plants help improve air quality and lower background noise, making a place comfortable while putting people in closer touch with nature. Also, the better you care for your plants, the more you contribute to a global change that benefits everybody, even in a social way. Any citizen can become a plant lover. All functionality centers on a mobile application and a synthetic plant that captures live data coming from sensors plugged into the soil next to the natural plant you want to take care of. With that information, we would know the health status of the plant and therefore its happiness. The app lets you share with friends near your location the achievements you unlock for different plant species, rewarding good treatment of plants in a playful way to motivate participation. So there is a community that enjoys plant care as much as you do, and you gain Humanity Points!
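One way the soil-sensor readings could be turned into a happiness score is sketched below. The per-species ideal ranges and the linear scoring rule are purely illustrative assumptions, not the app's actual model.

```python
# Hypothetical plant-happiness scoring: each reading earns 1.0 inside
# its species' ideal range and decays linearly as it drifts away.
# Species ranges and the decay rule are illustrative assumptions.

IDEAL_RANGES = {
    # species: ideal intervals for soil moisture (%) and temperature (C)
    "basil": {"moisture": (40, 60), "temperature": (18, 27)},
    "cactus": {"moisture": (5, 20), "temperature": (20, 35)},
}

def metric_score(value, low, high):
    """1.0 inside the ideal range, decaying linearly to 0 outside it."""
    if low <= value <= high:
        return 1.0
    span = high - low
    overshoot = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - overshoot / span)

def happiness(species, moisture, temperature):
    """Average the per-metric scores into a 0-100 happiness value."""
    ranges = IDEAL_RANGES[species]
    scores = [
        metric_score(moisture, *ranges["moisture"]),
        metric_score(temperature, *ranges["temperature"]),
    ]
    return round(100 * sum(scores) / len(scores))

print(happiness("basil", 50, 22))   # everything in range -> 100
print(happiness("cactus", 50, 22))  # far too wet for a cactus -> 50
```

A score like this maps naturally onto the app's playful achievements: keep a species above some happiness level for a week and unlock a badge.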
Vikkor is a handy device that can help you instantly translate your voice to other languages. But that’s not all! Vikkor can listen to your voice when you speak another language and give you suggestions to speak correctly and fluently. With Vikkor you can both go global and speak local by yourself!