InterviewBot is a web-based application designed to help students prepare for video or in-person interviews when applying for jobs. As second-year students seeking internship placements, we could not find a tool that would help us prepare for interviews, so InterviewBot gives real-time feedback on interview-style questions.

The motivation behind this project stemmed from our own personal experiences. Pranay had recently had his first telephone interview with a firm for a summer internship. Upon completing it, he realised that he had forgotten to mention a few key points! Furthermore, as it was his first encounter with such an experience, he was unsure how he had performed. With Sam and Roneel also looking into summer internships and opportunities, we believed that InterviewBot would not only benefit us personally but would have a similar impact on other students in our position, as we were unaware of any system offering the same services!

InterviewBot is designed as a practice interview session: the computer asks interview-style questions and, based on your facial expressions and response (captured through the webcam and microphone), the application reports on several levels, including emotion levels and camera feedback.

InterviewBot was built on three Cognitive Services:

1) Emotion API: reports levels of positivity and negativity in the user's response. It does this by grabbing a screenshot of the user's face through the webcam and analysing it.

2) Bing Speech API: converts the user's speech to text, so the user receives a written transcript of the interview once the session has finished. This is extremely useful, as users can trace back to moments where they may have lacked confidence, or use the transcript to recall the questions they were asked.
3) Text Analytics API: gives feedback on the sentiment of the user's speech and reports it back in real time.

Although the Microsoft challenge stated that we should aim to use one of the APIs in the Vision Cognitive Services package, we decided to broaden our knowledge and push ourselves outside our comfort zone. As second-year students, we had not been exposed to APIs of this complexity; however, after researching and thoroughly working through the API documentation, we managed to implement not one but three APIs, all running in parallel!

InterviewBot is structured so that each user has their own account, accessed through a login page upon visiting the website. The page is built from an HTML and CSS template and backed by our MySQL server, with PHP handling parsing and sessions.

Another key feature we implemented was a "remember" list, which lets the user write down the points they wish to mention during the interview; these are automatically ticked off the list when mentioned. We also gathered some common interview questions and had the application loop through them in a random order, reading each one aloud (through the user's audio) using the browser's SpeechSynthesisUtterance API.

After speaking with various people throughout the hackathon and collectively condensing their ideas, the expansion of this concept is almost limitless. Not only can it be a tool for students to get real-time feedback on their performance; companies could also use it to assess their candidates, with the written transcript allowing employers to dissect an interview in detail.

A YouTube video of the application in action can be found through this link: https://www.youtube.com/watch?v=xAHucFbN_c8

Not only did we win an Xbox One X (each!), but we also had the privilege of writing an article on Microsoft's Developers Blog.
The link for this can be found below: https://blogs.msdn.microsoft.com/uk_faculty_connection/2018/01/04/oxfordhack-winners-of-the-microsoft-cognitive-challenge/
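As a rough illustration of the remember-list feature described above — automatically ticking points off once they appear in the speech transcript — the matching logic might look like the following sketch (the function and field names are ours for illustration, not taken from the actual InterviewBot code):

```javascript
// Hypothetical sketch of the "remember" list: given the running
// speech-to-text transcript, tick off any points already mentioned.

function normalise(text) {
  // Lower-case and strip punctuation so matching is forgiving.
  return text.toLowerCase().replace(/[^a-z0-9\s]/g, " ");
}

function updateRememberList(points, transcript) {
  const spoken = normalise(transcript);
  // A point counts as "mentioned" once every word in it has
  // appeared in the transcript so far.
  return points.map((point) => ({
    text: point.text,
    ticked:
      point.ticked ||
      normalise(point.text)
        .split(/\s+/)
        .filter(Boolean)
        .every((word) => spoken.includes(word)),
  }));
}

// Example: two points, one already covered by the answer so far.
const points = [
  { text: "team project", ticked: false },
  { text: "Python internship", ticked: false },
];
const transcript = "I led a team project at university last year.";
const updated = updateRememberList(points, transcript);
```

A real implementation would probably want fuzzier matching (stemming, synonyms) fed by the live Bing Speech transcript, but this captures the idea.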
Names: Pranay Mistri, Samuel Littlefair, Roneel Bhagwakar.

We are three second-year Computer Science students from the University of Manchester. We attended OxfordHack 2017 in November, where we competed in the Cognitive Services challenge held by Microsoft. We developed InterviewBot, a web-based application that helps students prepare for interviews, and after 24 hours of developing and competing against 52 other teams, we won the challenge with our concept!

The challenge was to implement one of the Vision APIs; however, we implemented three APIs running in parallel: the Emotion API, the Bing Speech API and the Text Analytics API. It was quite challenging to get all three APIs up and running in real time, as well as parsing and displaying their output appropriately, but we just about managed it. Pranay worked on the front-end design, layout and branding, whilst Roneel and Sam set up the backend. After a night of no sleep and copious amounts of coffee, we ended up with a working product: InterviewBot asks 12 questions, listens to your speech pattern, analyses your facial expressions in real time, and gives you a breakdown of your overall score when you're finished, along with a complete transcript.

Once all the teams had finished their presentations, a few were extremely impressive, using the technologies well and sticking to the core of the challenge. We honestly didn't think we had a chance of winning, and this wasn't helped by our demonstration failing earlier on. However, as the Microsoft panel got on stage, they started describing our concept, which cued a few bemused stares between our team, still unsure whether they were talking about us. After announcing that InterviewBot had won their challenge, we were stunned and went up to the stage to collect our prizes. Twenty-four hours of breakthroughs and breakdowns made for an extremely challenging yet fun event.
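The question loop mentioned above — picking interview questions in a random order and reading each one aloud via SpeechSynthesisUtterance — could be sketched roughly as follows. The question list and function names are our own illustrative assumptions, not the app's real question pool:

```javascript
// Hypothetical sketch of the question loop: shuffle a pool of
// interview questions, then read each one aloud with speech synthesis.

const QUESTIONS = [
  "Tell me about yourself.",
  "Why do you want this role?",
  "Describe a challenge you overcame.",
];

// Fisher-Yates shuffle, returning a new array so the pool is untouched.
function shuffle(items) {
  const result = items.slice();
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

// Speak one question; resolves when the utterance finishes.
// Guarded so the module also loads outside a browser.
function askQuestion(text) {
  return new Promise((resolve) => {
    if (typeof speechSynthesis === "undefined") return resolve();
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.onend = resolve;
    speechSynthesis.speak(utterance);
  });
}

async function runSession() {
  for (const question of shuffle(QUESTIONS)) {
    // In the app itself, listening to the answer and running the
    // emotion/sentiment analysis would happen between questions.
    await askQuestion(question);
  }
}
```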
Overall, OxfordHack taught us a great deal about ourselves: we identified our own strengths and weaknesses, improved our teamwork and communication skills, and, most importantly, learned how to use new technologies and languages. The Microsoft challenge allowed us to use professional APIs to develop an idea we thought would be useful to a range of people, which was extremely rewarding.