FaceMatch

1st prize €1,000+
Client
AWS
Work
App
AI
Stack
JavaScript
Python
Collaborators
🇮🇳 Mohit Ahuja
Timeline
24 hours

For the Amazon Web Services (AWS) Hackathon at The Future of High Tech, part of Startup Fest Europe 2017, I developed FaceMatch, a mobile app that uses deep learning-based facial detection and displays the results in an augmented reality heads-up display. I ended up winning the hackathon’s grand prize of €1,000 along with VIP tickets for Startup Fest Europe.

View GitHub repo →

The idea is simple: if you enter a large conference or a room full of people, you want to know who you should network with. FaceMatch uses AI to understand faces and pulls their relevant info from their LinkedIn profiles. This means you can essentially point your phone at someone and get information like their age, gender, expression, designation, and company, along with a link to their LinkedIn profile.
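As a rough sketch of how the heads-up overlay text could be assembled: the `AgeRange`, `Gender`, and `Emotions` fields below match the shape of a real Rekognition `DetectFaces` response, but the `profile` argument (a name/title/company record from LinkedIn) is a hypothetical lookup result for illustration.

```javascript
// Format the AR overlay text from one FaceDetail in a Rekognition
// DetectFaces response. `profile` is an assumed LinkedIn lookup result.
function hudLabel(faceDetail, profile) {
  // Pick the emotion Rekognition is most confident about
  const topEmotion = [...faceDetail.Emotions].sort(
    (a, b) => b.Confidence - a.Confidence
  )[0];
  const lines = [
    `${faceDetail.Gender.Value}, ${faceDetail.AgeRange.Low}-${faceDetail.AgeRange.High} years, ${topEmotion.Type.toLowerCase()}`,
  ];
  if (profile) {
    lines.unshift(`${profile.name}, ${profile.title} at ${profile.company}`);
  }
  return lines.join('\n');
}
```

In the app, one such label would be rendered next to each detected face in the camera view.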

I also added a second feature for users who are visually impaired or blind. FaceMatch View reads out the information of the people it recognizes, so if you’re partially or completely blind, you can understand who’s around. For example, ‘Anand Chowdhary is in the frame, and he’s 19 years old, CEO/Product at Oswald Foundation’ will be read out.
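A minimal sketch of how the spoken announcement could be built, assuming a simple `person` record; the sentence format mirrors the example above, and `responsiveVoice.speak()` is ResponsiveVoice’s real entry point:

```javascript
// Build the FaceMatch View announcement for a recognized person.
// The `person` object shape is an assumption for illustration.
function announcePerson(person) {
  const pronoun = person.gender === 'Female' ? "she's" : "he's";
  return `${person.name} is in the frame, and ${pronoun} ${person.age} years old, ${person.title} at ${person.company}`;
}

// In the app, the sentence is handed to ResponsiveVoice:
// responsiveVoice.speak(announcePerson(person));
```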

It also uses object and scene detection, so you can just point your camera in the direction you’re walking, and it will tell you what’s around. For example, ‘In this scene, there is: road, lamppost, footpath, grass, tree’ will be read out to you. For reading out, I used ResponsiveVoice, an instant text-to-speech and speech synthesis library that ensures voice consistency across platforms.
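The scene description above can be sketched as a small formatter over a Rekognition `DetectLabels` response. The `Name` and `Confidence` fields match the real API response shape; the 70% confidence cut-off is an arbitrary assumption:

```javascript
// Turn Rekognition DetectLabels results into the spoken scene description.
function describeScene(labels) {
  const names = labels
    .filter((l) => l.Confidence >= 70) // drop low-confidence guesses
    .map((l) => l.Name.toLowerCase());
  return `In this scene, there is: ${names.join(', ')}`;
}

// The sentence is then read aloud with ResponsiveVoice:
// responsiveVoice.speak(describeScene(response.Labels));
```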

FaceMatch uses Amazon Rekognition, a service that lets you quickly add sophisticated deep learning-based visual search and image classification to apps. I used the Face Comparison, Facial Analysis, and Object and Scene Detection APIs.
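For instance, Face Comparison would match a camera frame against a stored reference photo. The sketch below builds the request shape for Rekognition’s `CompareFaces` API: `SourceImage`, `TargetImage`, and `SimilarityThreshold` are the real parameter names, while the byte buffers and the threshold value are placeholders.

```javascript
// Request parameters for Rekognition's CompareFaces API.
// The image bytes and threshold here are placeholder assumptions.
function compareFacesParams(cameraFrameBytes, referencePhotoBytes) {
  return {
    SourceImage: { Bytes: cameraFrameBytes },    // face from the live camera
    TargetImage: { Bytes: referencePhotoBytes }, // stored reference photo
    SimilarityThreshold: 80,                     // only report matches >= 80%
  };
}

// With the AWS SDK for JavaScript, the call would look like:
// new AWS.Rekognition().compareFaces(compareFacesParams(src, tgt), callback);
```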