Emojat

Emojat is an Android messaging app designed for use in an automotive context. It uses speech control and facial expression recognition to adapt the user interface for driving and to let you enter emojis easily, without needing your hands at all.

Tech stack: Java, Android, Google Speech API, Affdex SDK (Affectiva)

As part of my bachelor thesis at Bertrandt Ingenieurbüro GmbH Tappenbeck, in the department for infotainment and operating concepts, I researched new concepts, ideas and improvements for human-car interaction. This research led me to the conclusion that, while functions already exist to dictate and send messages by speech control while driving, there is still no option for adding emojis, which have become an inseparable part of messaging apps. Emojis are part of most people's language when communicating via text messages and give them a way to express their emotions and feelings more clearly and distinctly. For this reason, I developed concepts that give the driver an easy and, above all, safe way to enter emojis with a minimum of distraction. Among the various approaches, I concluded that a concept using one's own facial expression to enter an emoji would meet the requirements best: since emojis are themselves based on facial expressions, it stands to reason to mimic and select them with one's own expression.
To test and evaluate the concept, I designed a user interface and developed an Android application. The app has two modes: one for normal use (as a pedestrian) and one for the driver.




In normal mode the app operates like a usual messenger app. In driver mode, interaction with the app relies on voice commands instead of touch input. You create a message simply by speaking it out loud; the spoken words are converted directly into message text using the Google Speech API. This way, you need neither your hands nor your eyes, and the level of distraction is kept to a minimum. To add an emoji to your message, you simply say the word "Emoji" out loud. The app then starts facial expression recognition using the Affdex SDK by Affectiva.
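The following is a minimal sketch of how driver-mode dictation could be wired up with Android's standard SpeechRecognizer, which is backed by Google's speech service. It is not the app's actual implementation: the keyword handling and the helper methods startExpressionRecognition(), sendMessage() and appendToMessage() are hypothetical hooks used here only to illustrate the flow described above.

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

/** Sketch of driver-mode dictation: spoken words become message text,
 *  while the keywords "Emoji" and "send" trigger special actions. */
public class DriverModeSpeechInput {

    private final SpeechRecognizer recognizer;

    public DriverModeSpeechInput(Context context) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (matches == null || matches.isEmpty()) return;
                String spoken = matches.get(0).trim();

                if (spoken.equalsIgnoreCase("emoji")) {
                    startExpressionRecognition();   // hand over to facial-expression input
                } else if (spoken.equalsIgnoreCase("send")) {
                    sendMessage();                  // send the composed message
                } else {
                    appendToMessage(spoken);        // dictated words become message text
                }
            }
            // Remaining callbacks are not needed for this sketch.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    /** Starts listening for the next utterance. */
    public void listen() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    // Hypothetical hooks into the rest of the app:
    private void startExpressionRecognition() { /* start the camera-based detector */ }
    private void sendMessage() { /* hand the draft to the messaging backend */ }
    private void appendToMessage(String text) { /* append text to the draft message */ }
}
```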



If you then hold a facial expression for a short time, the app loads the corresponding set of emojis. If you make a laughing face, for example, it shows all laughing emojis; a sad expression shows all sad emojis, and so on. In total, 11 expressions can be recognised: "neutral", "smiling", "laughing", "winking", "kissing", "stuck out tongue", "screaming", "flushed", "smirking", "sad" and "angry". Once the expression has been recognised, the driver selects one of the emojis from the corresponding set by voice command, and it is added to the message. You can then continue dictating your message or send it by simply saying the word "send".
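To illustrate how a recognised expression could drive the emoji selection, here is a small sketch of such a mapping. The Expression enum and the concrete emojis per set are purely illustrative assumptions, not the app's actual lists; the real app maps the expression reported by the Affdex SDK to its own curated sets.

```java
import java.util.Arrays;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

/** Sketch: maps a recognised facial expression to the emoji set shown to the driver. */
public class EmojiSets {

    /** The 11 expressions the app distinguishes. */
    public enum Expression {
        NEUTRAL, SMILING, LAUGHING, WINKING, KISSING, STUCK_OUT_TONGUE,
        SCREAMING, FLUSHED, SMIRKING, SAD, ANGRY
    }

    private static final Map<Expression, List<String>> SETS = new EnumMap<>(Expression.class);
    static {
        SETS.put(Expression.NEUTRAL,          Arrays.asList("😐", "😑", "🙂"));
        SETS.put(Expression.SMILING,          Arrays.asList("😊", "😌", "☺️"));
        SETS.put(Expression.LAUGHING,         Arrays.asList("😀", "😁", "😂", "🤣"));
        SETS.put(Expression.WINKING,          Arrays.asList("😉", "😜"));
        SETS.put(Expression.KISSING,          Arrays.asList("😘", "😗", "😚"));
        SETS.put(Expression.STUCK_OUT_TONGUE, Arrays.asList("😛", "😝"));
        SETS.put(Expression.SCREAMING,        Arrays.asList("😱", "😨"));
        SETS.put(Expression.FLUSHED,          Arrays.asList("😳"));
        SETS.put(Expression.SMIRKING,         Arrays.asList("😏"));
        SETS.put(Expression.SAD,              Arrays.asList("🙁", "😢", "😭"));
        SETS.put(Expression.ANGRY,            Arrays.asList("😠", "😡"));
    }

    /** Returns the emoji set shown once an expression has been held long enough. */
    public static List<String> forExpression(Expression e) {
        return SETS.getOrDefault(e, SETS.get(Expression.NEUTRAL));
    }
}
```

The driver then picks one emoji from the returned set by voice command, as described above.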
To introduce the user to the app's functions and possibilities, a few introduction slides are shown when the app is started for the first time.




After completing the application, I conducted a survey to evaluate the concept. Participants were asked to rate the app's usability and interaction, the level of perceived distraction, and whether they consider the concept a meaningful extension of today's car infotainment systems. The feedback confirmed the concept on all points. However, although the majority (82%) considered the distraction from traffic to be within acceptable limits, it could be reduced further by shorter and more precise facial expression recognition. ■