Smart Maps
A gesture-controlled navigation app that responds to in-air gestures
Gesture Design
Gesture Study
Primary / Secondary Research
User Experience Design
User Interface Design
Video Shooting
Video Demo

ROLE

Solo
Project

TIME

Oct. 2021
Six Weeks

TOOL

Figma
Premiere
Blender (3D scene)
Adobe Audition

Gestural interfaces enable users to interact with digital devices directly, sometimes without any physical hardware to touch: our fingers, hands, arms, and even our whole bodies become the interface to a screen, an object, or a responsive environment. Compared with tapping keys on a keyboard, gestural interfaces can include a much wider range of actions, and can feel so natural that they seem ‘invisible’ to the user.

Challenge

People nowadays rely heavily on navigation apps: they no longer have to memorize every road in the city, and they can reach any place they want through a mobile navigation app. However, they also struggle to use the app while driving, because interacting with the phone pulls their attention away from the road.

GOAL

How might we reduce phone distractions so that people can focus more on the road?

product

smart maps - video demonstration (3:35 min)

Visual and logo animation of Smart Maps (the animation may take a moment to load)

DESIGN
PROCESS

The following walks through the project's four design phases and their stages, from Stage Zero through Stage Thirteen: building an end-to-end, immersive user experience from zero to one.

feature

The product focuses on five key functions, the ones we use most often in a navigation app:

· Choose which route to take
· Zoom in / Zoom out of the map
· View upcoming directions
· Pan around the map
· Center Navigation
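
To make the scope concrete, here is a minimal sketch of how these five functions could map to recognized gestures. The gesture names below are hypothetical placeholders for illustration, not the gestures the project actually settled on.

```python
# Hypothetical gesture-to-function mapping, for illustration only;
# the project's real gestures were designed and tested in later stages.
GESTURE_COMMANDS = {
    "swipe_left_or_right": "choose which route to take",
    "pinch_out": "zoom in on the map",
    "pinch_in": "zoom out of the map",
    "raise_open_palm": "view upcoming directions",
    "drag_with_closed_fist": "pan around the map",
    "tap_in_air": "center navigation",
}

def dispatch(gesture: str) -> str:
    """Look up which of the five core functions a recognized gesture triggers."""
    return GESTURE_COMMANDS.get(gesture, "ignore: unrecognized gesture")
```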

Stage zero

analysis: Secondary research and observation research

Secondary Research

Researchers have found that seeing gestures representing actions enhances understanding of the dynamics of a complex system, as revealed in invented language, gestures, and visual explanations. Gestures can map many meanings more directly than language, representing many concepts congruently. Designing and using gestures congruent with meaning can augment comprehension and learning.



Another study suggests that the design and selection of 3D-modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. For example, sign language interpreters have extensive and unique experience forming hand gestures, and many suffer from hand pain while gesturing.

The design of hand gestures should follow principles of natural language and consider the discomfort and fatigue associated with distinct hand postures and motions.

Source: https://pubmed.ncbi.nlm.nih.gov/26028955/

Observation Research

In daily life, gestures are so instinctual that they can be hard to capture; we tend not to pay them much attention, so they go unnoticed. To study them, I documented both my conscious and unconscious behaviors by recording myself working on a tablet for one hour, looking for patterns I could learn from.

Stage one

Designing and testing gestures & Representing gestures

After studying gesture language and deciding which functions to focus on, I sketched out the gestures and used my own hand to simulate them. I also interviewed people for comments on the gestures: Is each one understandable? Is it too hard to perform? After testing the design of the gestures, I sketched out different visual styles for representing them, to test which style is most understandable to users.

In the end, I chose to stick with style No. 4, which was the most understandable style to the participants I interviewed.

TAKEAWAY

Based on the feedback from user testing, style No. 4 is the most understandable and recognizable style for representing a gesture.

Stage two

Echo and Semantic feedback

Feedback is an important part of the user experience. Echo feedback lets users know what they are doing without activating anything in the interface, while semantic feedback gives users a direct reaction after they perform an action: for example, when a user taps a button on a screen, the button changes color to signal that the action has taken place and has been understood by the system.
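
To make the distinction concrete, here is a minimal Python sketch (purely illustrative, not part of the app): echo feedback only signals that a gesture is being seen, while semantic feedback changes the map state and confirms the action.

```python
from dataclasses import dataclass

@dataclass
class MapState:
    zoom: float = 1.0

def echo_feedback(hand_detected: bool) -> str:
    # Echo feedback: tell the user their hand is being tracked,
    # without activating anything in the interface.
    return "show hand outline" if hand_detected else "hide hand outline"

def semantic_feedback(state: MapState, gesture: str) -> str:
    # Semantic feedback: react to a completed gesture by changing state
    # and confirming that the system understood the action.
    if gesture == "pinch_out":  # hypothetical gesture name
        state.zoom *= 1.2
        return f"zoom -> {state.zoom:.1f} (animate map scaling as confirmation)"
    return "no action"
```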

Stage three

Task Flow Diagrams

A user flow is a diagram that describes the path a prototypical user takes through a website, app, or kiosk to complete a task. I drew three diagrams that cover all five functions in Smart Maps to describe the user actions; a simple sketch of one of these flows follows the list below.

·  Navigate the directions (View upcoming directions | Direction list)
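
As a simple illustration of one of these flows, the sketch below models "view upcoming directions" as states and gesture-driven transitions. Both the state names and the gesture names are approximations for illustration, not labels taken from the actual diagrams.

```python
# Hypothetical state machine for the "view upcoming directions" flow.
FLOW = {
    "navigating":     {"raise_open_palm": "direction_list"},
    "direction_list": {"swipe_up": "direction_list",    # scroll the list
                       "swipe_down": "direction_list",
                       "lower_hand": "navigating"},
}

def next_state(state: str, gesture: str) -> str:
    # Stay in the current state when a gesture has no meaning there.
    return FLOW.get(state, {}).get(gesture, state)

assert next_state("navigating", "raise_open_palm") == "direction_list"
```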

Stage four

Researching comparative UI Patterns

People naturally look for patterns. When they arrive on a web or mobile site, they often expect to already know where to click, based on their prior experience. When sites incorporate familiar patterns, users have an easier time getting where they're going or accomplishing their goals. Before jumping into the wireframes for Smart Maps, I explored several existing navigation apps on the market and analyzed their UI patterns for different tasks.

Stage five

Designing an Onboarding Experience

The user onboarding experience is the first experience a person has when they download the product and open it for the first time. It often takes the form of a brief walkthrough that demonstrates how to use the main features of the product in short, simple steps. The main point of a user onboarding flow is to promote user activation and user retention.

·  Teach users how to use your app while they use the app. Let them enjoy the main features of the app straight away.

·  Use clear language that appeals to your audience. Use graphics that express your brand, and animations/images that create a sense of delight. (use wireframes for now - color can come later in the project).

·  Since I'm designing an “in-air” gesture-based app, it's unlikely anyone will already know which gestures to make (unlike tap/swipe/pinch for “on-screen” gestures). So use images or animations of hands to show users which gestures to make for every step of the onboarding process.

·  Include echo and semantic feedback while the user is learning to gesture. After all, they need to learn what the right feedback looks like as well.

·  Give users a way of recalling gestures if they forget. Give them a way of remembering which gesture goes with which function.

Stage six

rough interface design

After gathering all the information I need, I started my first round of wireframing.

The original concept used a pointer as the user's echo feedback: the pointer would respond to the user's actions when the camera caught the user's finger. Soon after meeting with my professor, he gave me feedback on a big mistake I had made: these wireframe screens were designed for touch-screen interaction, not gesture-controlled interaction. The buttons were too small to use, and there were too many elements on one page, distracting the user. More importantly, a pointer might work well on a computer, but not on a small screen like a phone.

Rough Wireframe Drawing (Round 1): a failed attempt to design screens for gesture-controlled interaction
Rough Wireframe (Round 1): a failed attempt to design screens for gesture-controlled interaction

I started my second round of wireframing. This time I simplified the screen to keep only the most important elements (less is more). Instead of a pointer indicating the user's actions, the interface highlights the task the user is performing. The echo feedback works in a different way: when the user makes a gesture, the camera captures and mirrors what the user is doing.

Rough Wireframe Drawing(Round 2)
Rough Wireframe (Round 2)
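
As an illustration of this "capture and mirror" echo feedback, here is a minimal sketch assuming MediaPipe Hands and OpenCV. It is not the project's actual implementation, only one plausible way to show the user what the camera is seeing.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror so the feedback matches the user's view
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Echo feedback: draw the detected hand back onto the screen.
            for landmarks in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("echo feedback", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```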

Stage seven

wireframing

Synthesizing my two rounds of rough wireframes, I started a third, higher-fidelity round. I added more detailed considerations based on feedback from my professor and peer reviews:


Reduce the number of gestures in onboarding
There are too many gestures to onboard users with; they can't remember everything when they first start using the app. Rather than teaching them all at once, setting up a gesture library / help center is more helpful.

A consistent onboarding rather than a separate onboarding
When I first designed the wireframes, I assumed that, since there are many new gestures to learn, walking users through the whole process while teaching them each gesture would help them remember. However, the feedback I received from my professor and peers was the opposite. Their suggestion was: "You don't have to teach all of the gestures at once; setting up a help center will be helpful since users can't remember those gestures quickly."

Figma
Gesture lists

Stage eight

Wizard of Oz Testing

In Wizard of Oz testing, you control what the user sees by clicking through your Figma prototype manually, providing the magic by controlling the tester's ability to ‘activate’ the UI. The goal of this stage is to find out whether a user can complete each task by correctly using my user interface design while also making the correct gestures.

The interviewees were classmates from my class. Since the class was held remotely, we could easily do the Wizard of Oz testing over Zoom. I ran four tests at this stage and received great feedback from both my professor and classmates. In general, most of the gestures were clear and understandable to users. They appreciated that the gestures were designed around movements they often use in life and that relate to the task. There were some points of confusion and suggested improvements around the flow and onboarding, which I noted below:

·  The speed of the transition animations should be adjusted
·  The onboarding shouldn't be too long
·  Don't throw too many gestures at users at once; they can't remember them all
·  The same gesture was repeated with different meanings
·  Some animation problems

Stage nine

Style Guide

A style guide is a smaller part of a design system that consists of a set of rules regarding branding and the visual style of products. It is a low-level abstraction of a design language. A style guide also provides guidelines for the way the brand should be presented from both a graphic and a language perspective. Its purpose is to make sure that multiple contributors create work clearly and cohesively, in a way that reflects the overall style and ensures brand consistency in everything from design to writing.
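
As a small illustration of the kind of rules a style guide encodes, here is a hypothetical set of design tokens; the actual values in the Smart Maps style guide differ.

```python
# Hypothetical design tokens, for illustration only.
STYLE_GUIDE = {
    "colors": {
        "primary": "#1A73E8",
        "surface": "#FFFFFF",
        "on_surface": "#202124",
    },
    "type_scale": {  # font sizes in points
        "title": 24,
        "body": 16,
        "caption": 12,
    },
    "voice": "friendly, concise, action-oriented",
}
```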

Stage ten

final design & prototype

Figma

Stage eleven

Green Screening Video

Green Screen filming is a method used by the film industry to ‘cut out’ actors from their background, so they can be artificially composited into a fabricated scene. A green screen lets you drop in whatever background images you want behind the actors and/or foreground.
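
For readers curious about how the compositing step works, here is a minimal chroma-key sketch using OpenCV; the file names and the green threshold are assumptions, and the project's video was assembled in a video editor (Premiere), not with this code.

```python
import cv2
import numpy as np

# Hypothetical file names, for illustration.
foreground = cv2.imread("actor_on_green_screen.png")  # frame shot against the green screen
background = cv2.imread("fabricated_scene.png")       # e.g. a scene rendered in Blender
background = cv2.resize(background, (foreground.shape[1], foreground.shape[0]))

# Mark pixels that fall inside an approximate "green" range (needs per-shot tuning).
hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))

# Wherever the mask is green, show the background; elsewhere keep the actor.
composite = np.where(mask[..., None] > 0, background, foreground)
cv2.imwrite("composite.png", composite)
```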

Stage twelve

Scriptwriting / Recording

A voice-over script is a tool that helps a reader make a voice recording. The script should introduce the app clearly through three points:

·  WHY: Why is this a problem - and why is this important to me? Why is the product even needed?
·  WHAT: What is the product? (Introduce it very briefly)
·  HOW: How it works and solves the problem defined in WHY.

Stage thirteen

reflection

Designing a gesture-controlled app was a brand-new experience for me. I have always been passionate about VR gesture design, but I had never had the chance to design any. This project opened the door to designing body gestures for me; even though it focused more on palm and finger gestures, I still learned a great deal as I designed them step by step.

If I had more time, I would continue to polish the app, especially the flows for searching, saving a location, editing an address, adding a stop along the way, and so on. For now, the functions are still very limited. As the technology develops, I believe gesture design will play a critical role in the future of interaction design.

Thank you to my professor, who gave me lots of advice, and to everyone who helped me build this project.