AR.CAR

Interaction design for Augmented Reality

Problems addressed

Placing the full scale car in AR

Switching parts through menu

Clicking on individual parts

Key Contribution

Instructions in the 3D space

Showing menu in a landscape

Point and shoot to allow hover

Scroll to see the whole journey below

Note: The car itself was designed and modelled by the DEs and JDEs of the IITB Racing team; I only textured and rendered it. This project deals with exploring the car in Augmented Reality, not with the design of the car itself.

INTRODUCTION

Background: Mentorship Project

Under the mentorship of an AR designer at Google, the project was completed over six weeks, spanning competitor analysis, problem identification, prototyping, and suggesting scalable guidelines. Owing to COVID-19, testing was done through convenience sampling with relatives (emergent AR users).

About Augmented Reality

Augmented Reality (AR) is about placing virtual objects in the real world that can be manipulated in real time through an interface. AR has fascinated me with its immersive experience and an interface that goes beyond the 2D screen, towards tangible interactions and real-life metaphors.

Goals: Problem solving for AR

Through this project, the main aim was to learn about interaction design problems in AR and to explore solutions to them. Learning to prototype for AR and understanding the design process for it was another important learning goal.

Solving usability problems in AR

Storytelling (of how stuff works) in AR

Learning prototyping techniques for AR

Icons made by Kiranshastry and monkik from www.flaticon.com, and by me.

Topic: “How Stuff Works?”

The topic given was to tell the story of “How Stuff Works” through the medium of AR. For the scope of this project, I explain how an electric racing car works using handheld AR devices, e.g. a phone or iPad.

Introducing the car and familiarizing the user with it in their world

Highlighting the main components and how they work

The Role of AR in all this?

There are countless AR apps, and AR has become somewhat of a cool thing. Is it even required in some cases, or would a 3D representation on a 2D screen suffice? Why place the object in your real environment and use so much processing power and battery?

Understanding scale

Seeing a full scale car in your own world creates an authentic experience.

3D spatial relations

Better understanding of the spatial relations as the car is in your own frame of reference.

Active exploration

User becomes an explorer, actively interacting with the car from their own point of view.

STORYBOARDING

Critical User Journey

Based on the product goal defined above, I created a simple, critical path that a user would take to reach that goal. A critical user journey helps cut the clutter and focus on the goal.

Building expectations and helping the user anticipate the context of use, e.g. scale

Naturally introducing the virtual object into the real world and helping the user position it

Menu in screen space vs. world space: how to display text in AR?

How to know what is clickable? Pointing at and selecting the parts.

Designing Scenes not just Screens

Paper prototypes were made, not just as screens but as a storyboard of how the car should be shown through the screens. I had to decide which elements belong in screen space and which in the 3D world space. (The phrase "scenes not screens" is borrowed from Alexandria Olarnyk's Medium article.)

I used these frames with sketched buttons to test with people, checking whether they were able to navigate through the parts and understand the structure of the car.

Screen Elements Reduce Immersion

It is difficult to switch focus from the 2D screen space to the 3D space. As paper could not bring out the 3D aspect of AR, users mostly interacted with the buttons permanently on screen and did not know they could interact with the parts themselves, due to the lack of clicking affordance in the objects.

Screen space of the side-view mirror

In a side-view mirror, the text is in the screen space, and focusing on it makes us ignore the cars that are inside the reflected 3D world.

World space of the side-view mirror

The cars are in the world space, which goes beyond the surface of the mirror, and focusing on the cars makes us totally ignore the text.

PROTOTYPING

Jumping into Unity for Testing in 3D

Some insights could be drawn from these paper prototypes about how people interact with screen-space elements and whether they could navigate through the different parts of the car. However, to get any deeper insights into how people would interact with AR elements, I had to jump into making quick working prototypes.

Unity 3D

AR Core
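As a rough illustration of what such a quick working prototype involves, here is a minimal tap-to-place sketch, assuming Unity's AR Foundation package with the ARCore provider; the field and object names (e.g. carPrefab) are placeholders for illustration, not the project's actual scripts.

```csharp
// Minimal tap-to-place sketch (assumed AR Foundation APIs; names are placeholders).
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class TapToPlace : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager; // assigned in the Inspector
    [SerializeField] GameObject carPrefab;            // placeholder for the car model

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    GameObject placedCar;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Raycast from the touched screen point against detected planes.
        if (raycastManager.Raycast(Input.GetTouch(0).position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            if (placedCar == null)
                placedCar = Instantiate(carPrefab, hitPose.position, hitPose.rotation);
            else
                placedCar.transform.SetPositionAndRotation(hitPose.position, hitPose.rotation);
        }
    }
}
```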

Identifying Key Problems from Testing

During the COVID-19 lockdown, I asked my parents to use the application. I could get some useful insights as they were not familiar with AR; it made me think from the perspective of someone who doesn't even have a conceptual model of AR in their mind.

After looking at reviews and feedback on existing apps online, I found that there are many such people. I had to make several iterations of the app just for placing the car in the environment, and revisit the journey map to document more specific problems.

Placing large scale objects

Realization text as menu items

What is clickable and how to select?

PLACING OBJECTS

Problem 1: Placing Large-Scale Objects

Users have problems placing large-scale models in AR: they may end up too close to the model or even inside it, the whole model doesn't fit on the screen, and a crowded environment leads to occlusion.

Models end up being too close and large ones block the view

Using a placement indicator makes it slightly better

“Its only a camera.
Only the camera works”

“Can’t see the model,
it only scans the surface”

“What is this?
Ohhh its so big”

Onboarding and opening AR

User may open the app outdoors, indoors or in a vehicle.

If the camera opens directly, the user is forced to scan and place the object

They may be unfamiliar with the context and have a bad first experience

Placing the object

The object being placed is of unanticipated scale and form

The user may place the model too close and end up being inside it

Positioning the 3D object using a 2D screen could be unintuitive.

Insights: Placing Large-Scale Objects

-Users should be familiar with the app and its context before starting AR.

-They may need guidance on when and where to position the model

-Preventing large-scale objects from being placed too close and blocking the view

Onboarding and context

-Getting an idea of the (educational) context

-Getting some anticipation of the scale

-Adding a checkpoint to make the user aware of what might follow and to make it a conscious decision

-Adding a camera icon instead of the usual AR cube, for familiarity and to let users know what exactly is happening

Feedback: A pop-up with a single button made users click it as a reflex action without reading. The current onboarding therefore did not make sense, and some improvements were needed.

-Transitioning to a full screen instead of a pop-up

-Adding an alternative option to see detailed instructions

-Changing the language on the buttons so users check themselves before proceeding

Placement Indicator

Using a full-scale placement indicator makes the user anticipate the scale and, to some extent, the form. It also enables a point-and-shoot metaphor for placing objects in 3D, where the user first points at a spot and then confirms the placement by tapping (a rough code sketch of this appears at the end of this section).

A 2D indicator, as it is lighter for real-time rendering

A top-view silhouette helps anticipate the form without revealing much

A central standing line and dot for a pinning metaphor

A ring contains the elements and ripples out when the car is placed

-The vertical pin is to give a "pinning metaphor" to fix position

-It intends to add more dimension, as a flat 2D element may not fit well into the 3D world

-For the user, it was more of a distraction, hence I decided to drop it

-The car rises up from the silhouette

-The ring animates like a ripple

-This makes the appearance more impactful and real
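Below is a minimal sketch of how this point-and-shoot placement could be wired up in Unity with AR Foundation. The APIs are assumed from AR Foundation/ARCore; the indicator and carPrefab objects are placeholders, and the ripple and rise animations are only indicated by comments, so this is not the project's actual script.

```csharp
// Point-and-shoot placement sketch: point at a spot to preview it with the full-scale
// indicator, then tap to confirm and place the car (assumed AR Foundation APIs).
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlacementIndicator : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] GameObject indicator;   // full-scale top-view silhouette + ring
    [SerializeField] GameObject carPrefab;   // placeholder for the car model

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    bool placed;

    void Update()
    {
        if (placed) return;

        // "Point": raycast from the centre of the screen onto detected planes every frame.
        Vector2 screenCentre = new Vector2(Screen.width * 0.5f, Screen.height * 0.5f);
        bool onPlane = raycastManager.Raycast(screenCentre, hits, TrackableType.PlaneWithinPolygon);

        indicator.SetActive(onPlane);
        if (!onPlane) return;

        Pose pose = hits[0].pose;
        indicator.transform.SetPositionAndRotation(pose.position, pose.rotation);

        // "Shoot": a single tap confirms the pointed position and spawns the car there.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Instantiate(carPrefab, pose.position, pose.rotation);
            indicator.SetActive(false);   // the ring ripple / car rise animation would start here
            placed = true;
        }
    }
}
```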

Trigger-Rule-Feedback

As the user interacts with the 3D space using a 2D phone, I showed instructions spatially in the world space and not in the screen space, to direct the user's attention accordingly and help them explore the 3D world space.

When the user points the phone close to themselves, they see the indicator, the instruction “point further ahead to view the whole car”, and an arrow pointing out of the screen.

As the user points the phone further ahead, the instruction on the horizontal plane of the indicator becomes less legible, and a vertical plane with a new instruction, “Good to go! Tap to place the car”, becomes legible (a small sketch of this idea follows the list below).

-Spatial instructions in the world space choreograph the user

-Text in 3D, when viewed from an angle, is distorted

-Readability is purposely manipulated with horizontal and vertical planes

-Looking down, the horizontal plane with "point further" is readable

-Looking straight ahead, the vertical plane with "tap to place" is readable
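In the prototype, this legibility shift comes mainly from natural foreshortening: text on a plane viewed at a grazing angle distorts on its own. The sketch below makes the same rule explicit by fading two hypothetical world-space instruction canvases based on how squarely the camera faces each plane; the field names and values are illustrative assumptions, not the project's actual implementation.

```csharp
// Angle-based legibility sketch: two world-space instruction planes fade in or out
// depending on how squarely the camera faces them (hypothetical fields and values).
using UnityEngine;

public class AngleBasedInstructions : MonoBehaviour
{
    [SerializeField] Transform horizontalPlane;  // lies flat near the indicator: "point further ahead"
    [SerializeField] Transform verticalPlane;    // stands upright: "Good to go! Tap to place the car"
    [SerializeField] CanvasGroup horizontalText; // text on the horizontal plane
    [SerializeField] CanvasGroup verticalText;   // text on the vertical plane

    void Update()
    {
        Transform cam = Camera.main.transform;

        // 1 when the camera looks straight at a plane, 0 when it looks along it (fully foreshortened).
        float horizontalFacing = Mathf.Abs(Vector3.Dot(cam.forward, horizontalPlane.up));
        float verticalFacing   = Mathf.Abs(Vector3.Dot(cam.forward, verticalPlane.forward));

        // Looking down favours the horizontal instruction; looking ahead favours the vertical one.
        horizontalText.alpha = Mathf.SmoothStep(0f, 1f, horizontalFacing);
        verticalText.alpha   = Mathf.SmoothStep(0f, 1f, verticalFacing);
    }
}
```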

Surface Not Big Enough?

The user may place the car on a surface not big enough to fit the whole car, like a table top. In such a case, the car seems to be floating in the air and the experience is not as immersive.

Giving the car a simple translucent environment of its own makes it look grounded while still being in the real world.

...

MENU ITEMS

Role of Text in AR

How to use text in AR is a challenging topic with a lot of scope. There are numerous problems related to readability and legibility. The role of text in AR can broadly be classified into realization text and comprehension text (refer: Min-Young Kim at ATypI 2019, Tokyo).

Realization text

A small amount of text in the form of menus, labels, and instructions

You may not know when and where it will pop up in AR

You have to realize the text's existence; visibility is a bigger issue than readability

Comprehension text

A large amount of text in the form of passages

You will know when and where it pops up due to its volume

Readability will be a greater issue.

Problem 2: Realization Text as Menu Items

In this project, I had to display the different subsystems of the car that the user could toggle between. From the previous insights, it was clear that these had to be in the world space for a more immersive experience. I referred to existing applications like Google ARCore services, Angry Birds AR, and Knightfall AR to identify problems and existing practices. The umbrella problem of visibility has several factors affecting it:

-The text element could be outside the screen

-The real-world background is dynamic and cluttered

-Text can be viewed from different angles, leading to distortion

-Text elements next to each other may cover each other from an angle

Insights: Realization Text as Menu Items

-Text should be placed in such a way that it is not hidden by any other object or other neighboring text.

-Text should belong somewhere in the world space, like a real or virtual surface with material, instead of floating randomly.

-Text should be made visible against different backgrounds and from different angles; taking inspiration from billboards could help.

Connecting Menu Items to the Model

Taking the first insight further, I decided to make the menu items a part of the whole system of the car and its virtual plane. They are closely linked to the car and have a specific position in space instead of floating.

-The items are arranged as billboards standing behind the car

-The lines dripping down from them help lead the eye towards them even if they are outside the screen

-The lines also intend to provide some affordance of dropping down, connecting with the ring around the car, and affecting the car in some way

Arrangement for Maximum Visibility

Addressing the second insight, the menu items were arranged in such a way as to prevent them from covering each other, just like the leaves on a tree arrange themselves so that the maximum number of leaves are exposed to sunlight (a layout sketch follows the list below).

-The billboards are not placed parallel to each other but slightly irregularly

-The ones in the middle are higher and also further behind on the z-axis

-Even when viewed at an angle, they are visible

-The translucent billboards also reveal some of what is hidden behind them
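The small layout sketch below illustrates this staggered arrangement; the counts, offsets, and field names are illustrative guesses rather than the project's actual values, but it shows the rule of lifting and pushing back the middle billboards and giving each panel a slightly irregular yaw.

```csharp
// Staggered billboard layout sketch: middle menu items are lifted and pushed further back
// so panels do not cover each other when viewed at an angle (illustrative values only).
using UnityEngine;

public class MenuBillboardLayout : MonoBehaviour
{
    [SerializeField] Transform carAnchor;      // the ring around the placed car
    [SerializeField] Transform[] billboards;   // one translucent panel per subsystem

    [SerializeField] float spacing = 0.9f;     // metres between neighbouring panels
    [SerializeField] float baseHeight = 1.2f;  // height of the edge panels
    [SerializeField] float midLift = 0.4f;     // middle panels sit higher...
    [SerializeField] float midPushBack = 0.6f; // ...and further behind on the z-axis

    void Start()
    {
        int n = billboards.Length;
        float half = (n - 1) / 2f;

        for (int i = 0; i < n; i++)
        {
            // t is 0 at the edges of the row and 1 in the middle.
            float t = half > 0f ? 1f - Mathf.Abs(i - half) / half : 1f;

            Vector3 local = new Vector3(
                (i - half) * spacing,          // spread left to right behind the car
                baseHeight + t * midLift,      // raise the central panels
                2f + t * midPushBack);         // push the central panels further back

            billboards[i].position = carAnchor.TransformPoint(local);

            // A slightly irregular yaw so neighbouring panels never sit perfectly parallel.
            billboards[i].rotation = carAnchor.rotation * Quaternion.Euler(0f, (i - half) * 8f, 0f);
        }
    }
}
```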

...

CLICKING AFFORDANCE

Problem 3: What Is Clickable and How?

As observed in the rudimentary testing, users were not aware whether the 3D objects in the world space could be interacted with, or what would happen if they tapped on them. The use case in this project was for users to click on a part to know more about it or, in some cases, to view it in an exploded view.

Apart from the affordance, it is difficult to control objects in 3D space using a 2D screen. The positions of the objects are not fixed with reference to the screen, and accidental taps on the screen can lead to errors.

A point-and-shoot metaphor could be used, especially in the case of multiple objects with different functions.

Point and Shoot vs Tap and Drag

To test and compare the point-and-shoot metaphor with tap-and-drag, I used the existing working prototypes made for placing the objects. I reduced the scale of the car so that the previous problem would not interfere with the findings and the selection interactions could be tested on their own.

Tapping to place the car and then selecting and dragging to position

-Added a move icon at the base with a blue glow to indicate the affordance of selecting and moving on the plane

-Users were still not sure if they could select and move the object, as dragging on the 2D surface to move the model in 3D was confusing

Pointing and deciding the position of the object and then tapping to place

-The connection to the real world was easy to understand, as users move the phone in the 3D world itself to point and position

-The placement indicator creates anticipation of what is going to happen. Placing the object this way was faster than the previous method.

Point and Shoot for Selection: Insights

-Point and shoot is better for understanding the 3D world

-Pointers can be used for a hover effect to create anticipation

-Accidental taps need to be prevented

Hovering and Selecting with a Raycast Reticle

-The user taps the red icon on the right to enable the reticle

-The user then points the reticle at a part, such as the battery

-The battery then explodes partially to signify its affordance

-The user may finally tap anywhere to view the battery in an exploded view (a rough sketch of this interaction follows)
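The sketch below is a minimal take on this reticle interaction, assuming each clickable part has a collider. ExplodablePart is a hypothetical component invented here for illustration (each class would normally sit in its own file in Unity); it is not the project's actual script.

```csharp
// Reticle hover/select sketch: a ray from the screen centre hovers over car parts,
// previews the exploded-view affordance, and a tap opens the exploded view.
using UnityEngine;

public class ReticleSelector : MonoBehaviour
{
    [SerializeField] Camera arCamera;          // the AR camera
    [SerializeField] float maxDistance = 5f;   // how far the reticle ray reaches

    public bool reticleEnabled;                // toggled by the red icon's UI button
    ExplodablePart hovered;

    void Update()
    {
        if (!reticleEnabled) return;

        // Hover: cast a ray through the centre of the screen into the 3D scene.
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        ExplodablePart hit = Physics.Raycast(ray, out RaycastHit info, maxDistance)
            ? info.collider.GetComponentInParent<ExplodablePart>()
            : null;

        if (hit != hovered)
        {
            if (hovered != null) hovered.PreviewExplode(false); // collapse the previous part
            if (hit != null) hit.PreviewExplode(true);          // partial explode = clickability cue
            hovered = hit;
        }

        // Select: while a part is hovered, a tap anywhere opens its full exploded view.
        if (hovered != null && Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            hovered.ShowExplodedView();
    }
}

// Hypothetical per-part component: offsets the part outward to preview or show the exploded view.
public class ExplodablePart : MonoBehaviour
{
    [SerializeField] Vector3 explodeOffset = new Vector3(0f, 0.2f, 0f);
    Vector3 homePosition;

    void Awake() { homePosition = transform.localPosition; }

    public void PreviewExplode(bool on)
    {
        transform.localPosition = homePosition + (on ? 0.3f * explodeOffset : Vector3.zero);
    }

    public void ShowExplodedView()
    {
        transform.localPosition = homePosition + explodeOffset;
    }
}
```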

...

CONCLUSION

Learnings and Outcome

It was a great learning experience that helped me learn from my mentor about things related to AR and beyond. I understood current trends in AR with respect to use cases, interaction techniques, and technological limitations. I learnt to think from the user's perspective, especially that of users unfamiliar with AR. I also learnt to prototype in AR using the Unity engine with the AR Foundation and ARCore plugins and C# scripts. Apart from this, I learnt the importance of storytelling and communication.

