From February to May 2021, I worked as the sole interaction designer on a team in Shenzhen, China, investigating and building toward autonomous mobility.
This page documents the very first part of that exploration, with the team leader’s permission.
Part 01: En Route Interactions With Other Road Users
Entry-01: Feb 22, 2021
︎ Research Question-01: What scenarios will the autonomous vehicle encounter while en route?
Through brainstorming and a brief survey, I first listed the following en route scenarios:

To dive deeper into the listed scenarios, I first picked out the scenario “encountering a pedestrian ahead” (pedestrian crossing the road) and analyzed what specifics should be taken into account when designing the human-machine interactions for it.
(Source: Rasouli, Amir & Tsotsos, John)
It turns out that many factors could potentially influence pedestrians’ road-crossing behavior. Since autonomous vehicles are far different from traditional vehicles with a human driver behind the steering wheel, there are also factors that only apply to traditional vehicles but not autonomous ones (greyed out in the image).
I then turned to the question of what additional information an autonomous vehicle should convey in order to ensure safety and build trust.
︎ Research Question-02: What information shall an autonomous vehicle convey to fellow road users?
According to Lasstrom et al., four important messages for an autonomous vehicle to communicate to pedestrians include the following:
︎ Whether the vehicle is driving autonomously
︎ Whether the vehicle noticed the pedestrian
︎ Whether the vehicle will yield
︎ When the vehicle intends to drive
I also went back to the autonomous vehicle’s journey map I had made earlier and highlighted the crucial points for vehicle-to-pedestrian communication.
However, as a driver myself, I thought of how human drivers also wave, smile, or nod at fellow road users, sometimes to communicate intentions and other times just out of courtesy.
How should a “robot” (autonomous vehicle) replace these interactions? How would its display differ when communicating crucial information vs. merely being friendly?...
︎ Research Question-03: What are some ways for an autonomous vehicle to communicate its intention to fellow road users?
I researched existing eHMI (external human-machine interface) designs for autonomous vehicles. The following are some cases I encountered.
eHMIs for autonomous vehicles can be grouped into the following categories: text displays, sign displays, humanoid/objectoid eyes, lights, projections, sound, etc.
Each eHMI method has its own strengths and weaknesses. For example, a humanoid/objectoid pair of eyes may make the vehicle seem more approachable and empathetic; however, it may not communicate information as intuitively and quickly as, say, a display of signs.
Perhaps an autonomous vehicle could use different modes of communication according to the needs of the situation (e.g., expressing courtesy vs. communicating that a stop is being made).
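To make this idea concrete, here is a minimal sketch of how a vehicle might pick an eHMI modality based on what it needs to communicate; the intent categories and the mapping are my own hypothetical illustration, not the team’s final design.

```python
# Hypothetical intent categories and modality choices; the mapping is my own
# illustration, not the team's final design.
from enum import Enum, auto


class Intent(Enum):
    COURTESY = auto()   # e.g., acknowledging a pedestrian's wave
    YIELDING = auto()   # e.g., stopping so a pedestrian can cross
    RESUMING = auto()   # e.g., about to accelerate again


# Friendly intents lean on anthropomorphic cues; safety-critical intents lean
# on explicit signs and lights that read quickly from a distance.
EHMI_CHOICE = {
    Intent.COURTESY: ["humanoid/objectoid eyes", "sound"],
    Intent.YIELDING: ["sign display", "lights", "sound"],
    Intent.RESUMING: ["text display", "lights"],
}


def choose_ehmi(intent):
    """Return the list of eHMI channels to activate for a communication intent."""
    return EHMI_CHOICE[intent]


print(choose_ehmi(Intent.YIELDING))  # ['sign display', 'lights', 'sound']
```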
Entry-02: Feb 25, 2021
︎ Scenario-01: Pedestrian-cross-road
I decided to go back to the “pedestrian-cross-road” situation as a specific case to analyze further, using the information gathered from the research above.
Storyboard-01:
︎ Normal driving state: “autonomous” display with static turquoise light, arrows on wheels
︎ Spots pedestrian: deceleration display with chasing light (turquoise at front, red at back)
︎ Waits for pedestrian to cross: green pedestrian-crossing sign at front with chasing green light, red stop sign with light at back
︎ Accelerating: acceleration display on both screens with chasing turquoise light
Storyboard-02:
︎ Normal driving state: round “eyes”, static turquoise light and arrows (on wheels)
︎ Spots pedestrian: “eyes” turn to pedestrian
︎ Waits for pedestrian to cross: green smiling eyes and light at front, red light at back (+sound)
︎ Accelerating: chasing turquoise light, eyes look to front (+sound)
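To compare storyboards and hand them off for prototyping, the frames can also be written down as data. Below is a minimal sketch encoding Storyboard-01 this way; the field names and shorthand values are my own summary of the frames above, not a finalized spec (Storyboard-02 would swap the sign displays for the “eyes”).

```python
# Storyboard-01 written as data; field names and shorthand values are my own,
# summarizing the frames described above rather than specifying final visuals.
from dataclasses import dataclass


@dataclass
class Frame:
    trigger: str        # what the vehicle perceives / is doing
    front_display: str  # content on the front screen
    rear_display: str   # content on the rear screen
    lights: str         # color and motion of the light strip


STORYBOARD_01 = [
    Frame("normal driving", "'autonomous' text", "'autonomous' text",
          "static turquoise, arrows on wheels"),
    Frame("spots pedestrian", "deceleration display", "deceleration display",
          "chasing: turquoise at front, red at back"),
    Frame("waits for pedestrian to cross", "green pedestrian-crossing sign",
          "red stop sign", "chasing green at front, red at back"),
    Frame("accelerating", "acceleration display", "acceleration display",
          "chasing turquoise"),
]

for frame in STORYBOARD_01:
    print(f"{frame.trigger}: front={frame.front_display}, "
          f"rear={frame.rear_display}, lights={frame.lights}")
```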
I observed that buses here in Shenzhen use both a text display on an LED matrix screen and signal lights to communicate turning. I also found that signal lights on vehicles almost always wrap around the vehicle’s corners so that they can be seen from all angles.
Based on these observations (and on reading SAE’s suggestions about signal lights on AVs), I proposed to the team leader that we add signal lights.
Entry-03: March 4, 2021
After gaining an initial grasp of the overall picture of autonomous vehicles’ en route interactions with other road users, and finding that even a seemingly simple scenario such as yielding to a pedestrian can have many variations, I decided to proceed in a more structured way, starting by categorizing en route interaction scenarios.

After reading research papers and making observations, I categorized en route scenarios into two main categories: defined priority and undefined priority.
Without considering other unpredictable elements, the vehicle should simply follow traffic laws in defined-priority scenarios, but should instead negotiate the conflict in undefined-priority scenarios.
Some examples of the two categories of en route interactions:


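As a rough illustration of how this distinction could drive behavior, here is a minimal sketch; the scenario names and responses are illustrative placeholders, not the project’s actual logic.

```python
# Illustrative scenario names; the point is only the split between rule-following
# and negotiation, not an exhaustive or project-accurate list.
DEFINED_PRIORITY = {"signalized intersection", "marked crosswalk", "yield sign at merge"}
UNDEFINED_PRIORITY = {"unmarked mid-block crossing", "narrow road with oncoming car"}


def respond(scenario):
    """Return the high-level behavior for an en route interaction scenario."""
    if scenario in DEFINED_PRIORITY:
        return "follow the traffic law (priority is already defined)"
    if scenario in UNDEFINED_PRIORITY:
        return "negotiate: signal intention via eHMI, read the other party, then proceed or yield"
    return "unknown scenario: fall back to cautious behavior"


print(respond("marked crosswalk"))
print(respond("narrow road with oncoming car"))
```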
To categorize more specific interactions, I started by analyzing the vehicle’s ODD (operational design domain), which in our case is US suburbs.
Then I defined some basic states for the vehicle that occur most often while en route (such as stopped, accelerating, and decelerating) and that can be combined in different ways across scenarios to interact with other road users.
The chart shows these basic states along with the specific colors/motions of the lights and screens on the vehicle for each state.

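As a sketch of how these basic states could serve as building blocks, the snippet below pairs each state with a placeholder light/screen spec and composes them into scenarios; the second scenario is a hypothetical example of reusing the same states in a different order, not one from the project.

```python
# Placeholder light/screen specs standing in for the chart; scenarios are just
# ordered combinations of the same basic states.
BASIC_STATES = {
    "cruising":     {"lights": "static turquoise", "screen": "'autonomous' text"},
    "decelerating": {"lights": "chasing turquoise/red", "screen": "deceleration display"},
    "stopped":      {"lights": "chasing green/red", "screen": "yield / stop signs"},
    "accelerating": {"lights": "chasing turquoise", "screen": "acceleration display"},
}

SCENARIOS = {
    "pedestrian crossing": ["cruising", "decelerating", "stopped", "accelerating"],
    # Hypothetical second scenario: the crosswalk clears before a full stop.
    "crosswalk already clear": ["cruising", "decelerating", "accelerating"],
}

for name, sequence in SCENARIOS.items():
    print(name)
    for state in sequence:
        spec = BASIC_STATES[state]
        print(f"  {state}: lights={spec['lights']}, screen={spec['screen']}")
```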
Further, I categorized commonly seen objects in the autonomous vehicle’s ODD into static objects (e.g., infrastructure such as road signs and lanes), dynamic objects (e.g., other vehicles, pedestrians), and objects in between (e.g., road blocks are considered “slow dynamic objects” because they change over a matter of days).
I also analyzed differences within a single object category that need to be noted in the later design process, for example, an elderly, impaired, or child pedestrian.
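A minimal sketch of this object taxonomy, with illustrative names and a pedestrian-type attribute standing in for the within-category differences noted above:

```python
# Illustrative taxonomy: top-level categories by how fast an object changes,
# plus a within-category attribute (pedestrian type) of the kind noted above.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Mobility(Enum):
    STATIC = "static"              # e.g., road signs, lane markings
    DYNAMIC = "dynamic"            # e.g., vehicles, pedestrians
    SLOW_DYNAMIC = "slow dynamic"  # e.g., road blocks that change over days


class PedestrianType(Enum):
    ADULT = "adult"
    ELDERLY = "elderly"
    CHILD = "child"
    IMPAIRED = "impaired"


@dataclass
class RoadObject:
    name: str
    mobility: Mobility
    pedestrian_type: Optional[PedestrianType] = None  # only set for pedestrians


objects = [
    RoadObject("stop sign", Mobility.STATIC),
    RoadObject("road block", Mobility.SLOW_DYNAMIC),
    RoadObject("pedestrian", Mobility.DYNAMIC, PedestrianType.CHILD),
]
for obj in objects:
    print(obj)
```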

I then started working on a flowchart of the vehicle’s OEDR (object and event detection and response) to synthesize my findings and prepare for prototyping some interactions.

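To prepare for prototyping, the flowchart’s core loop can also be approximated in code. The sketch below is my own simplification of a single OEDR step (classify the detection, check whether priority is defined, pick a maneuver and a matching eHMI message), not the team’s actual flowchart.

```python
# My own simplification of a single OEDR step, not the team's actual flowchart.
from dataclasses import dataclass


@dataclass
class Detection:
    kind: str               # e.g., "pedestrian", "vehicle", "road block"
    in_path: bool           # does it conflict with the planned path?
    priority_defined: bool  # is right-of-way defined by a rule here?


def oedr_step(detection):
    """Return a (maneuver, eHMI message) pair for one detected object/event."""
    if not detection.in_path:
        return "continue", "normal driving display"
    if detection.priority_defined:
        # Defined priority: follow the traffic rule and make the yield visible.
        return "yield per rule", "yielding sign + lights"
    # Undefined priority: negotiate with the other road user.
    return "slow down and negotiate", "signal intention and confirm the other party was noticed"


print(oedr_step(Detection("pedestrian", in_path=True, priority_defined=False)))
```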
Yinuo Han @2021