
Tesla’s new ‘mind of car’ UI signals a future we’re not prepared for

 3 years ago
source link: https://uxdesign.cc/teslas-new-mind-of-car-ui-signals-a-future-we-re-not-prepared-for-c38a6212c32


A look into the new ‘mind of car’ UI showing a car taking a right turn (Credit: NotaTeslaApp.com)

Shortly before the much-anticipated release of the FSD 9.0 beta to Tesla owners the other day, Elon Musk tweeted an intriguing detail about one of its key features:

A recent reply by Elon Musk on Twitter — giving a glimpse of what customers can expect on their new dashboard.

What a way to spark some interest… But what exactly is the ‘mind of a car’?

‘Mind of the car’?… What’s this thing for?

The ‘mind of a car’ is an enhanced visualization of what the AI sees and thinks through the vision-only cameras on a Tesla with FSD installed. In an interview with Lex Fridman back in 2019, Elon shared his early vision for it:

“The whole point of the display is to provide a health check on the vehicle’s perception of reality… That information is rendered into vector space (bunch of objects, lane lines, etc.). You can confirm whether the car is knowing what’s going on or not by looking out the window.” — Elon Musk

It’s a sort of metaphor used to describe a display that helps drivers understand how the system thinks and distinguishes objects or obstructions within its field of view.
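As a loose illustration (not Tesla’s actual pipeline — the labels, thresholds, and structure here are all assumptions for the sake of the sketch), you can picture the display as a filter over a list of perceived objects: keep only the detections the system is reasonably confident about, then prioritise them by distance from the vehicle.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "car", "pedestrian", "lane_line" (hypothetical labels)
    x: float           # metres ahead of the vehicle
    y: float           # metres to the side of the vehicle
    confidence: float  # perception model's confidence, 0.0-1.0

def render_summary(detections, min_confidence=0.5):
    """Keep only confident detections and sort them nearest-first,
    mimicking how a perception display might prioritise nearby objects."""
    visible = [d for d in detections if d.confidence >= min_confidence]
    return sorted(visible, key=lambda d: d.x**2 + d.y**2)

scene = [
    Detection("car", 12.0, -1.5, 0.93),
    Detection("pedestrian", 4.0, 2.0, 0.88),
    Detection("cyclist", 30.0, 0.0, 0.35),  # low confidence: not drawn
]
for d in render_summary(scene):
    print(f"{d.label} at ({d.x}, {d.y})")
```

The point of the "health check" Musk describes is exactly this kind of transparency: the driver can compare what the display draws against what they see out the window.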

An older version of the display. The UI is really primitive. (Source: elektrek)

The purpose of this feature is to provide an experience where the driver retains some oversight and control of the vehicle. While one can assume the driver’s hands-on-the-wheel control will slowly be forked over to FSD in the coming years, the display gives drivers a meaningful way to interact with the AI system in the interim.

How does the latest version work?… And what does it do for us?

The best way to understand the latest version is to see it for yourself. After doing some digging, I found a recorded test drive demonstrating how this feature works today. You can have a look at the full video here.

The ‘mind of a car’ UI displays nearby objects detected by the pure vision cameras (Source: YouTube)

Beyond the UI, the dialogue in this test drive video gives us a good sense of how much the relationship between humans and technology has evolved over the years. We’re at a point where we’re having more conversations like these,

Dad: “Am I driving or is the car driving, sweetie?”

Child: “The car!”

Dad: “That’s right!”

Further along the drive, they arrive at a corner and the vehicle slowly but successfully completes a right turn. With some excitement (and relief), the car owner happily exclaims,

Dad: “There ya go… There ya go buddy. Nicely done!”

Moments like these give you a bit of a chuckle, but there are also instances where the driver just has to step in.

Driver overriding the AI to make a cleaner right turn (Source: YouTube)

If you skip to 9:00 in the video, you’ll notice the driver has to “intervene” and override the AI to keep it from making an awkward turn. While it’s become more commonplace to see everyday consumers adapt to AI, it’s without a doubt an adjustment. You may say…

“Okay Jon. So, how does this change the way we interact with AI and technology as a whole?…“

The ‘mind of a car’ is simply the latest example of an increasing need for everyday consumers to exercise patience, understanding and empathy — towards AI.

As far as we’re concerned, everything we need to know and understand about empathy extends only towards sentient life — stepping into the shoes of real people as we look to understand their needs, goals, pain points and desires. However, that’s beginning to change. In the same way we’ve seen in the video above with the driver testing the new ‘mind of a car’ feature, we have to stomach the idea of extending that same patience, understanding and empathy towards an AI system. Does it sound crazy? A little bit, yes. But…

Like a child, a new AI system learns through trial and error in an effort to reach a mature understanding to discern what is right and wrong.

By processing extremely high volumes of quality data, AI systems learn over time to make decisions and recommendations for users. In the same way we learn to be patient with a service representative who’s starting out as a trainee, people will have to exercise the same kind of empathy for the AI engine until it reaches a certain level of maturity.

Are we prepared for this?

While it’s clear AI will eventually reach its potential, the journey toward it certainly won’t come without bumps and bruises.

“There are hundreds of thousands of Teslas with autonomous capabilities… You can see the evolution of the car’s personality and thinking with each iteration of autopilot. You can see if it’s uncertain about this…. Now it’s more certain — it’s moving in a slightly different way.” — Elon Musk

It will make mistakes

When the system makes an error, we have to acknowledge that it’s part of the training process. It’s not a glitch. The system just hasn’t accounted for that situation yet, or doesn’t know what to do in that moment. We don’t ask for a refund; we acknowledge that errors are a by-product of using a maturing technology that will improve in reliability and efficacy over time. So to answer the question: Are we prepared for this?

The simple answer: No.

Most people are resistant to change — to relinquishing control, familiarity, and even safety. But, like anything, we can grow comfortable with it and defuse any perceived threats if we commit to educating ourselves and setting the right expectations for everyone. Here’s how:

#1. People will position themselves less as direct users of technology

In the context of driving, consumers will need to get used to operating less, because they’ll be guiding and overseeing a lot more. To help drivers with this, the ‘mind of car’ UI clearly presents not only whether there’s an object in sight, but what those objects are. Where the display previously rendered simple polygons, the AI can now distinguish car models through its cameras, and even their colour. These developments in the UI and the underlying technology are there to ensure we interact with the AI more effectively.

We have to meet AI halfway. So — what can designers, educators and innovators at large do to change the relationship between AI (technology) and people?

#2. We’ll need to develop an etiquette — specifically for handling AI

As strange and weird as it sounds, we’ll need to stop treating Siri or Google Assistant like a dumb robot.

“!@#$ Google….“

Misusing AI agents in this way produces poor inputs that impact the broader system. As it turns out, between 10–50% of interactions with AI assistants are abusive.

NOMI — An AI assistant for NIO’s EVs, used to bring a friendly, conversational feel to the driving experience (Source: NIO on Twitter)

To ensure the system gets the right inputs, we as designers will need to craft experiences that encourage users to treat AI not like a clueless object, but like a child who needs to learn and be trained to realize its fullest potential.

#3. People will need to actively participate in the community

The saying goes,

“It takes a village to raise a child.” — African proverb

Similarly, it takes a fleet of beta-testers to raise an AI system on behalf of the masses. But it doesn’t stop there — to maximize the return in value, society will have to remain comfortable with flagging edge cases (among other kinds of data) to support the neural network over the years.

Final thoughts

As technological capabilities continue to develop and mature, UX professionals will be expected to expand their responsibilities, and newer roles may emerge. Not only will we continue to define the user’s experience, but also the relationship between humans and AI. So we should ask ourselves:

  • What should the relationship between AI and humans even be?
  • How does it change from context to context?
  • What do we have to forfeit, if anything?

These are big questions and spaces of discovery that can only be answered over time. Until then, we can work to better understand the developments in the world around us and advance the ways we work with the technology.

…What are your thoughts? Comment down below and we can discuss further!

Like what you’ve read? Follow me on: Medium | Twitter

The UX Collective donates US$1 for each article we publish. This story contributed to World-Class Designer School: a college-level, tuition-free design school focused on preparing young and talented African designers for the local and international digital product market. Build the design community you believe in.
