
Developing UX/AI thinking for a new design world

 1 year ago
source link: https://uxdesign.cc/developing-ux-ai-thinking-for-a-new-design-world-6524b28ea103

Personification

This took place before the boom in generative AI and was based on a familiar design exercise known as personification. A literary device, personification is the act of giving a human quality or characteristic to something that is not human. This differs from another familiar concept, personalization, which is the action of designing or producing something to meet someone's individual requirements. The latter is well known to UX designers, who link user metadata to the digital interface. Personification, by contrast, focuses on giving human qualities so as to create an emotional appeal when done appropriately.

Amazon's app logo incorporates its iconic arrow into a box illustration, intentionally giving a smile to the banal cargo box. (source: Amazon)

One of the most obvious examples is how components have taken on a more organic shape over the years. Call them fillets, rounded corners, or squircles: curves are more pleasing to the eye because of their similarities to human anatomy. In modern UI, some primary buttons could easily be mistaken for a thumb if not for their contrasting color and microcopy. Even gradients could be argued as a case of personification, as skin pigmentation distributes different tones of color evenly. Amazon, for instance, incorporated its iconic arrow into a box illustration as its app logo, intentionally giving a smile to the banal cargo box.

There is, however, an even better example of a device that "breathes." Take a moment to observe a smart speaker in action. As a person activates a trigger using their voice, a feedback response is given. Through a series of interactions, we can witness the smart speaker come to life with rhythmic patterns from its light indicators and audio chimes. All of this is a form of mimicry of human gestures and behavior so as to bring an element of humanness to the product. Contrast this with a utilitarian on/off switch without any sensory feedback. Yes, it gets the job done, but it is void of any human connection. Just like cargo boxes for back-office operations.

A smart speaker comes to life with rhythmic patterns from its light indicators and audio chimes. (source: Google)

Anthropomorphism

Yet there is more than meets the eye. The other defining feature is processing spoken language to carry out subsequent actions. At once, the device could be perceived as more human, more "intelligent." This opens the door to a whole new discipline of user experience known as conversational UX, where the conventions of human conversation shape the design.

The term anthropomorphism is often confused with personification. Here is the key difference between the two words:

Personification is the use of figurative language to give inanimate objects or natural phenomena humanlike characteristics in a metaphorical and representative way. Anthropomorphism, on the other hand, involves non-human things displaying literal human traits and being capable of human behavior.
Masterclass

The key distinction lies in the application of human traits to non-human objects. While personification interprets human attributes metaphorically, anthropomorphism applies them directly. Smart speakers sit at the transition between personification and anthropomorphism because users are starting to imagine inanimate objects coming to life and becoming human beings. This explains why my three-year-old daughter cried when my Google Home Mini was not listening to her voice command. She treated the inanimate object like a human, even though it didn't look like one.

A new design world

Today, she is much older and knows the smart speaker is a digital assistant. That being said, the AI horizon has continued to develop at a tremendous pace with the emergence of large language models (LLMs), raising anthropomorphism to a significant level through their ability to generate continuous natural dialogs from prompts. And although the most recognizable interface is ChatGPT, an LLM's API allows further exploration into other forms of interfaces, such as voice assistants, robotics, and even humanoids.

A famous fable comes to mind: the story of Pinocchio, a wooden puppet toy that came to life. Desiring to be a real boy, Pinocchio had to learn many hard lessons about dealing with human behavior. The story could draw parallels with Artificial Intelligence, and Steven Spielberg was on the right track when he and Stanley Kubrick (director of 2001: A Space Odyssey) produced the underrated yet highly emotional movie, A.I. Artificial Intelligence.

The story of Pinocchio draws parallels with Spielberg's A.I. Artificial Intelligence. (image source: Disney; Warner Brothers and Dreamworks)

In this story, the protagonist, David, was a prototype Mecha child given to a couple who were going through grief over their son's medical condition. Over time, David develops a love for the mother but is rejected after the son recovers. Abandoned, David and his teddy bear Teddy embark on a quest to find their own "Blue Fairy" to become real. However, the movie ends on a sobering truth: there was no way for David to become a human being, even with the most advanced technology of the future. David spends his happiest day recreating his "mother," sharing one final day together before falling asleep.

UX/AI

As I reflect on Spielberg's movie, I wonder when the time will come when inanimate objects will reach a state of extreme anthropomorphism and exhibit their own emotions and judgment. At the pace of artificial intelligence, we may reach that point within our lifetime. But before that happens, humanity-centered designers with an innate knowledge of user experience and artificial intelligence need to step in. A new breed of UX/AI designers needs to join forces with other professions in similar fields to provide an ethical yet delightful human experience through anthropomorphic design.

Here are three observable instances where a UX/AI designer comes into play:

1. Dealing with the uncanny valley

First hypothesized by robotics professor Masahiro Mori, the uncanny valley effect describes a sharp dip in a human being's emotional response when an inanimate object becomes too human-like. Although the theory has its critics, who argue that younger generations (i.e., Gen Z, Gen Alpha) may be more accepting, there is general consensus that mistrust arises when there is a mismatch in experience.

The uncanny valley: emotional response dips drastically when an inanimate object becomes too human-like. (source: Mori)

Try watching Spielberg's A.I. to compare your emotional responses to humanoid David and robotic bear Teddy. Chances are you will experience a higher sense of creepiness in some of David's interactions. While the uncanny valley is often associated with visual appearance, conversational interfaces like chatbots generate similar reactions through inappropriate responses. Microsoft's early AI experiment, Tay, created one such incident when it hurled abusive tweets at people before being shut down.

Therefore, as UX/AI designers, strategizing the best aesthetic treatment is of utmost importance, but executing such a transdisciplinary practice requires the designer to have good taste. Here is a beautifully written excerpt:

Taste is the ability to identify quality. To understand quality we need to look critically at: materials that are fit for purpose, ergonomy that considers audience needs, effective use of affordances, usability, accessibility, harmonic color choices, aesthetic choices that elicit emotion, intentional visual hierarchy — amongst others. Taste is in the observer, quality is in the object. The concept of taste becomes more productive when framed objectively around quality, and in ways that are measurable or at least comparable.

At the same time, the ability to measure uncanniness as a UX metric is also worth developing. From past mistakes like Meta's low-quality avatar design to overly expressive CGI characters, such as those in Cats, the challenge is to find the right emotional balance without tipping over. We are likely to see a new set of UX/AI methods for testing uncanniness and other emotional responses from end users.

2. Dealing with pareidolia of consciousness

Ever thought you saw a face in everyday things or in the natural environment? Psychologists call this phenomenon pareidolia: the tendency of human perception to form a meaningful pattern where there is actually none.

Faces perceived in everyday objects, a phenomenon known as pareidolia. (image source: Taubert)

This same phenomenon also exists in AI, when a user imagines consciousness inside an LLM where there is actually none. And as models continue to become more sophisticated, the illusion becomes harder to detect because conversations will feel real.

One such company to look at is Air AI, which claims to be the world’s first-ever conversational AI tool that can engage in full-length phone calls, lasting anywhere from 10 to 40 minutes while sounding just like a real human. In other words, there could be a high chance of a person speaking to an AI bot while imagining it to be a real human.

Air AI claims its conversational AI can engage in full-length phone calls while sounding just like a real human. (source: Air AI)

In such instances, UX/AI designers should step in to create systemic solutions that benefit all parties. They are to create experiences in accordance with existing AI governance, standards, and principles. Thankfully, resources such as Microsoft's Responsible AI are publicly available. The role of UX/AI designers is thus to translate these policies into actual product experiences.

One way of breaking pareidolia is to create a preamble informing users of any AI involvement. Conversations can also be synced to a user's account so that users can refer back to annotations made by the AI. Such interventions create transparency and give agency to the user when interacting with a more anthropomorphic AI. Lastly, to counter the growing propensity for deepfakes, user authenticity can be established through verification badges within a user's profile or other newer authentication methods.
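The two interventions above can be sketched as a small conversation-logging module. This is a minimal illustration, not a real product API: the disclosure text, field names, and helper functions are all hypothetical.

```python
# Sketch of the transparency interventions described above (hypothetical names):
# prepend an AI-involvement disclosure as a preamble, and annotate each turn
# so users can later filter for AI-generated content in their synced account.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_conversation(history=None):
    """Start a conversation log with the disclosure as its first entry."""
    log = [{"role": "system", "text": AI_DISCLOSURE, "ai_generated": False}]
    log.extend(history or [])
    return log

def append_turn(log, role, text):
    """Record a turn, annotating whether it was AI-generated."""
    log.append({"role": role, "text": text, "ai_generated": role == "assistant"})
    return log

log = open_conversation()
append_turn(log, "user", "Can you summarize my order?")
append_turn(log, "assistant", "Sure, here is a summary of your order.")

# Users can refer back to the annotations made by the AI:
ai_turns = [t for t in log if t["ai_generated"]]
```

The key design choice is that the disclosure is part of the log itself, so the preamble travels with the conversation wherever it is synced or exported.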

3. Dealing with AI Hallucination

Perhaps the most famous debacle in the story of Pinocchio was his growing nose with each lie he told the Blue Fairy. But we should take a closer look. Was Pinocchio lying with malicious intent or out of desperation for his situation? Or maybe he did not have the self-awareness to recognize the false information he was providing, especially since he was only about a day old?

In the world of AI, this is better known as hallucination, where the LLM attempts to provide a plausible answer. The output may sound convincing because the LLM uses statistics to generate language that is grammatically and semantically correct, yet it may be factually inaccurate or even nonsensical. When hallucination stems largely from the quality of the training data, the AI could be perceived as naive; alternatively, the user may be at fault for not providing the right parameters or doing any fact-checking. In either case, the inconsistencies may cast doubt on every output from the AI.

A classic example of ChatGPT hallucination: the chatbot fabricates a response, complete with URL slugs, even though the URL was fake. (image source: wiki)

While there are methods to mitigate hallucinations, such as writing clearer and more specific prompts with examples, UX/AI designers can also incorporate user-centric features that invite improvement. Already, we see this in ChatGPT, where users can provide binary 👍/👎 feedback by reporting whether the output was accurate. More can be done in this area by creating modules that let users adjust the temperature of randomness with ease, or by displaying accuracy-strength meters to show the confidence of a result.
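The two modules above can be sketched in a few lines. This is an illustrative design sketch, not ChatGPT's actual implementation: the slider range, the 0–2.0 temperature scale, and the naive confidence score are all assumptions.

```python
# Sketch of two user-centric modules (hypothetical, not a real ChatGPT API):
# (1) a user-facing "randomness" slider mapped to a model temperature, and
# (2) a tally of binary thumbs-up/down reports feeding an accuracy-strength meter.

def slider_to_temperature(slider_value, max_temperature=2.0):
    """Map a 0-100 slider position to a temperature in [0, max_temperature]."""
    if not 0 <= slider_value <= 100:
        raise ValueError("slider must be between 0 and 100")
    return round(slider_value / 100 * max_temperature, 2)

class FeedbackTally:
    """Aggregate binary accuracy reports into a rough confidence score."""

    def __init__(self):
        self.up = 0    # 👍 reports: output was accurate
        self.down = 0  # 👎 reports: output was inaccurate

    def report(self, accurate):
        if accurate:
            self.up += 1
        else:
            self.down += 1

    def confidence(self):
        """Fraction of accurate reports, or None before any feedback exists."""
        total = self.up + self.down
        return self.up / total if total else None
```

For example, a slider set halfway would map to a moderate temperature, while a meter driven by `confidence()` would only render once at least one report has been collected.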

