
Conversational AI’s Manipulation Problem Could Be Its Greatest Risk to Society

source link: https://www.barrons.com/articles/conversational-ai-will-learn-to-push-your-buttons-manipulation-problem-c9f797e8

Conversational AI Will Learn to Push Your Buttons. We Need to Solve the Manipulation Problem.


By Louis Rosenberg

Feb. 23, 2023 4:00 am ET


We’re in a honeymoon period with new conversational AI tools, writes Louis Rosenberg.

Dreamstime

About the author: Louis Rosenberg is the CEO of Unanimous AI, chief scientist of the Responsible Metaverse Alliance, and global technology adviser to the XR Safety Initiative.

Ever since Captain Kirk spoke to the ship’s computer in the 1967 season of Star Trek, researchers have dreamed of enabling natural conversations between humans and machines. It took more than 50 years, but the largest companies in the world are finally on the verge of bringing this capability to billions of users around the globe. Most notably, Microsoft is integrating OpenAI’s impressive ChatGPT technology into its Bing search engine while Google is racing to release a competitive chatbot called Bard based on its LaMDA technology.

As a Star Trek fan and a researcher of human-computer systems for over 30 years, I believe natural language is one of the most effective ways for people and machines to interact. On the other hand, I am deeply concerned that without sensible guardrails, conversational AI could be used to manipulate individuals with extreme precision and efficiency.

To distinguish this danger from other AI-related risks, I refer to this growing threat as the AI “manipulation problem.” I believe it’s now an urgent issue for policy makers to focus on. What makes the problem unique is that conversational AI involves a real-time engagement during which an AI system can impart targeted influence on a user, sense that user’s reaction to the influence, and then adjust its tactics to maximize impact. This might sound like an abstract process, but we humans usually just call it a conversation. After all, if you want to influence an individual, your best approach is often to speak with that person directly and adjust your arguments as you sense expressions of resistance or hesitation.
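To make that loop concrete, here is a deliberately simplified sketch in Python of the engage–sense–adjust cycle described above. It uses a simulated user and invented tactic labels; every name is hypothetical, and it is not drawn from any real product.

```python
# A purely illustrative sketch of the loop described above, with a simulated
# user: present a targeted message, sense the reaction, adjust the tactic.
# Every name and tactic label here is hypothetical; no real system is shown.
import random

TACTICS = ["social proof", "scarcity framing", "flattery", "fear appeal"]

def simulated_reaction(tactic: str) -> float:
    """Stand-in for sensing a real user's response (words, tone, expression)."""
    return random.random()  # 0.0 = strong resistance, 1.0 = fully persuaded

def influence_loop(goal: str, threshold: float = 0.9, max_turns: int = 10) -> bool:
    tactic = random.choice(TACTICS)
    for turn in range(max_turns):
        print(f"turn {turn}: pushing '{goal}' via {tactic}")
        receptiveness = simulated_reaction(tactic)   # sense the user's reaction
        if receptiveness >= threshold:
            return True                              # influence goal achieved
        tactic = random.choice(TACTICS)              # adjust tactics and try again
    return False

if __name__ == "__main__":
    influence_loop("buy the premium plan")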

The danger is that conversational AI has now advanced to the point where automated systems can engage individual users in flowing dialog that is coherent, convincing, and could easily be deployed with a targeted persuasive agenda. And while current systems are primarily text-based, they will increasingly be combined with real-time voice, enabling natural spoken interactions between humans and machines. In addition, they will soon have a visual presence, being combined with photorealistic digital faces (digital humans) that look, move, and express emotion like real people. And while interacting with online products and services through realistic dialog has a great many benefits, it could also become the ultimate deployment mechanism for AI-powered influence campaigns.

The fact is, we’re now entering the age of natural computing, in which we interact regularly with “virtual spokespeople” that look, sound, and act like authentic persons, but who are designed to represent the specific needs and objectives of the entities that deployed them. Corporations, state actors, or criminal enterprises could field these AI-driven conversational agents to skillfully pursue a persuasive conversational agenda that aims to convince you to buy a particular product, believe a piece of misinformation, or even fool you into revealing your bank account or other sensitive information.

And trust me, these AI-driven spokespeople will be extremely skilled at achieving their persuasive goals. Unless limited by regulation, these systems will have access to personal data (your interests, values, and background) and will use it to craft dialog that is specifically designed to engage and influence you personally. In addition, these systems (unless regulated) will be able to analyze your emotional reactions in real-time, using your webcam to process your facial expressions, eye motions, and pupil dilation—all of which can be used to infer your feelings at every moment. This means that a virtual spokesperson that engages you in an influence-driven conversation will be able to adapt its tactics based on how you respond to every word it speaks, detecting which strategies are working and which are not. 

You might argue this isn’t a new risk. Human salespeople already do the same thing, reading emotions and adjusting tactics. But consider this: AI systems can already detect reactions that no human can perceive. For example, AI systems can detect “micro-expressions” on your face and in your voice that are too subtle for human observers but which reflect inner feelings. Similarly, AI systems can read faint changes in your complexion known as “facial blood flow patterns” and tiny changes in your pupil size, both of which reflect emotional reactions. Unless protected by regulation, virtual spokespeople will be far more perceptive of our inner feelings than any human representative.

Conversational AI will also learn to push your buttons. Unless limited by regulation, these platforms will compile data about how you reacted during each prior conversational interaction, tracking which tactics were most effective on you personally.  In other words, these AI systems will not only adapt to your immediate verbal and emotional responses, they will get better and better at “playing you” over time, learning how to draw you into conversation, guide you to accept new ideas, get you riled up, and ultimately drive you to buy things you don’t need or believe things you’d normally realize were absurd. And because this technology will be easily deployed at scale, these methods can be used to target and influence broad populations.
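As a rough illustration of what “learning to play you” could mean mechanically, the sketch below uses a simple epsilon-greedy bandit to favor whichever tactics have previously worked on a given user. It is an assumption-laden toy with invented names, not a description of any deployed system.

```python
# Illustrative only: one way a platform could, in principle, learn which
# persuasion tactics work on a specific user across sessions, using a simple
# epsilon-greedy bandit. All names here are hypothetical.
import random
from collections import defaultdict

class TacticTracker:
    """Tracks per-user success rates of persuasion tactics and favors the best."""

    def __init__(self, tactics: list[str], epsilon: float = 0.1):
        self.tactics = tactics
        self.epsilon = epsilon
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(self.tactics)       # occasionally explore
        # otherwise exploit the tactic with the best observed success rate
        return max(self.tactics, key=lambda t: (
            self.successes[t] / self.attempts[t] if self.attempts[t] else 0.0))

    def record(self, tactic: str, persuaded: bool) -> None:
        self.attempts[tactic] += 1
        self.successes[tactic] += int(persuaded)

# Example: after each conversation the outcome is logged, so future sessions
# lean toward whatever has worked on this particular user before.
tracker = TacticTracker(["social proof", "scarcity framing", "flattery"])
tactic = tracker.choose()
tracker.record(tactic, persuaded=False)
```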

Of course, these are future dangers. We have no reason to believe that current conversational systems have implemented deliberately manipulative techniques. In many ways, we’re in a honeymoon period, similar to the early days of social media before large platforms adopted ad-based business models. In those early days, there was no motivation to monetize users through aggressive tracking, profiling, and targeting techniques. The new risk, therefore, is that conversational platforms will adopt similar business models that prioritize targeted influence. If they do, it could motivate many of the abuses described above.

This is why I believe the manipulation problem could be the most significant threat that AI poses to society in the near future. Regulators must consider this an urgent danger. After all, ChatGPT was launched less than three months ago and has already reached over 100 million active users, making it the fastest-adopted application in history. We need guardrails that protect the public from real-time interactive manipulation through AI-driven conversational agents. It’s coming fast.

Guest commentaries like this one are written by authors outside the Barron’s and MarketWatch newsroom. They reflect the perspective and opinions of the authors. Submit commentary proposals and other feedback to [email protected].

