
AI Weekly: The road to ethical adoption of AI

source link: https://venturebeat.com/2021/08/13/ai-weekly-the-road-to-ethical-adoption-of-ai/


SHANGHAI, CHINA - JULY 7, 2021 - A humanoid service robot plays Chinese chess with a human during the WAIC World Conference on Artificial Intelligence in Shanghai, China, July 7, 2021. (Photo credit: Costfoto/Barcroft Media via Getty Images)



As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Guidelines have proliferated — the Organization for Economic Cooperation and Development’s AI repository alone hosts more than 100 documents — yet most are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm because, as the coauthors of a recent paper write, AI’s impacts are hard to assess — especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may never come to pass and on generalizations so broad that the conversations become untenable. In particular, companies run the risk of engaging in “ethics shopping,” “ethics washing,” or “ethics shirking,” burnishing their standing with customers to build trust while minimizing accountability.


The points are salient in light of efforts by the European Commission’s High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” In a separate paper, digital ethics researcher Mark Ryan argues that the category of “trust” simply doesn’t apply to AI: as long as an AI system can’t be held responsible for its actions, it can’t have the capacity to be trusted.

“Trust is separate from risk analysis that is solely based on predictions based on past behavior,” he explains. “While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, it is not the sole or defining characteristic of trust. Though we may trust people that we rely on, it is not presupposed that we do.”

Responsible adoption

Productizing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

Recognizing this, organizations must overcome a misalignment of incentives, disciplinary divides, distributions of responsibilities, and other blockers in responsibly adopting AI. It requires an impact assessment framework that’s not only broad, flexible, iterative, possible to operationalize, and guided, but highly participatory as well, according to the paper’s coauthors. They emphasize the need to shy away from anticipating impacts that are assumed to be important and become more deliberate in deployment choices. As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way that topics like privacy and bias are currently covered.

Another paper — this one from researchers at the Data & Society Research Institute and Princeton — posits “algorithmic impact assessments” as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address the issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.

This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn’t necessarily measure harms and may even obscure them — real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects — rather than erodes — dignity.
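An assessment only gains teeth, in other words, when it names a forum that can compel changes. A minimal, purely illustrative sketch of how such a record might be structured follows; the schema, field names, and example values are assumptions, not taken from either paper:

```python
from dataclasses import dataclass


@dataclass
class ImpactAssessment:
    # Hypothetical schema for one algorithmic impact assessment record;
    # neither paper prescribes these fields.
    system: str
    stakeholders: list            # groups affected by the system's decisions
    anticipated_harms: list       # harms the assessment tries to surface
    accountability_forum: str = ""  # body empowered to compel changes

    def has_accountability(self) -> bool:
        # Without a named forum, the assessment can describe effects
        # but cannot compel developers to act on what it finds.
        return bool(self.accountability_forum)


review = ImpactAssessment(
    system="loan-approval model",
    stakeholders=["applicants", "loan officers"],
    anticipated_harms=["disparate denial rates"],
)
print(review.has_accountability())  # no forum assigned yet, so False
```

The sketch encodes the paper's distinction between measuring effects and being accountable for them: a record without a forum documents, but does not bind.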

As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: “Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy, others relate to business metrics. But each require careful consideration as they have consequences in the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.”
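Gupta's point about value judgements hiding inside optimization choices can be made concrete with a toy model-selection sketch. Everything below (the candidate models, their numbers, and the scoring rule) is invented for illustration:

```python
# Hypothetical candidate models with measured accuracy and latency.
candidates = {
    "small":  {"accuracy": 0.89, "latency_ms": 12},
    "medium": {"accuracy": 0.93, "latency_ms": 40},
    "large":  {"accuracy": 0.95, "latency_ms": 180},
}


def score(model, latency_penalty=0.001):
    # The penalty is the value judgement: how much accuracy we are
    # willing to trade away per millisecond of added latency.
    return model["accuracy"] - latency_penalty * model["latency_ms"]


best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # "medium" wins under this penalty
```

Changing `latency_penalty` reformulates the tradeoff rather than eliminating it: at 0.0001 the large model wins, while at 0.01 the small one does, which is Gupta's point that each such choice has consequences in the system's final outcome.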

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more
Become a member

AI leaders talk intersectionality, microaggressions, and more at Transform Women in AI Breakfast

VB Staff | July 12, 2021 05:38 PM



At VentureBeat’s third annual Women in AI Breakfast at Transform 2021, leaders in AI and machine learning across industries came together to discuss some of the most urgent questions in the tech sector today: what responsible AI and inclusive engineering mean, and the roles and responsibilities of corporations, academia, governments, and society as a whole in bringing more diverse voices into the field.

This year the breakfast, presented by Capital One, was moderated by Noelle Silver, the founder of Women in AI. The breakfast opened with a talk about what inclusive engineering means for the tech sector — and why it can’t just be the responsibility of those who are underrepresented.

Diversity must encapsulate diversity of race and ethnicity and religion, but also economic diversity, diversity of experience, and diversity of education, said Teuta Mercado, director of the responsible AI program at Capital One. And inclusive engineering means ensuring that all voices are represented. Many corporations are starting to do the outreach necessary to bring more diversity into this field with recruiting efforts and imperatives, she noted.

“At Capital One, inclusive engineering means ensuring that our products and our services really reflect our customer base, and are accessible to all,” said Mercado. “Banking customers represent all facets of society, and everyone needs credit services, so it’s really important that we have diverse teams that are working on and building AI and machine learning — so that we can build for all our customers, and not just for some.”

Kathy Baxter, principal architect of ethical AI practice at Salesforce, has a background in psychology and human factors engineering. She noted that she has benefited from the strong mix of genders found in psychology and user experience research.

“So, when moving into AI, I’ve been able to leverage those different experiences of working across roles,” she said. “Seeking out people with a large variety of expertise and disciplines to contribute and participate in our discussions of what it means to create fair and equitable AI and identifying unintended consequences of AI systems.”

Tiffany Deng, who leads the program management team for Responsible AI at Google, explained that her time in the U.S. Army as an intelligence officer prepared her for what she does today: bringing different voices to the table in order to find solutions.

“That is what inclusive engineering is about — it’s about giving voice to the underrepresented, and ensuring not only that their voice is heard, but it’s acted upon,” Deng said. “And then thinking about going into communities and understanding how a problem may present itself differently in that subtle context, as opposed to what we already think or what we’ve already seen.”

“At the World Economic Forum, we think about inclusive engineering globally,” said Kay Firth-Butterfield, head of AI and machine learning and executive committee member at the World Economic Forum.

Companies need multi-stakeholder teams, she said: when they’re thinking about policy, academics, nonprofits, governments, and businesses should all be in the room — but even that is not enough.

“We’ve been doing some work with a number of the big tech companies to ask what does responsible use of technology actually look like,” she said. “One of the things we know is that you need diverse product teams. We should all be at the table, with different backgrounds, so that our products use good, non-biased AI.”

When talking about increasing the diversity of teams, the conversation comes back to the pipeline issue: encouraging people from marginalized backgrounds and underrepresented minorities to participate, Baxter added.

“I don’t want us putting all of the onus on women and other minorities to have to push their way into the door, and I feel like that’s what often comes of these discussions,” she said. “Instead we need to look within our own companies, when we are hiring. Really taking significant time and effort to ensure we have a diversity of sources.”

That means not hiring the very first person who meets the job criteria, but continuing to talk to a wide range of people who might be interested in these jobs — which takes time and effort, but needs to be done. And then, once individuals are in the companies, it’s making sure that the environment is actually inclusive.

“Too often, there are the microaggressions and toxicities that not only impact an individual’s ability to participate in these conversations, but also to keep them long-term, so that we lose the contributions that they could have given to the company,” Baxter said.

“There are so many different perspectives around every single day that we’ve always tried to tap into, in order to come to a better solution for our users,” Deng said. “A thing that really keeps me going is this idea of intersectionality.”

We all have different facets, and identify in very different ways, and companies need to continue to build upon that in truly inclusive ways for these different communities, to ensure that all voices brought into the fold are heard, she said.

“It’s not enough just to have the women. We have to have men who are in positions of power, also supporters, who also understand that there is a role in their companies for us — and that they actually begin to do much better if they have diversity within their country, their cabinet, or within the C-suite,” Firth-Butterfield said.

Like so many others, Mercado has struggled against bias in her field, she said, but several things have kept her going.

“One, the network: just surrounding yourself with women and men, people who care about what they’re doing, who are passionate about ethical AI or passionate about just doing what’s right for our customers for people who are using the products,” she said. “And then the biggest thing for me is working for a company that has a culture of empowerment, allowing you to do the right thing.”

Diversity and inclusion in AI, the erasure of bias in machine learning algorithms, and inclusive engineering are issues that directly impact the bottom line, but they’re so much bigger than that, said Deng.

“Me being a Black woman, I think about my children and how can I make the world better for them, and how can I make the world safer for them,” she said. “AI is omnipresent; it’s a part of how we go to school, how we district, how things and services are distributed in our communities. And so I want to make that better. This is really an inflection point, and a really great opportunity for us to have deep impact, in all those different types in places in society.”

Don’t miss the full discussion, which covers how bias has impacted women’s careers in male-dominated fields, how men often miscalculate what women can bring to the table, and how everyone must continue pushing the traditional boundaries of these sectors — building truly inclusive communities and ensuring that all perspectives are considered at every stage of the design process.

