
AI safety, AI ethics and the AGI debate

source link: https://towardsdatascience.com/ai-safety-ai-ethics-and-the-agi-debate-d5ffaaca2c8c?gi=ecfef256f508

Editor’s note: The Towards Data Science podcast’s “Climbing the Data Science Ladder” series is hosted by Jeremie Harris. Jeremie helps run a data science mentorship startup called SharpestMinds. You can listen to the podcast below:

Most of us believe that decisions that affect us should be reached by following a reasoning process that combines data we trust with a logic that we find acceptable.

As long as human beings are making these decisions, we can probe at that reasoning to find out whether we agree with it. We can ask why we were denied that bank loan, or why a judge handed down a particular sentence, for example.

But today, machine learning is automating away more and more of these important decisions. Our lives are increasingly governed by decision-making processes that we can’t interrogate or understand. Worse, machine learning algorithms can exhibit bias or make serious mistakes, so a world run by algorithms risks becoming a dystopian black-box-ocracy, potentially a worse outcome than even the most imperfect human-designed systems we have today.

That’s why AI ethics and AI safety have drawn so much attention in recent years, and why I was so excited to talk to Alayna Kennedy, a data scientist at IBM whose work is focused on the ethics of machine learning, and the risks associated with ML-based decision-making. Alayna has consulted with key players in the US government’s AI effort, and has expertise applying machine learning in industry as well, through previous work on neural network modelling and fraud detection.

Here were some of my biggest take-homes from the conversation:

  • Machine learning models often come with a handful of “standard” loss functions and evaluation metrics that everyone has agreed “work pretty well” (e.g. accuracy, AUC score, categorical cross-entropy, etc.). Unfortunately, the fact that we’ve settled on these standard metrics can make it tempting to stop thinking critically about what’s being optimized. Sometimes the model with the best accuracy or best F1 score only reaches that level of performance by sacrificing other things that we should care about too (see the sketch after this list). Our tendency to go on autopilot and accept “standard” metrics because they’re standard can lead to dangerous outcomes.
  • One of the biggest challenges with AI ethics is that we haven’t even come close to working out human ethics yet. That means we’re having to hard-code rules that we can’t even agree on into models whose reasoning we can’t even audit.
  • Despite the lack of broad consensus on key ethical questions, many national governments have worked out AI ethics frameworks that are remarkably consistent with one another.
  • An area of AI safety that’s much less emphasized today is the risk of runaway artificial general intelligence; most of our attention on AI safety is directed at more immediate and practical concerns. Alayna and I disagreed about whether or not this is a good thing. Where you stand on this question depends on how likely you think AGI is to be developed in the near- or medium-term (I think it’s uncomfortably probable, while Alayna disagrees).

You can follow Alayna on Twitter here, and you can follow me on Twitter here.

