
source link: https://blogs.sap.com/2022/06/15/interview-on-ai-ethics-with-sebastian-wieczorek-vice-president-for-ai-technology-at-sap/

Interview on AI Ethics with Sebastian Wieczorek – Vice President for AI Technology at SAP

You can watch a short version of this interview as part of the second episode of the Developers Digest show by SAP’s Developer Advocates. When Developers Digest host Josh Bentley asked me to co-host Developers Digest 2205, I was super excited to also share some AI content! I was able to schedule an interview with Sebastian Wieczorek, our Vice President for AI Technology at SAP, and I am very grateful for all his valuable insight into the topic of AI ethics. Below you can read the full interview. Enjoy!


Nora: Hello Sebastian, thank you so much for your time. It is so nice to have you and talk to you about AI ethics today. You are our Vice President for AI Technology at SAP and you are also part of the AI Ethics Steering Committee. What motivated you to be part of the AI Ethics Steering Committee and why are you interested in AI ethics?

Sebastian: I have a technical background, but I have also always been interested in philosophy. It is one of my hobbies, so it was a no-brainer for me to also look into this area.

Nora: So, what does the AI Ethics Steering Committee actually do and how do you work together with the AI Ethics Advisory Panel?

Sebastian: The steering committee is a collaboration of senior leaders who came together from all different parts of our company: from sales, from services, from development, from legal, and so on. You can find the information on the participants on our site. As the name suggests, they steer the way the company acts on AI ethics topics and issues, and the advisory panel is there to support these senior leaders in their decision-making process. Of course, we are all starting to become experts in AI ethics, but it is good to have external expertise that we can draw on, and that’s what we are using the advisory panel for. I have to say it helps a lot to get their feedback on the processes and policies that we are developing, but also on individual cases, where we get their assessment and their opinion.

Nora: Do you know if they also advise other companies or the government on the topic of AI ethics?

Sebastian: I am assuming so. I don’t think we hire them exclusively. They are all renowned experts in their fields. You can find the information on who is on the advisory panel on our site. But yes, for some of them I know that they are also advising politicians, for example.

Nora: You said they also advise you on use cases. Are there certain use cases that SAP would not work on because of AI ethics?

Sebastian: Yes, I think it is quite clear when we talk about ethics that there are areas in this world, for everybody I assume, that you do not want to work on. When you want to do the right thing morally, there are always red lines. The same is true for SAP: for example, we do not want to touch public surveillance technology, and we do not want to work on use cases that undermine public debate or interfere with electoral systems, or where there is a clear attempt to do something that damages the environment. These are use cases that we are not supporting. That is also clearly stated in our AI ethics policy.

Nora: Have you ever had a use case that you had to turn down? I don’t want any details. I am just curious!

Sebastian: Yeah, I am sure you want details! Interestingly, my view of humankind is that people generally want to do good things, and that is especially true for SAP employees. Of course, SAP interacts with customers and therefore sometimes receives use cases that are problematic, but I have to admit that I never came across a use case where people deliberately or knowingly went into something that was fishy. When you talk to people and point out the potential pitfalls, or you ask them how they make sure that certain things are not happening or that the personal freedom of users is not restricted, then they sometimes realize the potential danger and do not want to engage any further. But I think it has never happened that the steering committee itself has taken a decision to stop something where people inside the company wanted to push forward.

Nora: That’s good to know! So, what do developers need to know about AI ethics and where can they get the information on the topic?

Sebastian: I think every developer who somehow gets in touch with the topic of AI should be aware of the general framework, and that should be most developers nowadays, because everybody, in one way or another, is integrating AI or utilizing AI functions in a wider context. Most people think about AI ethics and assume it only has to do with the algorithm that you are developing: how you train the model, what data you train it on, what the model does, and what kind of behavior gets trained. Of course, that is important, and there are a lot of ethical considerations around these activities. But there are also ethical considerations about the context that you put such systems into. For example, what degree of automated decision-making do you allow such an algorithm? Do you allow an AI function to take a decision that potentially changes the fate of a person, as in a hiring case? And what do you do to make it clear to the user that they are interacting with an AI system? Think about a chatbot, for example: do you give the chatbot a human name, or do you give it a name like “support robot” or “agent”, so that people understand they are interacting with an AI and not with a human? These are examples where you don’t have to be an AI expert, but by integrating AI, you also carry a responsibility in terms of ethics. That is why people who work on these systems should be aware of the general concepts and of the guardrails that we at SAP have when it comes to AI ethics.

Nora: Okay, so assuming, as a developer, I don’t really care about ethics. What do you think are the biggest risks when ignoring AI ethics or why do you think AI ethics are important?

Sebastian: Yes, as I said, I don’t think we have developers that don’t care. Maybe I’ll tell you a story: I was investigating an HR case, and I talked to the AI developer about potential discrimination by the algorithm. The developer felt really bad because the algorithm favored white male applicants. Now, usually you would assume that potential discrimination by an AI occurs because you have white male people developing it, and then obviously the outcome would favor white male people. But this developer wasn’t: he was male, but he was not white, and if the assumptions that I made were right, then he would have clearly been discriminated against by his own algorithm. But he was so focused on how to technically solve the problem that he didn’t think of the potential discrimination against himself. I think it is very common that people don’t do these things deliberately; they do them because they don’t know. Too many people think AI ethics is just for AI developers or AI experts, but it is for people writing documentation, for people designing user interfaces, and so on. It is for everybody. In my opinion, everyone should have a rough understanding of the do’s and don’ts, and that’s what we have to make sure of. The ignorant developers you will never reach, but I don’t think that they are a big risk.

Nora: That’s good to know! Do you think there’s a global consensus on AI ethics with other big tech companies or do you think other companies, sometimes have very different standards and approaches to AI ethics?

Sebastian: That’s a very general question. I’m assuming there are obviously companies out there that are doing things that we at SAP would not. As I said, there are some red lines we draw, like public surveillance, but obviously there are companies engaging in these fields. That said, when you look at the guiding principles that we set up at SAP in general and compare them to what other big players have set out, you see that they are very much the same. When you look at it from a scientific perspective, there has been good work comparing guiding principles to what governments have put out, or what organizations like the OECD have put out. The EU has also created guiding principles, and I was involved in what the Enquete Commission in Germany has been defining. Even when you compare that to, for example, what the Chinese government is putting out, it converges to the same thing. The interesting part is the interpretation: when you break it down to actions, you have to prioritize the different principles against each other. Then you see that some put priority on certain aspects and others on other aspects, but that’s not a surprise. Obviously, there are different concepts of how you rate and value certain properties in the US, in China, or in other countries.

Nora: That’s a good point. You mentioned governments, do you think we need more laws regulating AI or the use of data in general?

Sebastian: I think we need to regulate AI as we need to regulate everything that is potentially dangerous. From my perspective, when you take a step back, AI ethics is not so different from ethics overall. We are not talking about machines that have developed their own will and that have to be treated as ethical subjects carrying their own intentions. We are talking about human intentions: what systems we are building and how we are using them. And for this, good regulations already exist in most industries. So, what do we want to do? We want to minimize risk. And what does risk mean? Risk is defined by the potential danger that we are looking at and the likelihood of this danger occurring. It doesn’t matter what technology is used to manifest that danger. AI is a technology that can be potentially dangerous, but what you want to regulate, in my opinion, is the risk. What you want to do is risk management, which means you want to contain the danger and you want to minimize the probability that something happens. I know it is a long answer, but there are already a lot of regulations that do this kind of risk management. I think AI poses additional challenges in some areas, but in many areas it just contributes to issues that are already there. Discrimination is already there. Transparency issues are already there. So, rather than creating a targeted regulation framework for AI only, I think we should look at our regulations for all these industries and update them.

Nora: What do you think are the biggest challenges in AI ethics, right now, or your biggest concerns for the future?

Sebastian: I am a positive person, so I don’t think that we should be worried about AI. AI systems are designed by people, developed by people, and used by people, so I have no indication that there is a superhuman power doing something arbitrary and getting completely out of human control. The biggest challenge that I see is therefore that humans are ignorant of what they have in their hands. We need to make sure that people understand what AI is and what the challenges are. People need to be sensitized to these issues. If we do not manage to do that, then we will see a lot of unintended bad consequences. But if we are able to reach people, and that’s what we are trying to do with this interview as well, then we can spread the idea that we can do a lot of good things with AI; we just have to make sure that certain conditions are met. Then we can actually use AI for the greater good, and that is what I think we all want to see and work toward.

Nora: I think those are really good words to end the interview. Is there anything else you would like to share with our audience?

Sebastian: I think I have had my long monologue already, but if you are interested in learning more, feel free to visit the website that we put up! If you want to get engaged, please send me a message or contact anybody in your network. We are setting up an internal community for that as well. If you want to contribute, we would be very happy to add you to our discussions. Just reach out!

Nora: Well, you definitely got me more curious on the topic! Thank you so much for your time again, and for talking to me about AI ethics. I think it was really interesting, I learned a lot and I am very excited to share this with our audience. Thank you so much!

Sebastian: Thanks for having me!

Full Episode of Developers Digest 2205 (AI Ethics Interview starts at 18:40:00)

Links:

Full Episode of Developers Digest 2205

AI Ethics Steering Committee

AI Ethics Advisory Panel

AI Ethics Policy
