Source link: https://diginomica.com/world-economic-forum-global-technology-governance-summit-youtube-ceo-susan-wojcicki-content

YouTube CEO Susan Wojcicki on content, creativity and coping with regulators who don't know what they want

By Stuart Lauchlan

April 8, 2021

11 min read

The "violative view rate" (VVR) at YouTube has dropped over 70% since the end of 2017. That’s a new stat coming out of the firm’s quarterly transparency report and one that the company clearly hopes will fend off criticism about a lack of vigilance around abuse of the platform by bad actors. 

That VVR now stands at 16 out of every 10,000 views being of content that violates YouTube policies, according to CEO Susan Wojcicki, speaking at the World Economic Forum Global Technology Governance Summit this week. That’s an important milestone, she feels: 

We have been asked many times by governments, by press, by advertising, by the creator community, about this violative rate and we were able to show exactly how good we are at enforcement of our policies. We were able to show that we have a very high ability to find this content and show exactly what that number was. We were also able to show that we were able to reduce it significantly over time. 
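The transparency report's methodology isn't detailed in this session, but a figure like 16 in 10,000 is, in essence, a sampled proportion: take a random sample of views, have each reviewed, and count how many landed on violative content. A minimal sketch in Python, where the function, sample size and underlying rate are all hypothetical:

```python
# Hypothetical sketch of estimating a violative view rate (VVR) from
# a random sample of views; not YouTube's actual methodology.
import random

def estimate_vvr(view_sample: list) -> float:
    """Return violative views per 10,000, given a sample where each
    entry is True if the viewed video violated policy."""
    if not view_sample:
        raise ValueError("empty sample")
    return 10_000 * sum(view_sample) / len(view_sample)

# Simulate a population where roughly 0.16% of views are violative,
# matching the 16-in-10,000 figure quoted above.
random.seed(42)
sample = [random.random() < 0.0016 for _ in range(1_000_000)]
print(f"Estimated VVR: {estimate_vvr(sample):.1f} per 10,000 views")
```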

She attributes the sharp reduction since 2017 to policy improvements, but also to a lot of hard work that’s gone into the machine learning systems behind the scenes: 

The machines are good, we can find content across the board, but something like hate speech, or something that has a lot of context, would be something that would be harder from a machine standpoint to be able to detect. We've been able to really fine-tune our machines so that we can find a lot of this content and it is flagged, but it doesn't necessarily mean that it's removed. So what happens is the machines will flag it and then it will be sent to human reviewers who will determine whether or not this is in fact violative.
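In pipeline terms, the division of labour she describes is a two-stage flow: a model flags candidate videos, and flagged items are queued for a human verdict rather than being removed automatically. A hypothetical sketch of that flow follows; every class, threshold and score here is illustrative, not YouTube's actual system:

```python
# Hypothetical two-stage moderation flow: a model flags likely
# violations, humans make the removal decision. All names, scores
# and thresholds are illustrative.
from dataclasses import dataclass, field

FLAG_THRESHOLD = 0.7  # assumed classifier confidence for flagging

@dataclass
class Video:
    video_id: str
    violation_score: float  # stand-in for a classifier's output

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def maybe_flag(self, video: Video) -> None:
        # Flagging only queues the video; nothing is removed here.
        if video.violation_score >= FLAG_THRESHOLD:
            self.pending.append(video)

    def human_review(self, is_violative) -> list:
        # Removal happens only on a human verdict.
        removed = [v.video_id for v in self.pending if is_violative(v)]
        self.pending.clear()
        return removed

queue = ReviewQueue()
for vid in (Video("a", 0.95), Video("b", 0.40), Video("c", 0.80)):
    queue.maybe_flag(vid)

# Toy "reviewer": confirm only the highest-scoring flags.
print(queue.human_review(lambda v: v.violation_score > 0.9))  # ['a']
```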

That last point brings us back to the unanswered question in the whole social media content debate - what actually constitutes bad content? There are clearly some topics that an overwhelming majority can agree are beyond the pale, such as child abuse. But what’s to be done about politicians complaining when their content is judged to be bad? What’s the dividing line between free speech and a naked pack of lies? 

Wojcicki makes the case that there are two different conversations to be had here, the first being the obvious one between users, government and society in general: 

Everyone seems to have an opinion about this - what is good content, what's bad content, what content should be up, what should be down? We engage with many different groups across many different topics, and I'd say that's one conversation. 

On the back of that, YouTube’s community guidelines are shaped and published and that sparks a second conversation: 

There's a different question, which is, 'Well, how good a job do you do at removing that content once you've identified it?'. This report that just came out showed exactly where we are, which is at 99.85% [success]…So our goal is to break [the debate] into two different conversations - first, what the policies should be, and then do we do a good job enforcing them once we have those policies?

Cute kittens were simpler

An important variable here is what people come to YouTube to look for and how that shapes what is deemed to be good or bad content. When all we wanted to see were videos of kittens, life was much simpler. But the past year, with COVID and the US Election, has accentuated both the positive and negative power of the platform, as well as generating further controversy over what is acceptable content. On the one hand, people turned to YouTube for useful COVID information and found it; on the other, they also risked finding a lot of misinformation on the same subject. The same is true of the Election, particularly its violent aftermath and the unfounded conspiracy theories of the more extreme Trump supporters. 

Diversity of information on the platform is one of its greatest assets, argues Wojcicki: 

What a lot of people love about YouTube is they can say, 'I went and I found this specific video that I used to watch when I lived in this foreign country far away 40 years ago and I found it on YouTube', or 'I had to fix something that was very specific in my house and I could do that on YouTube'...I think educational content is incredibly important to YouTube and almost everyone comes to YouTube to learn something. We just had this study that said over 77% of people said they came to YouTube to learn something. Just anecdotally, everyone tells me how they fixed something in their house [using YouTube]. 

When it comes to authoritative content around, for example, health issues, classification of material is taken very seriously, she insists, much more so than for 'how to fix a leaky faucet': 

If you're looking for COVID information, we actually can say, 'Look, your local health authority, the CDC or whatever [the equivalent is in whatever] country you're in, or the World Health Organization, those are all organizations that we can trust', as opposed to some channel that just showed up that we don't have any kind of authoritative information about. So we definitely have a concept with information about authoritative sources and we make sure that when people are looking for information that is sensitive, we show those authoritative sources. But if you're in the entertainment area or you're looking at how to fix something or how to learn something about some obscure topic, it's really hard to put some judgment about what is the best content that's out there.
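The distinction she draws - surface known-authoritative sources first for sensitive queries, rank everything else on relevance alone - amounts to a simple branch at ranking time. The sketch below is purely illustrative; the topic list, channel list and scoring are assumptions, not YouTube's real logic:

```python
# Hypothetical sketch: elevate known-authoritative channels for
# sensitive topics; rank everything else on relevance alone.
SENSITIVE_TOPICS = {"covid", "vaccines", "cancer"}       # assumed
AUTHORITATIVE = {"WHO", "CDC", "LocalHealthAuthority"}   # assumed

def rank(results, query_topic):
    if query_topic in SENSITIVE_TOPICS:
        # Authoritative sources first, relevance as the tie-breaker.
        return sorted(
            results,
            key=lambda r: (r["channel"] not in AUTHORITATIVE,
                           -r["relevance"]),
        )
    # Entertainment, how-to, obscure topics: relevance only.
    return sorted(results, key=lambda r: -r["relevance"])

results = [
    {"channel": "RandomNewChannel", "relevance": 0.9},
    {"channel": "WHO", "relevance": 0.7},
]
print([r["channel"] for r in rank(results, "covid")])    # WHO first
print([r["channel"] for r in rank(results, "faucets")])  # relevance only
```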

All about the algorithm

At the end of the day, it all comes back to the underlying algorithm powering the search, and here, says Wojcicki, a lot of effort has gone in, with a great deal of attention paid to how to handle ‘borderline' content:

When we deal with information, we want to make sure that the sources that we're recommending are authoritative - news, medical science, etc. We also have created a category of more borderline content, where sometimes we'll see people looking at content or there'll be content that's lower quality and borderline, so we want to be careful about not over-recommending that. That's content that stays on the platform, but it's not something that we're going to recommend. 
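The defining property of that 'borderline' category is that it touches recommendation, not hosting: the video stays up, but the system won't push it. A minimal hypothetical sketch of the rule, with made-up labels:

```python
# Hypothetical stay-up-but-don't-recommend rule for borderline
# content; the labels and logic are illustrative only.
from enum import Enum

class Label(Enum):
    AUTHORITATIVE = "authoritative"
    NEUTRAL = "neutral"
    BORDERLINE = "borderline"   # remains on the platform
    VIOLATIVE = "violative"     # removed entirely

def hosted(label: Label) -> bool:
    return label is not Label.VIOLATIVE

def recommendable(label: Label) -> bool:
    # Borderline content is hosted but never recommended.
    return label in (Label.AUTHORITATIVE, Label.NEUTRAL)

for label in Label:
    print(f"{label.value}: hosted={hosted(label)}, "
          f"recommendable={recommendable(label)}")
```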

But there are pros and cons to this approach. On the one hand, people get more assurance that they’re getting content from reliable sources. The downside though is that smaller channels and content providers can struggle to get their material recognized and promoted. But that’s a trade-off that has to happen, suggests Wojcicki, a hard lesson learned following the Las Vegas mass shooting in 2017: 

Unfortunately, there were a lot of people who were uploading content that was not factual, that was not correct. It's much easier to just make up content and post it from your basement than it is to actually go to the site and have high quality journalistic reporting. That was just an example of what happens if you don't have that kind of ranking. So, we want to enable 'citizen journalism' to be able to report and other people to be able to share information and new channels, but when we're dealing with a sensitive topic, we have to have that information coming from authoritative sources, so that the right and accurate information is viewed by our users first.

As to the need to enable the ‘new Justin Bieber’ to be discovered, this hasn’t been forgotten, she adds: 

We want to be able to enable those new artists to break. But if you're looking for something like cancer information, you don't want to see someone who is just posting information for the first time. When you're dealing with cancer, you want to see it from established medical organizations. So, what we've done is really fine-tune our algorithms to be able to make sure that we are still giving the new creators the ability to be found…but when we're dealing with sensitive areas, we really need to take a different approach.

Government wants what?

Putting its own house in order and being seen to do so is important for YouTube at a time when legislators around the world are looking to rein in social media firms. Wojcicki has avoided a turn in the Zuckerberg Seat of Shame in Congress so far, but is aware of the need to find a working relationship with government that results in sensible regulation, not just knee-jerk ‘something must be done’ responses: 

The challenge comes when we get regulation that's very broad and is not well-defined. What is hate, what is harmful? Those are not things that are easily defined. There are many, many different interpretations of that, depending upon what you're handling. The challenge we have is when we have overly-broad regulation that requires us to potentially remove a lot of content. That would not be good in the end for our users. 

She cites as an example an issue with what was Article 13 of the European Union Copyright Directive, which needed a lot of lobbying of policymakers to stop it going too far in terms of how it dealt with copyrighted material. Coming up is whatever the Biden administration decides to do around Section 230 of the US Communications Decency Act, which makes users of social platforms, not the platform providers, liable for the content they post. Trump demanded this be revoked in the final weeks of his term, but failed to push it through in the face of bipartisan dissent. Biden has yet to show his hand, but is known to favor some form of reform in this area. Wojcicki says: 

I worry always when I see regulation that would potentially cause us to hurt a lot of the growth that we've seen from the Internet. I'd say we're aligned [with government] when it comes to keeping community safe. We want to do everything we can and we want the definition of the language to be tight enough that we can actually comply in a way that is clear. We also have to just be really careful about the unintended consequences of some of the copyright [laws] or even Section 230. What could go wrong that could cause us to have to remove a lot of content? That would be really devastating for the internet and for the creative economy.

But that means convincing a lot of people with strongly-held views who don’t share Wojcicki’s concerns and who don’t necessarily know what it is that they want: 

There are a lot of lawmakers who want us to remove more content and then a lot of lawmakers who want us to leave up more content. It's not really clear what it is that lawmakers want to solve for in the first place, and that makes it really challenging to address. I think there are many ways to be able to address what the objectives are and we'll certainly work closely with them to try to achieve those objectives, but right now it's not clear exactly what those objectives are. There seems to be a lot of disagreement about it. Until that's clarified, it's hard for us to figure out exactly what the right next steps are. 

My take

The responsibility pitch from YouTube is persuasive, but all it takes is a search for QAnon content to find that the conspiracy theorists are still out and about and that the platform has a long game of ‘whack-a-mole' ahead of it. Wojcicki's point about legislators who want to do ‘something', but don't have a clear idea of what that ‘something' is, is very well made. There are too many examples from around the world - US, UK, EU - of politicians on the make who vow to be the person to tame the social media monster and look no further than the resulting ‘I'm tough!' headlines they crave. Such putative legislators are, in their own way, as dangerous as the QAnon loons...

