
A Pragmatic Analysis Of OpenAI’s Testimony Before The Senate

source link: https://attilavago.medium.com/a-pragmatic-analysis-on-openais-testimony-before-the-senate-2eef9f75a5d9


Starring no other but OpenAI’s CEO, Sam Altman. Seven key takeaways.


Photo by Aditya Joshi on Unsplash

Artificial Intelligence. While at least as old as far too many of our geriatric, sleepy, droopy, drooling politicians, it has just now become the centre of global attention. To much of the world, it’s surprising, hence the hype at one extreme, and the doom-and-gloom at the opposite end. No wonder, then, that for nearly three hours we all had the opportunity to witness OpenAI’s CEO being grilled with questions about the future of humanity in an AI-enabled world.

While I have not been silent about my skepticism regarding ChatGPT and machine-generated information, I have always been a supporter of moderate automation. That puts me in a fairly small group of individuals advocating for a balanced use of technology, including AI, and able to interpret the entire conversation from several angles. Being human, I may still come with built-in bias, but in what follows I will attempt to hold as much of it back as possible, for the benefit of my readers, my comment section and my own emotional wellbeing. For those of you who want to watch the reference video, I have provided it below. It's nearly three hours long, but well worth watching.

It was a conversation

Compared to similar events in the past involving the likes of, say, TikTok's or Meta's CEOs, the tone of it all felt very different, and perhaps for good reasons, which I will touch on a bit later. It's safe to say barely any of it felt like an interrogation. Yes, there were plenty of tough questions, and some of the answers didn't always feel satisfactory, but the general tone was positive and gave the impression of a proactive space for conversation in an effort to prevent an AI disaster. Both sides admitted to past failures and to not wanting to repeat them.

Another quite noticeable energy in the room was that of an open group of individuals who don't dismiss the value of artificial intelligence and are not looking to pull the plug, kill it with fire and Men in Black everyone who ever knew about AI. This was more a group highly concerned that certain transformative technologies like AI and AGI are not regulated to the same extent as, say, building nuclear power stations — though anyone familiar with the Three Mile Island accident would argue there's plenty of work to be done there as well.

So, what about jobs?

Inevitably, one of the big topics was jobs. Are LLMs like GPT threatening the jobs, and therefore the livelihoods, of millions of people around the world? Many will claim that is precisely the case, and research has been published on the matter too. Others, including OpenAI's CEO, are less worried. In fact, Sam doesn't seem to worry about it at all. He believes that, just like the Industrial Revolution, the AI revolution will actually create more and better jobs than people have today. Except he's missing a crucial aspect: velocity of change.

The Industrial Revolution, or even the internet revolution, didn't happen in the space of a few months. It took a good 30 years for the internet to become semi-ubiquitous. Believe it or not, there are still plenty of places on the planet without any internet or any connectivity at all. That's why Starlink sells, and that's why Apple added satellite communication to its phones. OpenAI, however, opened its APIs to the general public and moved towards plugins within just months of ChatGPT's release. It's a very different reality, because even with the best intentions and systems in place, humans simply cannot adapt that quickly to an AI-driven job market.

In the long term, I am fairly certain that AI will not be what takes food off the table, but in the short and medium term it can cause economic and social chaos in a world that is already struggling with severe environmental and political instability. The Senate Judiciary Committee, while concerned, almost felt like it wasn't concerned enough, while Sam simply felt disconnected from reality.

Is copyright still a thing?

Yes, it very much is, and rightly so, the topic came up. I expected this to be one of those points where things started to heat up, because, as one of the Senate committee members pointed out, copyrighted materials were used by OpenAI, and copyright holders have already lost revenue because of it. A bit of a Napster moment all over again, except in a different era, when copyright and DRM are much more established than they were two decades ago.

Sam didn't seem to genuinely care. While his words expressed a vague acknowledgment along the lines of "sure, we made a mistake, we are in conversation with relevant stakeholders", he also seemed to suggest that copyright laws and rules don't really exist for an AI use-case, while in the same breath stating that content creators and artists should be remunerated for their work. It all made copyright sound like far too grey an area for people to get their underpants in a twist over.

In reality, while no specific law states how AI models should or should not have access to copyrighted content, the existing copyright laws, complex as they are, make it clear to anyone with common sense that taking copyrighted works and using them to create another product without a license is a no-no, and a lawsuit waiting to happen. If Sam doesn't want his arse hauled into court every day for the foreseeable decade, losing copyright cases, he needs to understand one basic fact: copyright owners have the upper hand here. Unless, of course, we all land in a communist dictatorship where no one has a right to anything.

A “small” matter of safety and privacy

Technically, these are two very different topics, but they're worth discussing under the same heading. National security is perhaps the number one agenda of every country. Artificial intelligence that could be employed to meddle with elections, or to influence people's everyday lives in ways that lead to political and social instability, is something that nobody wants. Not even Sam Altman. On this, he was clear and seemed to agree with the panel. He does, however, hope that governmental regulation will tackle this aspect, which to me feels like a very Hiroshima-Nagasaki moment: we built a tool that can throw the planet into political and social chaos, so here you go, dear politicians, make sure that doesn't happen. Trust me, it will happen, because whatever can go wrong will eventually go wrong. It's not an if, but a when.

While Sam did seem genuinely concerned about the national security aspect, when it came to people's general privacy he was far less convincing, dare I even say, knowingly hid certain facts. His claim that OpenAI provides methods for people to opt out of having their data used for training, or to delete their data, is true, but it's only a recent development, prompted by Italy — nice to see they're good at more than just pizza, spaghetti and riding around on Vespas — banning the service for not adhering to GDPR. In light of this, I don't think Sam Altman or OpenAI genuinely care about privacy; they'll just do the bare minimum to appear clean in the eyes of the law.

US vs. global framework for regulation

What came up several times was the need for regulation. Both sides of the conversation seemed to agree on that, but not necessarily in the same way. While the Judiciary Committee seemed to adopt a nationalistic, patriotic view in which the United States of America had to lead and establish the global rules and standards for AI development, Sam had a somewhat different take.

On this topic, I tend to lean much more towards Sam's take. It must be a global standard, rather than a regional or political one. Sure, the US can and perhaps even should be proactive about it, given that OpenAI is a US-based company, but a solid framework is already in development in the EU, and having a global standard, just as we have in other industries, is a far more productive and apocalypse-preventing approach. Representatives of the world should meet on this and co-develop the standard in a way that benefits humanity.

A far too relaxed view on AGI

AGI, short for Artificial General Intelligence, has come up a few times, and it’s probably the most worrying part of the entire dialogue. Some in the room seemed to take it seriously, others used it to minimise the perceived impact of generative AI, while the rest seemed to just accept that one day, it will be here and there’s nothing we can do about it.

This is the point where I wish I could have been in the room to shout, “have ye all fecking lost it?!?” The overwhelming attitude that “oh well, it’s gonna be here in say 20 years” is incredibly scary. This is a meeting where leaders of the free world and technologists are discussing the ramifications of generative AI as being anywhere between huge and transformative, and somehow when AGI is mentioned everyone just seems to shrug as if it’s something we should worry about when it’s here. How about banning it outright 20 years ahead of time?!? Has anyone considered that? 🤦‍♂️

Technocracy and oligarchy hand in hand with regulators

Perhaps the most hilarious part of the entire testimony was the panel admitting that having regulations and regulatory bodies means squat when those bodies tend to just bend to the will of big tech. The tone of the room was that of an exhausted system that has repeatedly failed to deliver on a Sisyphean task. The will and the need are palpable in the air, but the capability to deliver is completely non-existent.

That's partly because of the historical failure of many other regulations that started off with good intentions only to find themselves underfunded, understaffed and under-resourced, and partly because of a market-driven economy that's in the hands of a small group of tech behemoths. There you have it: an inevitably corrupt system that will do more harm than good.

