
AI's 'never-ending journey' to Super Intelligence - MIT's Max Tegmark lays out the route map

By Martin Banks

April 14, 2022


During my conversation with new Dynatrace CEO Rick McConnell, it became clear that the ideas of Professor Max Tegmark, the special guest keynote speaker at the opening session of the company’s recent conference, not only underpin the company’s next line of development in process management, but also suggest a far wider and more comprehensive mindset for everyone developing, specifying, or even just working with new Artificial Intelligence (AI) applications and technologies. It seemed logical, therefore, to pause and reflect a little more on what the good professor said.

Tegmark is a Professor at MIT, working in the areas of AI, machine learning, cosmology and physics. He is also President of the Future of Life Institute and a scientific director at the Foundational Questions Institute. His presentation was aimed at getting the audience to think not just big, but “cosmically big”, about AI and its future direction, in much the same way that we have all had to understand that the cosmos is not only vastly grander than our ancestors ever thought it to be, but that there is still a great deal of further 'grandness' for us yet to discover and explore.

The same applies to the technologies we are now developing, which he said are giving life the possibility to flourish like never before, for billions of years to come. To be sure, this is no five-year road map he is talking about. As he said:

I want to talk with you about a more inspiring journey, where the passengers aren't just three astronauts, but all of humanity, and which is powered not by rocket engines, but by Artificial Intelligence.

The never-ending journey

This, he suggested, means looking at AI in terms of its three core elements: what powers it, how we steer it, and the destinations we want it to help us reach.

When it comes to the first of those three, he wanted to be very clear about the need to move away from what he called “this silly carbon chauvinism idea that you can only be smart when you're made of meat”. To AI, biological and non-biological are irrelevant differentiators. It doesn’t matter whether carbon atoms in brain cells or silicon atoms in technology do the processing. The important part is that the processing gets done.

In addition, it is important that the most appropriate means of processing is used to advance both the speed and the scope of what can be processed, he added:

Not long ago, we couldn't do face recognition. Now, we can simulate fake faces and simulate your face saying stuff that you never said. Not long ago, we couldn't save lives with AI. Soon, I believe, we'll save more than a million lives per year by eliminating pointless road deaths, with self-driving vehicles, and even more lives by eliminating stupid mistakes in hospitals, and still more lives by improving diagnosis.

As to how far Tegmark thinks AI can go, he offers the analogy of a landscape of tasks, where the elevation represents how hard each task is for AI, and sea level represents what can be done today. Sea level rises as the power of AI grows, which means there are careers at the waterfront: those about to be disrupted by automation. At the other end of the scale sits the question: will AI eventually end up flooding all the land?

When AI matches human ability at all tasks, that is the point where Artificial General Intelligence (AGI) is achieved. But Tegmark sees things going further, with the development of Super Intelligence:

The idea is very simple. If we ever reach AGI then, of course, from that point on AI progress will be driven primarily not by humans, but by AI, which can go much faster than the typical human research and development timescale of years. And this raises this very controversial idea that AI might very rapidly through recursive self-improvement leave human intelligence far, far behind.
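To see why that compounding matters, consider a deliberately crude toy model (an illustration of the arithmetic only, not anything Tegmark presented): if human-driven R&D adds a fixed increment of capability per cycle, while a self-improving AI multiplies its capability every cycle, the gap widens explosively. Every number below is an arbitrary assumption.

    # Toy model (illustrative only): linear, human-driven progress
    # versus compounding, AI-driven recursive self-improvement.
    human = 1.0  # capability advanced by human R&D
    ai = 1.0     # capability advanced by a self-improving AI

    HUMAN_GAIN = 0.1  # assumed fixed gain per R&D cycle
    AI_FACTOR = 1.1   # assumed multiplicative gain per cycle

    for cycle in range(1, 51):
        human += HUMAN_GAIN  # grows linearly
        ai *= AI_FACTOR      # grows geometrically
        if cycle % 10 == 0:
            print(f"cycle {cycle:2d}: human {human:5.1f}   ai {ai:9.1f}")

By cycle 50 the linear track has reached about 6, while the compounding track has passed 100. The exact values mean nothing; the shape of the divergence is the point Tegmark is making.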

Is any of this actually going to happen? According to Tegmark, surveys show that most AI researchers predict AGI will happen within decades. As for Super Intelligence, while this was for some time considered to be impossible, the answer now, according to the laws of physics, is:

It's a little bit complicated, but yes. And if it doesn't, something terrible has happened to prevent it. We have to take this seriously, this possibility of Super Intelligence happening and perhaps in our lifetime. It's worth remembering that, from basic physics, what we do today is really pathetic. Physics says that there are all these other technologies, which are vastly more effective than anything we can build today. We haven't been able to figure out how to build that tech yet, but it's possible super intelligence might be able to get us there pretty quickly.

The hard part here is that it is currently impossible to second-guess, once AGI is developed, just how quickly it might figure out what such technologies would look like, and then build them. But he is willing to consider the possibility that it could happen in his lifetime.

Steering to what destination?

The power element is certainly possible, therefore, but what about using it to steer towards the goals that humans set? This question was the driving force behind Tegmark co-founding the small, non-profit Future of Life Institute, which aims to identify beneficial uses of technology. He explained:

I'm confident that we can have an inspiring future with high tech. But it's going to require winning the wisdom race: the race between the growing power of the technology and the wisdom with which we manage it. The challenge here is that we're used to winning this wisdom race, with less powerful tech, by just learning from mistakes. Learning from mistakes goes from being a good strategy to being a lousy strategy, and we're much better off actually being proactive rather than reactive, and getting things right the first time, because that might be the only time that we get.

MIT calls this approach Safety Engineering, the goal of which is guaranteeing the success of a mission or project. Tegmark suggested it should be the strategy to follow with ever more powerful AI, where it will be crucial to think through what could go wrong, so that it can be guaranteed to go right.

The Institute organized a series of conferences that produced the 23 Asilomar AI Principles, subdivided into three categories: Research, Ethics and Values, and Longer Term Issues. The goal is to direct AI research towards creating beneficial intelligence rather than undirected intelligence.

These are already being adopted by AI researchers worldwide, and include such self-evidently sensible principles as avoiding an arms race in algorithms that decide to kill people, transforming today's buggy and hackable computers into robust AI systems that we can trust, and mitigating AI-fuelled inequality. Tegmark said:

My opinion is that if we can manage to grow the world GDP by a massive factor, and we still can't figure out how to share all this wealth in such a way that everyone gets better off, well, then shame on us.

Of course, all this cuts both ways: AI can end up being ‘too good’ as well as very bad. Tegmark argued:

Let's not get side-tracked by having Hollywood make us worry about the wrong things. The biggest threat from AGI is not that it'll turn evil like in some silly movie, but that it's going to turn really, really competent and go off and accomplish goals that just aren't aligned with our goals. As we build AGI let's make sure that AI is built so that it can understand our human goals and adopt them and retain them. This is a very important question not just for humanity as a whole, but also for our businesses.

As for the destination for all this future effort, Tegmark suggested that the United Nations Sustainable Development Goals, which have been adopted by most countries on Earth, make a good place to start to ensure that AI can help humanity accomplish those goals better and faster. But in his view that is still scratching the surface:

Let's raise our ambition level and go far beyond them. Why should we limit ourselves to just talking about no poverty, when we can have amazing prosperity for everybody? Why should we just talk about climate action? Let's raise our ambition to a totally sustainable Earth.

My take

It may perhaps be too easy to dismiss all this as an academic tub-thumping about his pet subject, but a quick look around clearly tells us that the human race seems to be sliding into a quagmire, and that includes many of the businesses it has created over the years. It is no quirk of fate that many of the world’s most successful, and long-lasting, businesses are in the Attack (sorry, Defence) sector.

Maybe there is some hope to be found in AI helping us to create a new, more even keel that we can all get onto, but it will take not only the technology, but also the skill to exploit it with the best intentions and to ensure those intentions are met. Tegmark concluded with two pieces of advice:

First, draw a really clear red line between what you consider to be acceptable and unacceptable uses of AI. Tell everybody, let's stick to it. And then think hard about the acceptable part and try to articulate what's really exciting, because they (everyone) really need a positive vision, clearly articulated. Any good CEO knows that this is the fundamental driver of collaboration. And this is equally true in geopolitics.

