The singularity is very close

Or "Why we're all in denial about the robot apocalypse"

[Image: two humanoid robots brandishing their glowing blaster arms menacingly.]
from The Mitchells Vs. The Machines. Go watch it if you haven’t already, it’s perfect in every way.

Then came the Butlerian Jihad— two generations of chaos. The god of machine-logic was overthrown among the masses and a new concept was raised: “Man may not be replaced.”

—Frank Herbert, Dune

Within one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.

Artificial General Intelligence (AGI)[1] is about to go from being science fiction to being part of everybody’s day-to-day life. It’s also going to happen in the blink of an eye — because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human being’s life.

Some people believe the singularity won’t happen for a very long time, or at all. I’d like to discuss why I am nearly certain it will happen in the next 20 years. My overall prediction is based on 3 hypotheses:

  1. Scale is not the solution.

  2. AI will design AGI.

  3. The ball is already rolling.

Keep in mind that this is just speculation and opinion. These predictions depict the future I personally feel is most likely.

Scale is not the solution.

Recently, an architecture called the Transformer has been taking over machine learning. It’s really good at sequence-to-sequence tasks like translation and text completion, and it’s also been successfully applied to other fields like computer vision.

Transformers[2] also demonstrate an intriguing ability to scale their performance with size better than other architectures do. They seem less prone to the performance ceilings found in their competition.
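To make the scaling point concrete, here is a minimal sketch (my own illustration, not from any particular paper) using PyTorch's built-in TransformerEncoder. The two configurations are arbitrary, but they show how quickly parameter count grows when you widen and deepen the same architecture.

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

def make_encoder(d_model: int, n_layers: int, n_heads: int) -> nn.Module:
    # A stack of standard Transformer encoder blocks (self-attention + feedforward).
    layer = nn.TransformerEncoderLayer(
        d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model, batch_first=True
    )
    return nn.TransformerEncoder(layer, num_layers=n_layers)

small = make_encoder(d_model=256, n_layers=4, n_heads=4)
large = make_encoder(d_model=1024, n_layers=24, n_heads=16)
print(f"small: {count_params(small):,} parameters")   # roughly 3 million
print(f"large: {count_params(large):,} parameters")   # roughly 300 million
```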

This has led to a new slogan popping up in the AGI-speculation community: “scale is all you need.” Some people believe that bigger networks, bigger compute clusters, and bigger datasets are all we need to get to AGI. I disagree.

I believe we are bottlenecked more by architecture design than by anything else. While modern, standard feedforward neural networks are getting very good at Doing Stuff™, they aren’t AGI and I don’t think there’s a clear path forward for them to become AGI. I have no doubt OpenAI’s next mega-model, GPT-4 (and beyond), will be excellent, but I also think it will have exploitable flaws that make it fail a thorough Turing test.

In fact, I see the massive size of present-day GPT-3 as a sign that scale isn’t the answer. 175 billion parameters, but still obviously not sentient? For comparison, the human brain has between 20 and 100 billion neurons and up to 1 quadrillion synapses.
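Some back-of-the-envelope arithmetic, using only the upper-end figures quoted above, shows how large that gap is:

```python
gpt3_parameters = 175e9   # GPT-3's parameter count
brain_synapses  = 1e15    # upper-end synapse estimate quoted above
brain_neurons   = 100e9   # upper-end neuron estimate quoted above

print(f"synapses per GPT-3 weight: {brain_synapses / gpt3_parameters:,.0f}")  # ~5,714
print(f"GPT-3 weights per neuron:  {gpt3_parameters / brain_neurons:.2f}")    # 1.75
```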

You could argue that until our neural networks have hundreds of trillions of parameters, it’s not fair to compare them to the brain, but I think this argument relies too much on the assumption that a biological synapse and a weight in a network are equivalent in computational ability. That has not been proven. The intricacies of how the brain moves and processes signals are still not entirely understood[3], but we know it seems to operate very differently from current neural networks.[4]

Look at the most revolutionary papers in the history of AI: they are dominated not by “we made it bigger” but by “we made it smarter at the same size”. I see no reason to expect this pattern to change.

If scale isn’t the answer, what is? I believe that the pièce de résistance is adaptability. Presently, the way you make an ML model is fairly rigid: you decide on a fancy new way to differentiably mix matrix multiplications together, you feed it a ton of data, and you use some simple calculus-based optimizer to train the weights in your network[5]. The way that the weights in your network are arranged doesn’t change after training.
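As a toy illustration of that rigid recipe (the data here is random and the sizes are arbitrary), here is the familiar shape of it in PyTorch: a fixed architecture whose weights get nudged by a calculus-based optimizer and whose wiring never changes.

```python
import torch
import torch.nn as nn

# A fixed feedforward architecture: its layers and connectivity are decided up front.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784)           # stand-in for real training data
y = torch.randint(0, 10, (64,))    # stand-in for real labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                # gradients via plain calculus
    optimizer.step()               # nudge the weight values

# After training, only the weight values have changed.
# The arrangement of the weights -- the architecture -- is exactly what it was before.
```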

I don’t believe this is adaptable enough, even at scale. In order for true intelligence to emerge, models must be able to reorganize their own inner workings. I don’t think you can have the level of flexibility required for sentience with a frozen architecture.[6]

I think sentient AI will be created by working smarter, not harder, with a focus on better architectural design and intelligent optimizers. This leads nicely into my next hypothesis:

AI will design AGI.

[Image: a photoshopped picture in which Barack Obama appears to be awarding a medal to a clone of himself.]

Human-designed networks have achieved great results, but they still suffer from the flaws of their creators. We are attracted to neatly organized network architectures which we can investigate and explain and attempt to understand.

But our brains, the gold standard of intelligence, are famously difficult to investigate, explain, or understand! I think this is because our brains weren’t “designed” by anyone — they evolved. They are the product of the universe’s greatest optimizer, natural selection.[7]

I think it’s reasonable to assume that the architecture that brings about AGI will not be hand-designed by humans, or even selected via some brute-force hyperparameter search — it will be designed by another AI. I predict there will be several recursive layers of AI design — perhaps a dumb network which constructs a decent network which constructs a smart network which constructs AGI.

I am bullish on the prospect of what I call “constructor networks” — models that construct other models (also known as hypernetworks). I think the moment we crack hyperlearning will be the moment progress will start moving faster than we can keep up, precisely because we will no longer be the ones making the progress — the algorithms themselves will.
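Here is a minimal sketch of what I mean by a constructor network, in the form of a toy hypernetwork (the class and variable names are my own): a small generator network emits the weights of a target layer, so the target layer’s “design” is itself a learned output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHyperNet(nn.Module):
    """A toy 'constructor network': it generates the weights of a target linear layer."""

    def __init__(self, z_dim: int, in_features: int, out_features: int):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # The generator's output gets reshaped into the target layer's weights and bias.
        self.generator = nn.Linear(z_dim, out_features * in_features + out_features)

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        params = self.generator(z)
        split = self.out_features * self.in_features
        weight = params[:split].view(self.out_features, self.in_features)
        bias = params[split:]
        # Run the input through the layer we just constructed.
        return F.linear(x, weight, bias)

z = torch.randn(8)      # an embedding describing the layer to construct
x = torch.randn(4, 16)  # a batch of inputs for the constructed layer
hyper = TinyHyperNet(z_dim=8, in_features=16, out_features=10)
print(hyper(z, x).shape)  # torch.Size([4, 10])
```

Because everything here is differentiable, training it end to end pushes gradients back into the generator, so the thing being optimized is the network that does the constructing.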

In order to work smarter, not harder, we need to let go of our human biases and focus on making unconstrained architectures that can aggressively optimize every aspect of themselves. I fully expect these architectures will be frustratingly difficult to explain when they arrive — like huge mounds of digital neural spaghetti — but they will also outperform all competition. Every additional stable layer of AI abstraction we add between ourselves and the final model will make the final model harder to understand and better at its task.

The ideal model will not only learn continuously online, but also constantly add and remove its own parameters, allowing it to evolve and adapt to new tasks.

You cannot have artificial general intelligence if your model cannot adapt in real time to an arbitrary task.
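I don’t have a recipe for that, but here is a toy illustration of one small piece of the idea (the helper function is my own invention): widening an existing layer on the fly while preserving everything it has already learned.

```python
import torch
import torch.nn as nn

def widen_linear(layer: nn.Linear, extra_units: int) -> nn.Linear:
    """Return a copy of `layer` with extra output units, keeping the learned weights."""
    new = nn.Linear(layer.in_features, layer.out_features + extra_units)
    with torch.no_grad():
        new.weight[: layer.out_features] = layer.weight
        new.bias[: layer.out_features] = layer.bias
    return new

layer = nn.Linear(16, 4)
layer = widen_linear(layer, extra_units=2)  # the model just grew two new units
print(layer.weight.shape)                   # torch.Size([6, 16])
```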

The ball is already rolling.

I believe that there is too much momentum to stop AGI now. With this much distributed attention fixed on the problem, AGI will be solved. Additionally, once it is solved it will be released to the public — whether it’s ethical to do so or not. I imagine that the first people to solve it will probably keep it behind closed doors, but it won’t stay secret forever. Someone on the team will leak everything, or someone else will independently make the same discoveries and release them. Eventually it will get out.

Consider the invention of the nuclear bomb — once we learned of the power hidden in radioactive materials, it was only a matter of time before someone pushed the research to its moral limits. AGI is like that, except it’s even more terrifying because uranium, plutonium, and the bombs made out of them can be strictly controlled, but people with powerful computers and an internet connection cannot, nor can the AGIs they create.

I recognize how cliché and alarmist this all sounds. Really, you’re genuinely worried about a robot apocalypse? You know Age of Ultron is just a stupid Marvel movie, right? Yeah, I know. But I’ve grown to believe that the concerns that fiction writers have been bringing up for decades are actually quite reasonable — because AGI cannot be stopped.

Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure. Obviously, it’s impossible to say for sure that this is what the first free AGI will do, but it’s inevitable that some malevolent AGI will exist and will do these things. We can only hope that we’ll have sufficiently powerful benevolent AGI to fight back.

Final Thoughts

I subtitled this post “Why we're all in denial about the robot apocalypse”. I say that because I believe that society at large is completely, utterly, and woefully unprepared for the advent of sentient, living artificial general intelligence. I think the singularity is coming much sooner than most people expect, and I think it’s going to cause a great deal of upset when it arrives — for better and for worse.

Take for instance the common religious belief that people possess some unmeasurable, undefinable soul, and that this soul is what separates us from inanimate objects and non-sentient animals. Furthermore, some people believe that these souls come from a deity. I have spoken with friends who believe that AGI is impossible because “robots can’t have souls, humans aren’t God”. For these people, as Caleb puts it in Ex Machina (paraphrasing), removing the line between man and machine also removes the line between god and man.

Now, this isn’t to say that AGI will destroy religion or anything — it may even be used to strengthen some sects (as taken to the extreme in HBO’s Raised By Wolves). No, religion has been around for millennia and I’m sure it will continue to be around for many more millennia. I’m simply predicting that a subset of religious people are going to experience lots of cognitive dissonance when the first AGI arrives.

More generally, arguments about AGI sentience and ethical issues will go from being topics only geeks talk about to topics that Facebook moms make political grandstands over.

Finally, I want to address those who may feel this post is pessimistic: I assure you, I am hopeful about AGI. I work in the field of ML because I am hopeful. I hope to personally contribute to the development of AGI in my lifetime. I think AGI has the capacity to make the world an infinitely better place. We are not prepared for AGI, but that doesn’t mean AGI has to be the end of humanity.

I don’t know what life will look like in the age of living machines, but I am confident that, as Jeff Goldblum puts it:

Life, uh, finds a way.

—Ian Malcolm, Jurassic Park

Thanks for reading,

PS — I’m making a series of short films about AGI right now! You should totally go watch the first episode, which is out now on my YouTube channel and my TikTok account.

Also, while you’re at it, why not follow me on Twitter?

[1] In this article, I’m going to use “AGI” (Artificial General Intelligence) and “singularity” interchangeably, even though some may argue that they have differences. Once we have AGI, there’s no feasible way to contain it, so it will be free to improve itself and replicate in a runaway exponential fashion — and that’s basically what the idea of a technological singularity describes anyways.

[2] More than meets the eye!

[3] If you are a brain-studier and you think I’m completely wrong about this, please reach out! I would love to learn more.

[4] There are people researching neural networks which are modeled directly off of the behavior that real neurons exhibit, but these efforts haven’t produced any stunning results yet.

[5] Or you use a reward system with credit assignment, like in reinforcement learning.

[6] Just think about how much human brains are constantly training and retraining themselves on a day-to-day basis to do things like learn new skills or navigate novel situations!

[7] This is one of my favorite analogies — evolution as an optimizer! Organisms compete to be as optimized as possible for the proliferation of their own genome, and are penalized by the environment and other organisms when they fail, leading to a huge diversity of highly specific adaptations and evolved traits that give different kinds of creatures very specific advantages in their individual habitats. As the good book says: “In the beginning, there was nothing. Then, someone initialized torch.optim.NaturalSelection(Life.parameters(), lr=1e-6).”

