Reply To RicG On AI Optimism
RicG (@RickG) asks:
This might be a tall order, but is the reason you are optimistic more like “the arguments for AIxrisk are wrong” or “we are going to make it work despite everything”?
30-40% the former, 60-70% the latter? I think that most AI risks are chronic (e.g. gradual loss of human economic power) rather than acute (e.g. an AI supervirus kills everyone), and the solutions are going to require deep societal reform and fine taste in which variables to control.
Ultimately the situation I expect us to find ourselves in is something like gradually evolving into tightly enmeshed gardeners over the world-system, forced to balance explore-exploit and parasitic strategies.
https://www.beren.io/2023-04-23-Composable-latent-spaces-BCIs-modular-minds/
One of the reasons I'm not bullish on MAGIC-type proposals is that I simply do not think we have the social technology to make central institutions that everyone knows control the future and not have those immediately taken over by malign interests.
People frequently use the Manhattan Project for their intuitions about how to manage new technology, and this is terrible because the Manhattan Project really was a kind of one-off deal. It was a secret project run by one of the most exceptional public servants to ever live.
The more you dig into the details the more you realize we simply cannot do the Manhattan Project again. We didn't even do it intentionally the first time: Groves got priority 1 funding at the outset and then scope-creeped up to the full cost of inventing the bomb.
The Manhattan Project was not done "by the government", it was done by Groves pulling out the rolodex he'd built up over a career in government contracting, before the era of lowest-bidder contracts, and hand-picking private companies to build out the pieces of the pipeline.
Some of these companies, such as DuPont, did their part of the work at cost without profit, and without the board even knowing the full details, because at that time the US government's reputation was platinum and patriotic loyalty ran deep.
{gap where I pause to do something else and forget about the thread}
Sorry, I didn't fully answer your question. I think the AI X-Risk arguments are a little bit like if, before the industrial revolution, someone had started a cult around how perpetual motion machines would let someone drill into the earth's core and destroy it.
So they propose that only the state should have industrial machinery, to prevent acceleration of timelines to perpetual motion. They plot the Henry Adams curve of increasing horsepower from engines and say "eventually you'll have a singularity that lets you drill into the earth".
And they talk about how industry is a self-reinforcing feedback loop: once it gets going it'll get really fast, because you can feed and house more people to do drill capabilities research. We have to slow down engine progress before unlimited energy leads to doomsday devices.
Imagine this goes on for a while. How would this timeline react to something like nuclear power? "Oh wow, we found the unlimited energy source, see, we were right!"
The basic problem with the AI X-Risk arguments is that they're speculative models which imply beliefs about a ton of parameters we have much more evidence about now, but they haven't really been deeply refactored to take those new bits seriously, or frankly just replaced with new models.
It's just really easy to squint and abstract yourself into a place where "the Henry Adams curve implies we'll have accelerating returns towards doomsday drills" sounds reasonable if you throw out all engineering details and plot everything log-log with a fat magic marker.
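To make the fat-magic-marker failure concrete, here is a toy sketch in Python. The horsepower figures are invented for illustration (not real Henry Adams data): any power law is a straight line on log-log axes, so a crude line fit over a narrow range of data extrapolates confidently to whatever enormous number you like, with every engineering detail buried in a single slope.

```python
import numpy as np

# Invented "horsepower over time" data points, purely illustrative.
years = np.array([1840.0, 1860.0, 1880.0, 1900.0])
horsepower = np.array([1e5, 4e5, 2e6, 1e7])

# On log-log axes any power law hp = a * t**k is a straight line,
# so a single fitted slope appears to "explain" the whole curve.
t = years - 1800.0  # years since an arbitrary origin
k, log_a = np.polyfit(np.log(t), np.log(horsepower), 1)

# Extrapolating that same straight line a century past the data
# produces a confident-looking number with no engineering content.
t_future = 2000.0 - 1800.0
projected = np.exp(k * np.log(t_future) + log_a)
print(f"fitted exponent k = {k:.2f}, projected horsepower in 2000: {projected:.2e}")
```

The straight line is real; the mistake is trusting the slope orders of magnitude outside the regime the data came from.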
"We need to ban all home ownership of engines because a hobbyist might learn the secret of perpetual motion from it and then use it to drive a drill." is unfortunately just not that far off from actual doomer arguments about AI policy.
So when you ask me this it's just sort of like...wrong about what, in what way? "Unlimited power threatens civilization" is technically a true belief, that's what a nuclear bomb is. Am I supposed to say "oh perpetual motion wouldn't be dangerous"?
Am I supposed to say "perpetual motion is impossible"? We don't even know that now. We're, say, 99% sure it's impossible, but we can't truly rule it out; for all we know, if you were way more intelligent than a human you could acausally infer the cheat codes to reality.
This is why the burden of proof is supposed to be on the person who claims something is dangerous or harmful. Because it is so incredibly easy to hallucinate problems that don't exist and obsess over them. It is especially easy if you are willing to invoke unknown mechanics.
Another way to put it might be that you are many, many branches deep into an extremely dangerous timeline. We made a decision at some point around the Enlightenment to accept the consequences of knowing. As a civilization, we chose to learn the truth even if it destroyed us.
You want assurance nobody will ever invent a compound that destroys the world? That we'll never discover unlimited energy and blow out the center of the earth with it? You want all production carefully regulated into hereditary castes? We had that system; it was called feudalism.
When we gave up alchemy and the church, we did not have the periodic table. During the most consequential decisions that led to this timeline we had no assurance it would not result in total annihilation. Indeed, to many people at the time it felt like the world was ending.
If you feel near-total safety now, it is only because we have stopped believing in miraculous cheat codes for things like perpetual motion. The last place anyone is willing to believe in deep consequential secrets is AI; everything else is 'priced in'.
To explain the error gently: you had a cult centered around this big secret called 'recursive self-improvement' that needed IQ points instead of capital, a veritable philosopher's stone that would grant its wielder seemingly magic powers to reshape reality.
Then disaster struck: Minds turned out to be made of capital too. Suddenly 'artificial intelligence' went from a game of IQ to a game of capital, industrial processes where you put in X material to get Y effect.
The mystery began to evaporate and the cult has entered its slow decline. The straightforward update on the success of deep learning is that there is probably no Big Secret. "Scaling laws" as the secret is literally "the secret is that there is no secret": your map has been filled in.
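For concreteness, "scaling laws" here refers to the empirical finding (e.g. Kaplan et al., 2020) that loss falls as a smooth power law in compute. A minimal sketch with made-up constants, not fitted values, just to show why a power law reads as "predictable returns on capital" rather than a hidden trick:

```python
# Illustrative power-law loss curve, L(C) = c0 * C**(-alpha).
# The constants here are invented for demonstration, not fitted values.
def loss(compute: float, c0: float = 2.6, alpha: float = 0.05) -> float:
    """Loss as a smooth power law in compute: no threshold, no cliff."""
    return c0 * compute ** -alpha

# Every 10x of compute buys roughly the same fractional improvement,
# the signature of an industrial process rather than a Big Secret.
for compute in [1e0, 1e1, 1e2, 1e3, 1e4]:
    print(f"compute {compute:>8.0e} -> loss {loss(compute):.3f}")
```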
You might object "but you can't know that, the secret could be right around the corner!" and I could spend a bunch of time explaining my heuristics but honestly? You're right that I can't know, and frankly I don't have to. The warrant was that our map had a huge blank spot.
In 2015 if you had asked me how the brain does what it does I could not have told you. I could not have even begun to give a serious plausible explanation. It is now 2024 and we have very plausible models of brain function from deep learning.
We may not know the algorithm, but we know the mechanism; there is no fundamental mystery. I am not staring at a huge blank spot in my map going "huh, I just can't explain how the human mind can occur in the physical universe, there might be huge stuff there".
So you know, if I reasonably go "huh, my map has been filled in enough now that I no longer have a deep mystery to explain here, one I can freely fit laptop-singleton parameters to because it's so mysterious," that is no more reckless than letting the industrial revolution happen.
Am I saying that we won't discover big efficiency improvements? No. Am I saying there are no risks? Of course not. Am I saying we won't all die? I'm pretty sure we won't; it would be overconfident to say I'm totally certain.
But the original warrant is gone, totally finished.