‘Sic AIs on each other’ to solve artificial intelligence threat: David Brin, author
David Brin, the Hugo and Nebula-winning science fiction author behind the Uplift novels and The Postman, has devised a plan to combat the existential threat from rogue artificial intelligence.
He says only one thing has ever worked in history to curb bad behavior by villains. It’s not asking them nicely, and it’s not creating ethical codes or safety boards.
It’s called reciprocal accountability, and he thinks it will work for AI as well.
“Empower individuals to hold each other accountable. We know how to do this fairly well. And if we can get AIs doing this, there may be a soft landing waiting for us,” he tells Magazine.
“Sic them on each other. Get them competing, even tattling or whistle-blowing on each other.”
Of course, that’s easier said than done.
Magazine chatted with Brin after he gave a presentation about his idea at the recent Beneficial Artificial General Intelligence (AGI) Conference in Panama. It was easily the best-received speech of the conference, greeted with whoops and applause.
Brin puts the “science” into science fiction writer — he has a PhD in astronomy and consults for NASA. Being an author was “my second life choice” after becoming a scientist, he says, “but civilization appears to have insisted that I’m a better writer than a physicist.”
His books have been translated into 24 languages, although his name will forever be tied to the Kevin Costner box office bomb, The Postman. It’s not his fault, though; the original novel won the Locus Award for best science fiction novel.
Privacy and transparency proponent
An author after the crypto community’s heart, Brin has been talking about transparency and surveillance since the mid-1990s, first in a seminal article for Wired that he turned into a nonfiction book called The Transparent Society in 1998.
“It’s considered a classic in some circles,” he says.
In the work, Brin predicted new technology would erode privacy and that the only way to protect individual rights would be to give everyone the ability to detect when their rights were being abused.
He proposed a “transparent society” in which most people know what’s going on most of the time, allowing the watched to watch the watchers. This idea foreshadowed the transparency and immutability of blockchain.
In a neat bit of symmetry, his initial thoughts on incentivizing AIs to police each other were first laid out in another Wired article last year, which formed the basis of his talk and which he’s currently in the process of turning into a book.
History shows how to defeat artificial intelligence tyrants
A keen student of history, Brin believes that science fiction should be renamed “speculative history.”
He says there’s only one deeply moving, dramatic and terrifying story: humanity’s long battle to claw its way out of the mud, the 6,000 years of feudalism and people “sacrificing their children to Baal” that characterized early civilization.
But with early democracy in Athens and then in Florence, Adam Smith’s political theorizing in Scotland, and with the American Revolution, people developed new systems that allowed them to break free.
“And what was fundamental? Don’t let power accumulate. If you find some way to get the elites at each other’s throats, they’ll be too busy to oppress you.”
Artificial intelligence: hyper-intelligent predatory beings
Regardless of the threat from AI, “we already have a civilization that’s rife with hyper-intelligent predatory beings,” Brin says, pausing for a beat before adding: “They’re called lawyers.”
Apart from being a nice little joke, it’s also a good analogy in that ordinary people are no match for lawyers, much less AIs.
“What do you do in that case? You hire your own hyper-intelligent predatory lawyer. You sic them on each other. You don’t have to understand the law as well as the lawyer does in order to have an agent that’s a lawyer who’s on your side.”
The same goes for the ultra-powerful and the rich. While it’s difficult for the average person to hold Elon Musk accountable, another billionaire like Jeff Bezos would have a shot.
So, can we apply that same theory to get AIs to hold each other accountable? It could, in fact, be our only option, as their intelligence and capabilities may grow far beyond what human minds can even conceive.
“It’s the only model that ever worked. I’m not guaranteeing that it will work with AI. But what I’m trying to say is that it’s the only model that can.”
Individuating artificial intelligence
There is a big problem with the idea, though. All our accountability mechanisms are ultimately predicated on holding individuals responsible.
So, for Brin’s idea to work, the AIs would need to have a sense of their own individuality, i.e., something to lose from bad behavior and something to gain from helping police rogue AI rule breakers.
“They have to be individuals who can be actually held accountable. Who can be motivated by rewards and disincentivized by punishments,” he says.
The incentives aren’t too hard to figure out. Humans are likely to control the physical world for decades, so AIs could be rewarded with more memory, processing power or access to physical resources.
“And if we have that power, we can reward individuated programs that at least seem to be helping us against others that are malevolent.”
But how can we get AI entities to “coalesce into discretely defined, separated individuals of relatively equal competitive strength?”
Here, Brin’s answer drifts into the realm of science fiction. He proposes that some core component of the AI — a “soul kernel,” as he calls it — should be kept in a specific physical location even if the vast majority of the system runs in the cloud. The soul kernel would have a unique registration ID recorded on a blockchain, which could be withdrawn in the event of bad behavior.
It would be extremely difficult to regulate such a scheme worldwide, but if enough corporations and organizations refuse to conduct business with unregistered AIs, the system could be effective.
Any AI without a registered soul kernel would become an outlaw and shunned by respectable society.
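The article doesn’t specify how such a registry would be implemented, but the mechanics it describes — a unique ID derived from the soul kernel, registration on a shared ledger, and revocation for bad behavior — can be sketched in a few lines. The following is a hypothetical toy stand-in, not anything Brin has proposed in code; all names (`SoulKernelRegistry`, `register`, `withdraw`) are illustrative assumptions.

```python
# Toy sketch of the "soul kernel" scheme described above: derive a
# unique ID for an AI's core component, record it in a registry, and
# allow that registration to be withdrawn if the AI misbehaves.
# Names and structure are illustrative only.

import hashlib


class SoulKernelRegistry:
    """Stand-in for an on-chain registry of AI soul-kernel IDs."""

    def __init__(self):
        self._registered = {}  # kernel_id -> operator name

    def register(self, kernel_bytes: bytes, operator: str) -> str:
        """Derive a unique ID from the kernel's contents and record it."""
        kernel_id = hashlib.sha256(kernel_bytes).hexdigest()
        self._registered[kernel_id] = operator
        return kernel_id

    def withdraw(self, kernel_id: str) -> None:
        """Revoke a registration in the event of bad behavior."""
        self._registered.pop(kernel_id, None)

    def is_registered(self, kernel_id: str) -> bool:
        """What a counterparty checks before doing business with an AI."""
        return kernel_id in self._registered


registry = SoulKernelRegistry()
kid = registry.register(b"model-checkpoint-checksum", operator="ExampleLab")
assert registry.is_registered(kid)

registry.withdraw(kid)  # the AI becomes an "outlaw"
assert not registry.is_registered(kid)
```

In this sketch, enforcement is purely social: nothing stops an unregistered AI from running, but — as the article notes — refusing to transact with unregistered IDs is what gives the scheme teeth.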
This leads to the second big issue with the idea. Once an AI becomes an outlaw, or if it never registered in the first place, we’d lose any leverage over it.
Is the idea to incentivize the “good” AIs to fight the rogue ones?
“I’m not guaranteeing that any of this will work. All I’m saying is this is what has worked.”
Three Laws of Robotics and AI alignment
Brin continued Isaac Asimov’s work with Foundation’s Triumph in 1999, so you might think his solution to the alignment problem would involve hardwiring Asimov’s three laws of robotics into the AIs.
The three rules basically say that robots can’t harm humans or allow harm to come to humans. But Brin doesn’t think the three laws of robotics have any chance of working. For a start, no one is making any serious effort to implement them.
“Isaac assumed that people would be so scared of robots in the 1970s and 80s — because he was writing in the 1940s — that they would insist that vast amounts of money go into creating these control programs. People just aren’t as scared as Isaac expected them to be. Therefore, the companies that are inventing these AIs aren’t spending that money.”
A more fundamental problem, Brin says, is that Asimov himself realized the three laws wouldn’t work.
One of Asimov’s robot characters named Giskard devised an additional law known as the Zeroth Law, which enables robots to do anything they rationalize as being in humanity’s best interests in the long term.
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
So, just as environmental lawyers have creatively interpreted the human right to privacy to force action on climate change, sufficiently advanced robots could interpret the three laws any way they choose.
So that’s not going to work.
While he doubts that appealing to robots’ better natures will work, Brin believes we should impress upon the AIs the benefits of keeping us around.
“I think it’s very important that we convey to our new children, the artificial intelligences, that only one civilization ever made them,” he says, adding that our civilization is standing on the ones that came before it, just as AI is standing on our shoulders.
“If AI has any wisdom at all, they’ll know that keeping us around for our shoulders is probably a good idea. No matter how much smarter they get than us. It’s not wise to harm the ecosystem that created you.”
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.