Tim O’Reilly forwarded a wonderful article about the OpenAI soap opera to me: Matt Levine’s “Money Stuff: Who Controls OpenAI.” I’ll skip most of it, but one thing caught my eye. Towards the end, Levine writes about Elon Musk’s version of Nick Bostrom’s AI that decides to turn the world into paperclips:

[Elon] Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields.



That gets me, but not in the way you think. It’s personally poignant, for reasons entirely different from the AI-doomerism cults that Musk, Bostrom, and others are propagating.

When I was a graduate student at Stanford, I was driving around with a friend through the endless maze of parking lots and strip malls in that nondescript part of Silicon Valley where Sunnyvale, Santa Clara, and Cupertino come together. My friend pointed out the window and said, “That’s where my father’s farm was.” I asked what his father grew; it was very difficult to imagine a farm at that location. He grew strawberries. And what happened to the farm? His father lost it when he was put into a World War II internment camp for Japanese Americans. A real estate investor ended up with it. My friend’s father eventually committed suicide. The farm became a parking lot.

This gets me back to an argument that I’ve made in older Radar articles: our fears of AI are really fears of ourselves, fears that AI will act as badly as humans have repeatedly acted. We don’t need AI to turn the world into strawberries any more than we need it to turn the world into parking lots. We’re already turning the world into parking lots, and doing so without regard to the human cost. We’re already spewing CO2 at a rate that will soon make the world uninhabitable for all but the few who can insulate themselves from the consequences. If we’re going to solve these problems, it won’t be through technology. It’s through finding better humans than Elon and, I fear, Sam Altman. We don’t have a chance of solving the AI problem if we can’t solve the human problem. And if we don’t solve the human problem, the AI problem is irrelevant.



