It’s easy to call warnings about artificial intelligence (AI) “sensationalism.” (So many things today are, after all.) In fact, when I told a “terrified” ex-AI researcher on X yesterday that I was writing a story about the alarm he sounded and had questions for him, another respondent — an author, it turns out — wrote of me, “Typical journo… If it bleeds, it leads.” Perhaps there’s no worse insult than “typical journo,” a status below used-car salesman and personal-injury lawyer and just above politician. (Maybe.) But I think my psyche will survive. Will we, however, survive the coming AI revolution?
No, that question isn’t sensationalistic, at least not according to the man I approached, ex-OpenAI safety researcher Steven Adler. Explaining Monday why he quit his OpenAI position in November, he posted the following quite alarming message on X:
Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point? [Tweet below.]
The “AGI” acronym Adler uses stands for “artificial general intelligence.” This is, IBM’s website explains,
a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software.
In other words, this sounds much like the thinking machines so often portrayed, ominously, in science fiction. “HAL” from 2001: A Space Odyssey comes to mind.
Just Science Fiction?
As for the “alignment” Adler spoke of, that concerns preventing the ominousness. As TechTarget explains:
AI alignment is a field of AI safety research that aims to ensure artificial intelligence systems achieve desired outcomes. AI alignment research keeps AI systems working for humans, no matter how powerful the technology becomes. […]
— Read More: thenewamerican.com