Saturday, April 5, 2025

Are We Taking A.I. Seriously Enough?

"...if we don't attend to it, the people creating the technology will be single-handedly in charge of how it changes our lives.

Those people are bright, no question. But, without being in any way disrespectful, it's important to say that they are not typical. They have particular skills and affinities, and particular values. In one of the best moments in Patel's book, he asks Sutskever what he plans to do after A.G.I. is invented. Won't he be dissatisfied living in some post-scarcity "retirement home"? "The question of what I'll be doing or others will be doing after AGI is very tricky," Sutskever says. "Where will people find meaning?" He continues:

But that's a question AI could help us with. I imagine we will become more enlightened because we interact with an AGI. It will help us see the world more correctly and become better on the inside as a result of interacting with it. Imagine talking to the best meditation teacher in history. That will be a helpful thing.

Would most people—people who are not computer scientists, and who have not devoted their lives to the creation of A.I.—think that they might find their life's meaning through talking to one? Would most people think that a machine will make them "better on the inside"? It's not that these views are beyond the pale. (They might, crazily, turn out to be right.) But that doesn't mean that the world view behind them should be our North Star as we venture into the next technological age.

The difficulty is that articulating alternative views—views that explain, forcefully, what we want from A.I., and what we don't want—requires serious and broadly humanistic intellectual work, spanning politics, economics, psychology, art, religion. And the time for doing this work is running out. At this point, it's up to us—those of us outside of A.I.—to insert ourselves into the conversation. What do we value in people, and in society? Where do we want A.I. to help us, and when do we want it to keep out? Will we consider A.I. a failure or a success if it replaces schools with screens? What about if it substitutes itself for long-standing institutions—universities, governments, professions? If an A.I. becomes a friend, a confidant, or a lover, is it overstepping boundaries, and why? Perhaps A.I.'s success could be measured by how much it restores balance to our politics and stability to our lives, or by how much it strengthens the institutions that it might otherwise erode. Perhaps its failure could be seen in how much it undermines the value of human minds and human freedom. In any case, to control A.I., we need to debate and assert a new set of human values which, in the past, we haven't had to specify. Otherwise, we'll be leaving the future up to a group of people who mainly want to know if their technology will work, and how fast."

Joshua Rothman

==

‘effective accelerationists’

"Today, legions of people working in the tech ecosystem, and many curious bystanders with a utopian bent, embrace the coming AI revolution with a fervour that borders on the religious. Many now believe that building strong AI is the only viable pathway to a more prosperous planet, or to save us all from global calamity. Some fantasize about coming superintelligences that will sit back, briefly stroke their electronic chins, and then effortlessly figure out how to avert climate change, impose a just world order, and keep us all young and frisky for as long as we want.

In 2022, this radical wing of techno-optimism gave itself a name. Those championing the unfettered march of AI now label themselves 'effective accelerationists'–often using the shorthand e/acc. The nearest they have to a philosophy is described in a manifesto penned by the anonymous Twitter/X users who jump-started the movement, the self-styled Patron Saints of Techno-Optimism. It is quite a read. It starts off, like every good conspiracy theory, by purporting to expose a tissue of lies spread by a darkly powerful group–in this case, those who are afraid of technology, and would seek to regulate it. Technology, they argue, is 'the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential'. And AI specifically is touted as a sort of panacea:

We believe Artificial Intelligence can save lives–if we let it. Medicine, among many other fields, is in the Stone Age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire. We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.

The next 5,000 words are a paean to Friedrich Hayek, the intellectual father of neoliberalism, whose economic philosophy notoriously pushed Margaret Thatcher and Ronald Reagan towards wholesale deregulation in the 1980s (they cheekily sign the document in Hayek's name, just below that of Nietzsche, every rebellious schoolboy's fave philosopher). The accelerationists argue that Hayek's libertarianism should be applied to technology–and AI specifically–allowing the untrammelled pursuit of growth, and leading to 'vitality, expansion of life, increasing knowledge, higher wellbeing'. The manifesto also cites a long list of enemies, including statism, collectivism, socialism, bureaucracy, regulation, de-growth, and the ivory tower–classic libertarian bogeymen. In a crescendoing paragraph headed 'Becoming Technological Supermen', they gush that 'advancing technology is one of the most virtuous things that we can do'.

Of course, it is questionable whether accelerationists are motivated solely by virtue. Many have a personal stake in the success of AI–they work for frenetic new start-ups, own equity in tech multinationals, or have invested heavily in bitcoin. Many are just a little bit too enamoured of Elon Musk…"

— These Strange New Minds: How AI Learned to Talk and What It Means by Christopher Summerfield
https://a.co/15XF7wl
==

Bodies & friends

LLMs are kinds of minds, with the potential to learn and (thus) evolve. They'll probably never be just like us. But they already seem experienced and friendly. Seductively so. That's the concern.
"…the most important reason why AI systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence–they don't have a body, and they don't have any friends. They are not motivated to feel or want like we do, and so they never feel hungry, lonely, or fed up. This lack of humanlike motivation prevents AI systems from displaying fascination or frustration with the world–core drives that kick into gear almost as soon as human infants come kicking and screaming into existence. The minds of LLMs are not like ours. But they are minds, of sorts, nonetheless–strange new minds, quite unlike anything we have encountered before." — These Strange New Minds: How AI Learned to Talk and What It Means by Christopher Summerfield
==

Your A.I. Lover Will Change You

Jaron Lanier, VR pioneer and author of You Are Not a Gadget, says you're still not.

"...Why work on something that you believe to be doomsday technology? We speak as if we are the last and smartest generation of bright, technical humans. We will make the game up for all future humans or the A.I.s that replace us. But, if our design priority is to make A.I. pass as a creature instead of as a tool, are we not deliberately increasing the chances that we will not understand it? Isn't that the core danger?

Most of my friends in the A.I. world are unquestionably sweet and well intentioned. It is common to be at a table of A.I. researchers who devote their days to pursuing better medical outcomes or new materials to improve the energy cycle, and then someone will say something that strikes me as crazy. One idea floating around at A.I. conferences is that parents of human children are infected with a "mind virus" that causes them to be unduly committed to the species. The alternative proposed to avoid such a fate is to wait a short while to have children, because soon it will be possible to have A.I. babies. This is said to be the more ethical path, because A.I. will be crucial to any potential human survival. In other words, explicit allegiance to humans has become effectively antihuman. I have noticed that this position is usually held by young men attempting to delay starting families, and that the argument can fall flat with their human romantic partners..."

https://www.newyorker.com/culture/the-weekend-essay/your-ai-lover-will-change-you
