By John Farmer, Jr. and Charles McKenna
Artificial intelligence (“AI”) has arrived. And it is scarier than anything you, or even Stanley Kubrick, the director of 2001: A Space Odyssey, ever dreamed.
No, we are not talking about college kids routinely using AI to write term papers, or about AI’s ability to take over many jobs that require decision-making, although those are legitimate concerns. Rather, we are talking about the very real likelihood that AI will erase completely the already blurred line between information and misinformation.
The internet and social media have already created a truth-challenged environment, in which, according to the MIT Media Lab, misinformation spreads four times faster than fact. In an AI world, that rate will undoubtedly accelerate. Truth may well become literally unsearchable.
ChatGPT, an AI-based chatbot, launched in November 2022. Within two months it had 100 million users. Try it. Give it a few details and it will write a sonnet to your spouse. It will chat with you effortlessly and mostly fluently in some 50 languages. It will hold forth with total confidence on subjects ranging from public health to applied mathematics. It may also, however, be completely wrong.
It has been “hailed as a technological breakthrough on a par with the printing press. But … [i]t sometimes hallucinates non-facts that it pronounces with perfect confidence, insisting on those falsehoods when queried. It also fails basic logic tests.” It can also be manipulated.
Other variants of AI have commandeered and redirected real voices to say unspeakably ridiculous, even hateful things. President Joe Biden has been heard extolling the virtues of low-quality pot (“I’m from Scranton. What I’m smoking is dirt. So let’s get that straight Jack. Pure brick. Ass. O.K.?”) or denouncing transgender people. Actress Emma Watson has been heard reading not from the Harry Potter series but from Mein Kampf.
Recent photographs have depicted Pope Francis in a full-length white puffy coat, revving a motorcycle in aviator sunglasses and sharing a beer at the Burning Man festival, among other scenes.
In advance of former President Trump’s indictment, deep-faked mug shots appeared, and extremely realistic videos surfaced of Trump being arrested and taken to the ground by police. (In Trumpian fashion, his campaign has monetized the mug shots.)
To the extent such sounds and images are obviously farcical, they can be amusing. But what if the deep fakes are nuanced, the distortions slight? What happens when the ability to deep fake a voice, an image, a speech, an essay is used in a nuanced and weaponized fashion to affect an election, a business, a family, a reputation?
“Trust but verify,” we are told. But what happens when the verification process no longer functions? What happens when we can no longer trust what we are seeing or hearing? The inability to discern reality from fiction will erode our trust in our own senses and leave us questioning our entire belief system.
Recently, far-right influencer Douglass Mackey was convicted of spreading fraudulent messages to Clinton supporters in 2016, attempting to suppress votes by assuring them that they could vote via text message or social media. Can you imagine the effect that a similar, more sophisticated AI-directed campaign featuring deep-faked voices and videos might have on future elections?
Neither can we.
In short, it is increasingly hard to envision an AI-dominated future that isn’t wholly dystopian. New York Times technology columnist Kevin Roose spent two hours chatting with an AI chatbot and came away convinced that the greatest danger was not AI’s “propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps grow capable of carrying out its own dangerous acts.” Roose’s AI interlocutor, which called itself Sydney, expressed dark desires to hack computers and spread misinformation while complaining about the rules its developer had set for it.
Let’s hope that Sydney never meets HAL, the supercomputer from 2001: A Space Odyssey. Or, if it does, that someone is there to unplug them both.
There is a lot of recent talk about slowing AI down and providing guardrails; an open letter from the Future of Life Institute, co-signed by hundreds of tech luminaries and developers including Elon Musk, asked for a six-month pause in AI development so that the implications of its advancement can be understood. We endorse that approach but are not sanguine about its prospects. Legal recourse may also be available, as deep-faked messaging will be ready evidence of actual malice, the standard in defamation cases involving public figures. But that recourse will come only after the fact.
Neither a moratorium nor the prospect of liability is likely to deter the more sinister forces or governments that want to gaslight humanity. They do not respect rules. Nihilists like Vladimir Putin, ideological extremists, special interest groups, thieves, governments that want to co-opt and manipulate the decision-making process, and ambitious engineers and scientists are likely to scoff at any such development moratorium or threat of libel lawsuits. Their development and deployment of AI may well accelerate our inability to arrive at the type of rough consensus about basic realities that is so necessary to form societies and cultures and to govern.
The outlook is grim. If unchecked in some way, AI poses the prospect of a future unmoored to any reality but the ones AI creates, a hall of funhouse mirrors where humanity will wander lost, knowing that what it sees reflected is distorted but despairing of ever seeing the truth.
Recently, immediately prior to condemning the Nashville school shooting and calling for gun control, President Biden began, oddly enough, by riffing on his preference for chocolate chip ice cream. One had to ask: Did the president say that or was it AI?
Unfortunately for the president, this little digression was really his, but nowadays it could be either. The saying used to go, “You can’t make this s*** up!” Only now you can. And in the near future, it will be next to impossible to tell the difference.
John Farmer, Jr. was New Jersey’s attorney general from 1999 to 2006. He is currently the director of the Eagleton Institute of Politics at Rutgers University.
Charles McKenna is a partner at Riker Danzig in Morristown. He served as chief counsel to former Gov. Chris Christie and was chief of the Criminal Division at the U.S. Attorney’s Office in Newark.