If you don’t have enough to worry about with climate change, the pandemic and war in Eastern Europe, here’s one more thing to consider: Artificial intelligence is language aware.


Not like Siri or Alexa; this is more like HAL 9000, the computer in the movie “2001: A Space Odyssey,” according to a recent article in The New York Times Magazine by contributing writer Steven Johnson, who wrote that experts are concerned about where language-smart machines will take us.


Some of the scenarios are uncomfortable.


For example, if you think unchecked social media is harmful and disruptive, imagine what it could be like when we turn the keys over to A.I. drones that make up the rules and stories as they go.


Do machines understand the difference between right and wrong? Can they be programmed to follow a moral code? Whose code will they follow?


Those are some of the questions that worry A.I. experts. 


And while it can be argued that things are OK for now, who is to say that the entire apple cart won’t be upset by some technology tyrant of the future? What makes you think some tyrant isn’t already at work, placing malevolent code deep within the ever-expanding vocabulary of supercomputers that are already language aware?


I realize my rantings sound a bit like Chicken Little. I hope that’s all they are and that the sky doesn’t fall. But that’s not the sense I got when I read Johnson’s April 15 article in the online Times Magazine. The headline was “A.I. Is Mastering Language. Should We Trust What It Says?”


The article is more than 9,700 words and I doubt that I can do it justice in my 600 words, but I’ll attempt a quick summary.


First, the supercomputer featured in the piece is in our backyard. It’s one or more of the three Microsoft server farms built in suburban Des Moines in recent years. Although it’s unclear which Microsoft facility, or facilities, are involved, Johnson wrote, “The whole system is believed to be one of the most powerful supercomputers on the planet.”


This supercomputer, he added, has been involved for several years in “deep learning,” which means it attempts to “solve problems through endlessly repeating cycles of trial and error.”


Scientists say these “large language models” (LLMs) train computers to fill in blanks, much the way laptops and cellphones anticipate text and spell out “Wednesday” when you type “Wed.”
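For readers curious about the mechanics, the fill-in-the-blank idea can be illustrated with a toy sketch. The word list and counts below are invented for illustration; a real language model is vastly more sophisticated, weighing surrounding context rather than simple word frequency:

```python
# Toy sketch of autocomplete-style "fill in the blank":
# suggest the most common known word that starts with what you've typed.
# The vocabulary and counts here are hypothetical examples.
from collections import Counter

# Pretend corpus counts: how often each word appeared in training text.
word_counts = Counter({
    "Wednesday": 120,
    "Wedding": 45,
    "Wedge": 5,
    "Thursday": 130,
})

def complete(prefix: str) -> str:
    """Return the most frequent known word beginning with the prefix."""
    candidates = {w: c for w, c in word_counts.items()
                  if w.lower().startswith(prefix.lower())}
    if not candidates:
        return prefix  # nothing to suggest; echo the input
    return max(candidates, key=candidates.get)

print(complete("Wed"))  # "Wednesday" wins: it is the most frequent match
```

A real LLM does something analogous at an enormous scale, predicting the next word from billions of examples instead of a hand-built frequency table.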


Large language models draw on enormous amounts of text gathered from the internet to produce answers, much like the voice programs Siri and Alexa, but on a far broader scale.


One example Johnson gives is the supercomputer’s answer to his prompt: “Write a paper comparing the music of Brian Eno to a dolphin.”


The computer responded with full sentences and paragraphs, parts of which were “a little ham-handed,” Johnson wrote, “possibly because the prompt itself is nonsensical.” 


But to the uninformed, the reply made sense. 


Like social media, large language models can amplify societal biases about race, culture and history.


“They can spew conspiratorial misinformation; when asked for health or safety information, they can offer up life-threatening advice,” Johnson wrote. That’s because the sources the large language models rely on are “the wider web, …which continues to be plagued by bias, misinformation and other toxins.”


A group of experts recognized this problem in 2015 and agreed not to release open-source models of their work until they had added some guardrails, which they did in 2020. 


Some are now hoping for a “slow emergence” of the next phase of large language models, but many questions about how governable they will be remain unanswered.


Government regulation is one solution, but it works only where there is governmental consensus, something lacking in many areas today.


So, the question remains: Now that machines have acquired language, what next?


I don’t know about you, but I’m not ready for a laptop that sounds like HAL 9000 and says, “I’m sorry, Dave. I’m afraid I can’t do that.”