A.I. isn't intelligent

In March, after San Francisco start-up OpenAI released a new version of ChatGPT, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because artificial intelligence (A.I.) technologies pose “profound risks to society and humanity.” Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products.

One person who didn’t sign either of those letters was Dr. Geoffrey Hinton, often called “the Godfather of A.I.” However, in late April, Dr. Hinton officially joined a growing chorus of critics who say that the tech industry’s biggest companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence. He has said that he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he could speak freely about the risks of A.I. A part of him, he has said, now regrets his life’s work.

Generative A.I. can already be a tool for misinformation. Somewhere down the line, industry leaders say, it could be a risk to humanity. “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

One of the things that has raised alarm among industry leaders is the astounding speed with which A.I. systems have developed. Progress that until recently was believed to require a half century or more has been accomplished in less than five years. “Look at how it was five years ago and how it is now,” Dr. Hinton said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.” His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

“The idea that this stuff could actually get smarter than people - a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” He is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. As individuals and companies allow A.I. systems not only to generate their own computer code but also to run that code on their own, truly autonomous weapons become a real possibility.

I am not an engineer. I don’t think like an engineer. But as an educator, I can freely question these emerging technologies. Part of the problem is that as good as artificial intelligence is, it is not at all like human intelligence. It is good at a particular type of thinking. And it shouldn’t surprise us that what artificial intelligence does is logical-mathematical thinking. That is the type of thinking required for the development of computers and related technologies. I have often told the story of a young man who struggled with learning disabilities. He would be diagnosed as being somewhere on the autism spectrum these days. In his time he had the label “Asperger’s Syndrome.” After struggling to graduate from high school and finally obtaining an engineering degree, he went to work for a technology company. His parents, inquiring about his well-being, asked about his new job. He reported, “I love it. Everyone here has Asperger’s!” While that observation is not technically correct, it is true that those attracted to work in the industry do have similar patterns of thinking.

Human society, however, doesn’t operate from only one kind of intelligence. While computers may be becoming more and more competent at logical-mathematical intelligence, we writers who use computers understand that they are less competent at verbal-linguistic intelligence. It isn’t just that we are frustrated by the spelling and grammar checkers in our word processing programs. Computers are not good at understanding the order and meaning of words, especially oral language. They are not good at understanding the sociocultural nuances of language, including idioms, plays on words, and linguistically-based humor. Consider the artificial voices of our GPS units, robot calls, and other technologies. The oral language of computers is not sophisticated.

Technology has become advanced at the manipulation of visual images and designs, but it is not “art smart.” Computers can recognize patterns of shapes and colors in the environment. They can work jigsaw puzzles (or at least computer-simulated jigsaw puzzles), but they are not good at forming mental images or at imagination. Computer-generated art tends to be repetitive and boring.

And computers suck at intrapersonal intelligence. My computer spell checker thinks that “intrapersonal” is the same as “interpersonal” and tries to replace my spelling, demonstrating its lack of verbal-linguistic nuance. Self-reflection is not a skill of computers. It shouldn’t surprise us, as self-reflection is not a strength of the mathematical-logical thinkers who have led the development of computer technologies. To put it simply, computers do not have inner feelings, values, or beliefs. Human beings, however, do, making us in general far more aware of our thinking processes than the machines. While I’m at it, computers are also not good at interpersonal intelligence. They do not work and relate to humans as parts of a team. They are incapable of understanding other points of view. Computers (along with many computer engineers) cannot demonstrate sensitivity to the feelings and ideas of others. Don’t expect them to provide leadership in conflict resolution, mediation, or finding compromise.

I could go on and on. While computers, like GPS units, can be useful in locating places, they do not exhibit abilities in the full range of recognition, appreciation, and understanding of the natural environment. They can be used to imitate music and rhythm, but they lack the sophistication and teamwork of a symphony orchestra. They can reproduce melody and rhythmic patterns, but they are not sophisticated with the whole realm of sound, tones, beats, and vibrational patterns.

I share the caution experts are voicing about artificial intelligence. My caution comes from the fact that it is not true intelligence. It is, at best, capable of partial intelligence. I don’t mind using a computer, but I won’t be falling in love with one. No matter how good artificial intelligence becomes, it cannot replace human relationships.

In the end, love is stronger than fear. If we fear artificial intelligence, we must continue to learn to love. And that, friends, is a capacity that computers do not possess.