What Form of Artificial Intelligence is More Dangerous to Our Future?

Frank Islam & Ed Crego
Feb 3, 2023
Image Credits: Tom de Boor, Adobe, et al

What is more dangerous to the future of American democracy and the economy: the artificial intelligence of computers and machines that have been programmed to think like humans, or the “artificial intelligence” of us human beings, who have been programmed by ourselves and others to believe with confidence that our information and insights are absolutely correct? We believe it is definitely the latter rather than the former.

That was our opinion, stated in a blog post that we published in June 2017.

Given the developments over the past five years, do we feel the same way now? In this posting, we reflect on the current context and provide our answer to this question.

Let’s begin by examining the current status of artificial intelligence (AI). In general, there has been enormous progress in the AI field over the past five years. The emergence of the AI programs Cicero and ChatGPT in the last quarter of 2022 brought AI to the forefront of media and public attention at the beginning of 2023.

Cicero is an AI program developed by a research team at Meta (formerly Facebook) to play the board game Diplomacy. Diplomacy is a strategy game that requires its players to negotiate with one another, form alliances, possibly break those alliances, and develop and implement battle plans to conquer a stylized version of early twentieth-century Europe.

Cicero was entered into an online Diplomacy gaming league. It performed quite well in the 40 games it played, ranking in the top 10% of all competitors. Those playing the game assumed that Cicero was human, given its ability to communicate with them.

Cicero’s performance, and its ability to communicate with and defeat humans in a game that requires both natural language skills and strategic reasoning, was seen as a leap forward in the AI skill set. It also has negative potential.

As Pranshu Verma writes in his Washington Post piece on the introduction of Cicero, “…its ability to withhold information, think multiple steps ahead of opponents and outsmart human competitors sparks broader concerns. This type of technology could be used to concoct smarter scams that extort people or create more convincing deep fakes.”

If Cicero was a leap forward for AI game-playing, ChatGPT was a launch into human space. ChatGPT (Chat) is a conversational language model developed by OpenAI which, in response to requests, can do things such as write essays, create poems, and solve mathematical problems. Chat already has more than 1 million users. It has provoked a reaction across the intelligentsia, ranging from the major media — think New York Times and Washington Post — to the Brookings Institution, even inspiring a Garry Trudeau Doonesbury cartoon.

At this point, ChatGPT does everything it has been equipped to do relatively well. In an opinion piece, the Washington Post editorial board states, “What’s new is how convincingly human the outputs are superficially…Probe a little deeper, though, and even the most fluent answers sometimes suffer from collapses of logic or contain complete fabrications.”

It should be emphasized that this version of ChatGPT is merely model 1.0. Future versions will undoubtedly be much more adept and adaptable. Because of this, Chat, and language bots like it, present numerous opportunities and threats.

For example, an opportunity would be to use Chat and similar models to perform mundane or rote tasks, freeing up humans to do work requiring ideation and inspiration. A threat is that Chat could replace humans in many of the language-related tasks they currently perform, eliminating jobs in the process. As Karen Attiah of the Washington Post points out, both CNET and the Associated Press already use AI to produce drafts or data for certain articles.

The downside to AI could be far greater than the loss of jobs. It could be the loss of lives — tens of thousands of lives.

That’s what Henry Kissinger warns of in The Age of AI: And Our Human Future, the book he co-authored with Eric Schmidt, former CEO of Google, and Daniel Huttenlocher, an MIT computer scientist. David Ignatius reports in an opinion essay, written after Kissinger spoke by video to a National Cathedral forum titled Man, Machine, and God: “The former secretary of state cautioned that AI systems could transform warfare just as they have chess or other games of strategy — because they are capable of making moves that no human would consider but that have devastatingly effective consequences.”

Considering the din of alarm bells ringing, it might appear to be time for us to wake up and reverse our opinion: to conclude that the real threat to our future is the artificial intelligence of the AI bots, not that of those of us in human bodies.

We still think that’s not the case. In support of our perspective, consider the following.

In February 2022, Russia launched an unprovoked and completely irresponsible attack on Ukraine. The war began mainly in a traditional manner, with fighting between combat troops. But in the nearly one year since, it has degenerated into an endless barrage of missiles aimed at destroying apartment blocks and other buildings and killing civilians in cities across Ukraine. This senseless and inhumane violence has been conducted at the dictates of a person, not a programmed machine.

A much more subtle form of violence has been perpetrated in the United States since Donald Trump lost the presidential election in November 2020: the war on truth and democracy being waged by the election deniers. This unpatriotic crusade, supported by no credible evidence, contributed to the January 6 attack on the U.S. Capitol, continued into the midterm elections across the country, and persists today because of those who have enlisted in the cult that supports it.

Guns are another source of violence in our country. Through the decades, guns, including semiautomatic weapons, have been used for mass killings in malls, stores, schools, and other venues. Because of the misappropriation of the Second Amendment by gun rights advocates, the typical response has been to cite mental health as the root cause and to arm up the sites under attack rather than to call for gun control. The irrationality of this in a nation with more than one gun for every person, many of them serving no useful purpose, is confounding.

Even more confounding are the tens, perhaps hundreds, of thousands of U.S. citizens who died, and the millions who put their lives at risk, rather than get vaccinated against COVID-19. They did this despite the fact that the data and all reliable sources attested to the efficacy of the vaccines. This resistance to vaccination is not limited to COVID. Some of today’s parents are refusing to let their school-age children receive the shots that schoolchildren have gotten for decades.

This hesitancy is just another example of the tribalism that has become prevalent in the U.S. Recent polls and surveys, for example, show that trust in the news is, in general, very low. There is no universal common denominator. Instead, we each find our own source for what we consider reliable information. For some in the older generation, it is the traditional news media. For many in the younger generation, it is social media. For a distinct segment of the population, it is conspiracy theorists and fake news fabricators.

Those examples are just the tip of the human “artificial intelligence” iceberg. There are millions more. That is why we must conclude, again, that we humans are more dangerous at this point in time than the bots.

Think of it this way: in 2023, humans are writing the algorithms and doing the programming that build the AI models and enable them to do what they do. In sci-fi stories and movies, bots go out of control and try to terrorize or destroy their enemies and/or the world. We may someday reach the point at which an AI bot builds another AI bot to do devastating things. But for today, we humans, with our “bounded rationality,” are still pulling that trigger.

As we wrote earlier, such bounded rationality is not limited to political or economic issues; it is transcendent. False knowledge and faulty logic have been part of the human condition from the time there have been humans. (For more on the sociology and psychology of human beliefs and actions, refer to this blog.)

Given this, what can we as humans do to ensure that the future of artificial intelligence is positive and constructive rather than negative and destructive?

We should begin by recognizing and celebrating the fact that even though we are all imperfect, we are not robots. We can think and act independently and creatively. If we choose to do so, we can distinguish fact from fiction and truth from lies. We can feel compassion for our fellow citizens. And we can build trust with others through collaboration. Those characteristics are all distinguishing and differentiating for us as humans.

We can use them, and our social consciousness, to promote the common good and progress toward a more perfect union. We can do so by providing input and advocating for an AI agenda that makes this a stronger, better, and fairer nation for all.

This is a critical assignment. The editorial board of the Washington Post stresses its importance in the concluding paragraph of its opinion essay on the ChatGPT bot, stating:

Humans today are still in control. We have the ability to decide what systems to build and to shape the future in which we want to live. Ultimately, unleashing the full potential of the technology that appears tantalizingly close to our grasp comes down to this: What do we as a species hope to gain from artificial intelligence, and — perhaps more important — what are we willing to give up?

Gary Marcus, an AI entrepreneur and co-author of Rebooting AI: Building Artificial Intelligence We Can Trust, published in 2019, thinks we are not “tantalizingly close” to realizing the full potential of AI. In fact, he believes we are still a long way from it.

In a podcast with Ezra Klein of the New York Times on January 6 of this year, Marcus emphasized that while ChatGPT and similar models might be seen as breakthroughs, they are really just cut-and-paste or “pastiche” systems that contain no real knowledge and cannot tell the difference between truth and bullshit.

As a result, they can generate mounds of misleading information that sounds authentic. During the interview, Klein, reading from a piece Marcus wrote, quotes him as saying, “It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.”
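To make the “pastiche” criticism concrete, here is a minimal, hypothetical sketch in Python. It is a toy Markov-chain generator, far simpler than the neural networks behind ChatGPT, but it illustrates the point Marcus is making: a system can stitch together fluent-sounding sentences purely from word-sequence statistics, with no representation of whether what it says is true.

```python
import random
from collections import defaultdict

# Toy "pastiche" generator: learn which word follows which in some
# training text, then stitch new sentences together from those
# statistics alone. (Illustrative only; ChatGPT uses neural networks,
# but it too predicts the next word rather than consulting facts.)

training_text = (
    "the vaccine is safe and effective "
    "the vaccine is dangerous and untested "
    "the election was free and fair "
    "the election was rigged and stolen"
)

# Record every word that follows each word in the training text.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word: str, length: int = 5) -> str:
    """Build a fluent-looking sentence by repeatedly sampling a next
    word from whatever followed the current word during training."""
    output = [start_word]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate("the"))
# A possible output: "the election was free and untested" -- grammatical,
# plausible-sounding, and produced with no notion of truth at all.
```

The sketch cannot distinguish the true statements in its training text from the false ones; it only knows which words tend to follow which. That, in caricature, is Marcus’s worry about vastly scaled-up versions of the same idea.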

Marcus’s prescription for systems that can be trusted is that they move much closer to genuine human intelligence. In a Psychology Today article based on an interview with Marcus shortly after his book was published, Cami Rosso writes that Marcus outlined the need for AI to move toward deep understanding and causation, as opposed to just identifying statistical correlations and “first-order approximations” for model-building.

Marcus explained, “The positive part is, if we made smarter AI, we could be solving all kinds of medical and scientific problems. It could be an enormously useful set of techniques, I think, if we took some of the human cognitive sciences and made it a bit more sophisticated.”

Even with this improvement, Rosso says that “Marcus thinks that machines will be able to do ‘pretty much every cognitive thing that people can do, except experience emotions.’”

Marcus’s last quote took us back in time to the 1939 movie The Wizard of Oz, in which a lost Kansas teen named Dorothy, played by Judy Garland, meets a scarecrow who has no brain, a tin man who has no heart, and a cowardly lion who has no nerve.

In 2023, AI appears to be getting a little more nerve, and some more brain power, but if Marcus is right, it will never have a heart. That is why, as flawed as we may be, we humans need to stay in command and control in shaping the AI agenda.

Our final thought in this regard comes from the case of Damar Hamlin, the Buffalo Bills safety who suffered cardiac arrest after a tackle he made on wide receiver Tee Higgins of the Cincinnati Bengals during a Monday night football game on January 2. As he lay on the field, Hamlin received emergency medical assistance from first responders, who gave him CPR, automated external defibrillation, and other treatments and then rushed him by ambulance to a hospital, while players from both teams looked on in shock and grief, tears in their eyes, many kneeling to pray.

Miraculously, Hamlin has recovered, is out of the hospital, and is receiving follow-up care at home and at the Bills’ training facility; he may even be able to return to the playing field someday. Just as miraculously, the good and caring citizens of this nation have now given close to $9 million to Damar Hamlin’s Chasing M’s Foundation Charitable Fund.

No computer, bot, or AI model would have thought of, or been able to do, all of this. It was our human brains, bodies, hearts, and souls at work together, demonstrating unity of purpose and concern for another human being. It gives us reason to believe that, in spite of the dangers our human artificial intelligence presents, there is hope for our capacity to overcome those limitations and shape an AI agenda that is beneficial for our democracy and the world.

Originally published by the Frank Islam Institute for 21st Century Citizenship. For more information on what 21st century citizenship entails, and to see exemplars from around the world, please visit our website.


Frank Islam & Ed Crego

Frank Islam is an entrepreneur, investor and philanthropist. Ed Crego is a management consultant. Both are leaders of the 21st century citizenship movement.