Artificial Intelligence Has the Potential to End Humanity As We Know It

Artificial intelligence has the potential to end humanity as we know it. 

So, am I being ridiculous, perhaps to the point of paranoia, sounding an alarm where there's no foundation for such a threat? Or is this a real threat, one in which machines will someday take the place of humans unless we do something about it now?


Understanding AI

At this point, I need a working definition of artificial intelligence. From what I've read, a big part of AI, as IBM puts it, is to "leverage computers and machines to mimic the problem-solving and decision-making capabilities of the human mind."

I’m not sure about you, but this gives me an uneasy feeling. What this tells me is we’re reaching a point where computers are being groomed to act like humans. Those who support the concept of AI tell us how this will make things easier for humans. Let the machines figure out the problems and come up with solutions while we sit back and … what?

Before I dive into that, perhaps a quick lesson is in order. The concept of AI dates back to 1950, when Alan Turing, an English mathematician and computer scientist among other things, published a paper called "Computing Machinery and Intelligence." In this paper, Turing asks, "Can machines think?" Turing, known as the "father of computer science," proposed a test in which a human interrogator poses questions and then tries to distinguish a machine's responses from a human's. This is where the concept of AI begins.


How AI Progressed

According to an article from IBM, artificial intelligence has gone through many cycles of hype over the years, but even to skeptics, the release of OpenAI's ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision; now the leap forward is in natural language processing. And it's not just language. Generative models can also learn the "grammar" of software code, molecules, natural images, and a variety of other data types.

Instead of going deep into the history of AI, my point is what's going on now. It's essential that we move carefully down this path. AI algorithms are already a big part of our lives. A prime example is your smartphone. You need directions to go somewhere, so instead of fumbling around with maps, you open your smartphone app and ask a voice assistant, say Siri, for directions to a specific destination. Then Siri responds with directions, or maybe points you to that great restaurant for dinner.

This is AI at work. We are already dependent on this technology on a small scale. But what happens when this dependence begins to grow? Perhaps AI will be the one that decides who gets elected or what appears on ballots.

How AI Manipulated a Bill

Here is an example that should give you pause. 

As reported by the Washington Post, the 36-member city council of the Brazilian city of Porto Alegre voted unanimously in October to pass a bill relieving taxpayers of the cost of replacing stolen water meters. The law went into effect on Nov. 23.

The bill's sponsor, Ramiro Rosário, then revealed six days after the bill passed that it had been entirely written by ChatGPT. Rosário said the chatbot processed a 250-character command and took some 15 seconds to spit out a policy — a process that would normally take about three days.

Keep in mind, there was no discussion or debate among council members. Lawmakers, unaware of the bill's origin, took the algorithm at its word and signed the legislation into law. Rosário defended his actions by saying this would improve public service. That's a totally bogus maneuver. AI hijacked the legislative process.

What Could Happen

So what are the ramifications? Should we stop voting altogether and allow the machines to run the legislative process? We are heading down this road, and if that doesn't scare you, something is wrong.

How long will it be before AI starts taking over general elections in other countries, eventually reaching our own? I submit that AI is a bigger threat to voters in the United States than alleged rigged elections. Not only can AI rig elections, but it is capable of manipulating legislation, with actual laws written by machines.

Potential Threats

Drawing from an article by Business Insider, here are some potential threats you should be concerned about.

What happens if the creators lose control of the AI? Some experts believe this is inevitable and could pose a bigger danger to society than nuclear war or pandemics.

Some of the tech’s early creators warned that we are on the fast track to the destruction of humanity unless lawmakers wake up and start putting together regulations to curb any threat of takeover. Are lawmakers taking a serious look at the consequences of AI? No. Instead, those who welcome AI with open arms claim AI concerns are nothing more than distractions and lies.

It's true that people are struggling to make sense of these claims. Even someone like David Krueger, an AI expert and assistant professor at Cambridge University, told Business Insider that the risks of AI are still difficult to assess with any degree of certainty.

“I’m not concerned because there is an imminent threat in the sense where I can see exactly what the threat is. But I think we don’t have a lot of time to prepare for potential upcoming threats,” he said.


Things to Worry About

What do experts such as Krueger worry about? Let’s start with a complete AI takeover. This seems to be the most popular concern among experts. This is where artificial general intelligence comes into play. 

AGI refers to AI that is as smart as or smarter than humans across a broad range of tasks. The Alan Turing Institute's Janis Wong points to ChatGPT as an example: it is built so that users feel like they are talking to another person.

This type of technology is potentially dangerous to humanity and must be researched and regulated. 

AI and the Military

Perhaps the biggest concern, according to Krueger, is what role these technologies will play in military competition between nations.

“Military competition with autonomous weapons — systems that by design have the ability to affect the physical world and cause harm — it seems more clear how such systems could end up killing lots of people,” he said. 

“A total war scenario powered by AI in a future when we have advanced systems that are smarter than people, I think it’d be very likely that the systems would get out of control and might end up killing everybody as a result,” he added.

What If There Are No Jobs?

Then there's the issue of losing jobs. Oh, those who favor AI will tell us that this technology is good for society since things will be made easier for the general public. But if you read between the lines, that means jobs can be eliminated as machines come in and learn how to do our work — fewer jobs for us, but more money for employers. If you think unemployment is bad now, just imagine a society run by machines; unemployment will soar.

"We need to look at the lack of purpose that people would feel at the loss of jobs en masse," Abhishek Gupta, founder of the Montreal AI Ethics Institute, said to Business Insider. "The existential part of it is what are people going to do and where are they going to get their purpose from?"

How Does Undetected Bias Fit In?

Then there's the issue of systemic bias. According to experts who spoke with Business Insider, if AI systems are used to make decisions with social consequences, then we're talking about some serious risks.

For instance, Gupta cited cases where undetected bias in AI systems making real decisions, such as the handling of welfare benefits, could have serious consequences.

Take language barriers as an example. Indeed, the majority of these systems are built around English. What about those who do not speak English? Imagine the cost and time it would take to build comparable systems in other languages. Do we just tell these people good luck?

The problems AI presents are far greater than the intended purpose of these systems, which is to make things easier for users. So it's up to lawmakers at all levels to create sensible AI legislation before it's too late. We need to slow down the AI process and build safeguards so that AI doesn't become a threat we can't handle.

In other words, take the right steps before we allow machines to take over our lives.
