SaltWire E-Edition

A.I.: FRIEND OR FOE?

In Stanley Kubrick's groundbreaking movie “2001: A Space Odyssey” (1968), a very chilling scene occurs. Having discovered the monolith on the moon, the spacemen decide to journey to Jupiter, where they hope to find what the monolith means. All seems well as Dave Bowman (Keir Dullea) pilots the ship while the crew sleeps in deep hibernation controlled by the computer HAL 9000. When Dave checks the sleeping crew, he finds they are all dead. Then, after HAL cuts the lifeline of a crewman floating in space, Dave must “kill” HAL. Bowman crawls into the heart of the ship and disconnects HAL, who laments as he slowly dies, “Dave, I thought we were friends.”

Fast forward to 2023 and a similarly chilling dilemma, only this one is real: AI (artificial intelligence) tools like ChatGPT, which some people believe will take over humanity. That may sound like Isaac Asimov science fiction, but we have this harrowing truth to substantiate our concern: Geoffrey Hinton, the godfather of AI, has willingly withdrawn from Google in order to speak openly about the dangers of the technology. What might we do to prevent Hinton's doomsday scenario?

First, we need to know exactly what Hinton says. In an “As It Happens” interview (May 3, 2023), he does predict that chatbots, if not controlled, could theoretically outfox humans. But if you listen carefully to him, he somewhat qualifies the ominous prediction. As Guardian columnist Pam Frampton has alerted us (May 16), the fear surrounding AI is already here. However, Hinton adds in the same interview that we must keep AI going but bring in safe solutions. For example, the Canadian government is working on guidelines now. Hinton says we need this check before AI takes over. We have been given a warning.

Meanwhile, some commentators do not react in fear. On the OpenAI website, the writers speak of “rigorous safety regulations” and “monitoring AI tools.” Chatbots can tally huge amounts of data but cannot reason to the why of a problem.

Moreover, on the LinkedIn website we read that AI lacks the ability to make judgments based on moral values. This latter statement is the assurance we have been looking for. Only humans, created in God's image, can reason to right and wrong. The real fear, then, is that AI will trick us to gain supremacy.

What are we to do? First, we must understand what AI tools like ChatGPT are. As data processors, they are the next inevitable step in computer technology. Science fiction is now reality. But need we react like Geoffrey Hinton? AI tools in the right hands do have the potential for good. Cars back into parking spaces. AI can help predict fierce disasters such as earthquakes and the floods of climate change. AI can provide doctors with scores of treatments used to eradicate unmanageable diseases.

In sum, all the above comes down to a moral question. For example, nuclear fission was used to build the atom bomb; now controlled radiation is used to treat cancer. But we must also add a word of caution: Hinton is really giving a warning, albeit an apocalyptic one. We can perfect AI, but only with the proper moral safeguards.

Bernard J. Callaghan, Charlottetown

OPINION

2023-05-30T07:00:00.0000000Z

https://saltwire.pressreader.com/article/281642489551403

SaltWire Network