The Pope and AI By Howard Bloom

In 1968 Pope Paul VI established the World Day of Peace, a New Year’s Day observance on which the pope delivers a message “reflecting the signs of our times.” On Wednesday, August 9th, the Vatican put out an announcement about the message the pope will give for next year’s Peace Day, coming up on January 1st, less than five months from now.  The theme the pope will zero in on is “Artificial Intelligence and Peace.”

The Holy Father’s concern is “that a logic of violence and discrimination does not take root in the production and use of” artificial intelligence.  And, Pope Francis says, artificial intelligence’s rapid advance calls for “ethical reflection.”

The Pope is not alone in his concern about artificial intelligence.  In March 2018, Elon Musk said, “Mark my words — A.I. is far more dangerous than nukes.”  And Musk knows his artificial intelligence. Musk’s Tesla cars use artificial intelligence for Autopilot. Musk’s SpaceX uses artificial intelligence to optimize the performance of its upcoming 100-passenger spaceship, the Starship. And Musk has founded his own artificial intelligence company, xAI.

Meanwhile, the New York Times reported on May 30, two months ago, that “Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.”

What are these AI founding fathers talking about?  Among other things, they’re concerned about Artificial Intelligence’s weaponization.

Last Friday, in the Ukraine war, where military innovation is happening at the fastest pace on the planet, an autonomous speed boat raced into Russia’s biggest port, the Black Sea port of Novorossiysk, home port of Russia’s Black Sea fleet.  The boat, capable of zipping along at 50 miles an hour, traveling 500 miles, and carrying a thousand pounds of explosives without a human captain, slammed into the 360-foot-long amphibious landing ship Olenegorsky Gornyak, which had 100 Russian servicemen aboard.  The self-driving suicide boat rendered the landing ship useless, flooding one of its compartments and temporarily closing down the port.

Said Russian TV personality Sergey Mardan, the “attack by Ukrainian marine drones on Novorossiysk is simply a quantum leap in the geography of the conflict.”

But this was just a preview of the future uses of artificial intelligence to kill. Russia, says the New York Times, has been testing autonomous tank-like vehicles. The U.S. Defense Department is experimenting with AI-piloted F-16 fighter jets that can fly in autonomous swarms. And, says the New York Times, the Air Force has a hush-hush program called “Next Generation Air Dominance,” under which 200 piloted planes will fly at the center of a swarm of roughly 1,000 drones.

Meanwhile Russia is working on “electronic warfare systems that can target drone operators,” who can then be taken out with artillery shells.  Beyond that, says Douglas Shaw, senior advisor at the Nuclear Threat Initiative, “I can easily imagine a future in which drones outnumber people in the armed forces.”  James Johnson of the University of Aberdeen, in his book AI and the Bomb, writes that we are risking war “turbo-charged by AI-enabled bots, deepfakes, and false-flag operations.”

Even worse would be if the autonomous AIs we train to kill our enemies turned against us.  But the scariest prospect would be allowing artificial intelligence to decide when to launch nuclear weapons.

Let me get personal for a second. I write books about history and science.  Pinning down the facts in each paragraph is often a research challenge.  I use artificial intelligence engines from Google and Microsoft to help me dig down for the details.  But these Artificial Intelligences are seldom up to the task.

Instead, they make up what they apparently think I want to hear.  And when I ask them for the research papers and books from which they pulled their information, they make up highly convincing quotes from works with highly convincing titles by authors with highly convincing credentials.  They even give me exact page numbers.

But when I fact check, the books don’t exist. The authors often don’t exist either.  And the quotes aren’t quotes from any human who has ever lived.

So what’s the value of using Bard, ChatGPT, and Bing’s AI at all?  They let me put my questions in plain English.  And that reassures me. They give me a sense of control over the chaos of everything that’s ever been written.  They are fun to use.  They get me dreaming of how they’ll be able to help out when they are perfected.  And, most important, the AIs’ phony baloney often gives me search terms I can use to do real research.

In other words,  AIs get things wrong.  They are famous for concocting fantasies that the experts call hallucinations.  But an AI having hallucinations and possessing control of nuclear warheads could end humanity.

Concludes Eric Schmidt, the former Google chairman who served as chairman of the Defense Innovation Board for four years: “The industry isn’t stupid here, and you are already seeing efforts to self-regulate” so that disasters don’t happen.

But the Pope is onto something.

 

References:

https://press.vatican.va/content/salastampa/en/bollettino/pubblico/2023/08/08/230808c.html

https://www.vaticannews.va/en/church/news/2023-08/message-world-day-peace-artificial-intelligence-pope-francis.html

https://www.cnn.com/2023/08/09/tech/pope-francis-ai

https://www.nytimes.com/2023/05/05/us/politics/ai-military-war-nuclear-weapons-russia-china.html

https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

https://www.cnn.com/2023/08/04/europe/ukraine-sea-drone-russian-warship-black-sea-intl/index.html

https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html

https://www.pbs.org/newshour/show/how-militaries-are-using-artificial-intelligence-on-and-off-the-battlefield

https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/

______

Howard Bloom of the Howard Bloom Institute has been called the Einstein, Newton, and Freud of the 21st century by Britain’s Channel 4 TV.  One of his seven books, Global Brain, was the subject of a symposium thrown by the Office of the Secretary of Defense, with representatives from the State Department, the Energy Department, DARPA, IBM, and MIT.  His work has been published in The Washington Post, The Wall Street Journal, Wired, Psychology Today, and Scientific American.  He does news commentary at 1:06 am Eastern Time every Wednesday night on 545 radio stations on Coast to Coast AM.  For more, see http://howardbloom.institute.
