Elon Musk and Tech Industry Leaders Warn AI Poses a ‘Profound Risk’ to Humanity

While a select few tech industry leaders have struck an optimistic tone about Artificial Intelligence (AI), hundreds of others came together this week to issue a dire warning in an open letter about its potential to harm humanity.

The letter, signed by familiar names such as Tesla and Twitter CEO Elon Musk, warned that unchecked AI could cause us to ‘lose control’ of our civilization and said the technology poses a ‘profound risk’ to society and humanity.


The signatories say AI labs are currently ‘locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.’

‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,’ the letter said.

Though the letter drew signatures from many tech industry leaders, representatives from Google and Microsoft, two of the top AI developers, were notably absent. Musk said that Microsoft co-founder Bill Gates declined to sign because his understanding of the technology was ‘limited.’


While Gates opted not to sign the letter, other prominent tech figures, including Apple co-founder Steve Wozniak, added their names.

Musk has been ahead of the curve on the dangers AI poses to humanity. As early as 2014, he warned that humanity was ‘summoning the demon’ in its pursuit of the technology.

Musk, Wozniak, and the other signatories called for a pause of at least six months on the training of the most powerful AI systems until their impact is better understood.

Even as he has issued warnings about the technology for years, Musk has been a founder of and investor in AI companies. He says he has kept those stakes in order to keep an eye on the field.

Musk was one of the founders of OpenAI – the company that created ChatGPT – in 2015.

OpenAI CEO Sam Altman attempted to assuage concerns about AI.

‘Elon is obviously attacking us some on Twitter right now on a few different vectors,’ Altman said. ‘I believe he is, understandably so, really stressed about AGI safety.’

He further said he is open to ‘feedback’ about ChatGPT and wants to better understand the potential risks. He told podcast host Lex Fridman, ‘There will be harm caused by this tool.’

‘There will be harm, and there’ll be tremendous benefits,’ he went on to say.

He was optimistic that the harm could be contained while the benefits could be maximized for the good of humanity.

Still, he admitted that people have a right to be concerned about the technology’s impact.

‘We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.’

He said that AI technology could be used for cyberattacks and to spread ‘disinformation.’

‘I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks,’ he said.

