
Elon Musk’s Grok chatbot under fire for bigoted, misleading posts


Billionaire Elon Musk's artificial intelligence chatbot Grok, developed by his company xAI, has drawn global attention for using profanity, insults, and hate speech and for spreading disinformation on X, sparking renewed debate over the reliability of AI systems and the dangers of placing blind trust in them.

Sebnem Ozdemir, a board member of the Artificial Intelligence Policies Association (AIPA) in Türkiye, told Anadolu that AI outputs must be verified like any other source of information.

“Even person-to-person information needs to be verified, so putting blind faith in AI is a very unrealistic approach, as the machine is ultimately fed by a source,” she said.

“Just as we don’t believe everything we read in the digital world without verifying it, we should also not forget that AI can learn something from an incorrect source.”

Ozdemir warned that while AI systems often project confidence, their outputs reflect the quality and biases of the data they were trained on.

“The human ability to manipulate, to differently convey what one hears for their own benefit, is a well-known thing; humans do this with intention, but AI doesn’t, as ultimately, AI is a machine that learns from the resources provided,” she said.

She compared AI systems to children who learn what they are taught, stressing that trust in AI should depend on transparency about the data sources used.

“AI can be wrong or biased, and it can be used as a weapon to destroy one’s reputation or manipulate the masses,” she said, referring to Grok’s vulgar and insulting comments posted on X.

Ozdemir also said that rapid AI development is outpacing efforts to regulate it: “Is it possible to control AI? The answer is no, as it isn’t very feasible to think we can control something whose IQ level is advancing this rapidly.”

“We must simply accept it as a separate entity and find the right way to reach an understanding with it, to communicate with it, and to nurture it.”

She cited Microsoft’s 2016 experiment with the Tay chatbot, which learned racist and genocidal content from users on social media and, within 24 hours, began publishing offensive posts.

“Tay did not come up with this stuff on its own but by learning from people; we shouldn’t fear AI itself but the people who act unethically,” she said.

Source: www.anews.com.tr
