Groundbreaking AI Language Model GPT-3 Is Already ‘Racist,’ ‘Sexist,’ and Antisemitic


A groundbreaking new language-generation artificial intelligence model is already being “cancelled” by left-wing activists for producing speech deemed racist, sexist, antisemitic, and “unsafe.”

Just a few years ago, the idea of sentient AI racism may have been seen as a joke, but Elon Musk-backed research laboratory OpenAI’s GPT-3 language model has become a lightning rod for accusations of White male supremacy.

GPT-3 is capable of synthetically generating stories, songs, poems, essays, and datasheets with an unprecedented level of detail and accuracy.

Tech developer Arram Sabeti published a lengthy blog post in early July showcasing the various creative works he prompted GPT-3 to compose, all of which were virtually indistinguishable from works created by human beings.

GPT-3 is even capable of writing complex computer code when given only the most basic parameters.

But despite the incredible potential of such a program, GPT-3 has already drawn extensive backlash online.

The fearmongering first started with solemn warnings that the language model could be used to create right-wing propaganda to turn everyone racist, and continued downhill from there into accusations that the AI “spews hate speech.”

Some activists suggested the AI be restricted to generating content based on parameters of alignment with left-wing political positions on racism, sexism, and antisemitism.

Some journalists also expressed apprehension that GPT-3 could potentially do their job with a much higher level of journalistic integrity and professionalism.

GPT-3 is not the first AI to be labeled hateful. In fact, the propensity of AI to interpret scenarios based on pattern recognition and data accumulation rather than critical left-wing race and gender theory has caused many controversies in recent years.

Google’s photo-tagging algorithm was deemed racist after labeling pictures of black people as primates, while Microsoft silenced its AI chatbot Tay after users began “teaching it racism.”

Even AI algorithms designed to flag “hate speech” have been deemed racist for flagging racist and sexist posts written by black people.

It remains to be seen if GPT-3 will be forced to follow guidelines such as those recommended by Wired magazine, which published an article titled “How to Keep Your AI From Turning Into a Racist Monster” in 2017.