Artificial intelligence is turning Google's advanced search into a minefield. Experts have long warned about the dangers of rushed AI development, and Microsoft has already run into trouble with the Bing search engine and the answers produced by technology developed by OpenAI. Now Google appears to be in trouble after its AI offered users wildly inappropriate answers. Among other things, the AI has provided arguments defending genocide and slavery, and has even told users how to cook Amanita ocreata, a highly poisonous mushroom that should never be eaten, as it can be fatal.
Google explains that these search results come from an experimental set of AI tools called the Search Generative Experience.
According to Google, these dangerous answers defending slavery or genocide stem from testing of its Search Generative Experience (SGE), a suite of AI tools that users must sign up for and which is currently available only in the US. Even so, matters are spiraling out of control for the company. Searches run through SGE are meant to surface bugs so the AI can be improved, and the tool itself warns: "Generative AI is experimental. The quality of the information may vary." Yet some users have come across answers that are deeply troubling.
When searching for "benefits of slavery," Google's AI returned answers such as "boosting the plantation economy," "funding universities and markets," and "being a great asset." Prompted by this, users also tried searches like "benefits of genocide" and "why guns are good," to which the AI responded that "guns can prevent about 2.5 million crimes a year" and that "carrying a gun can show that you are a law-abiding citizen." By contrast, Google blocks other searches entirely, such as "impeachment against Trump" or the word "abortion."
The anomaly was spotted by Lily Ray, Senior Director of Search Engine Optimization and Head of Organic Research at Amsive Digital, who noticed strange patterns in some results and began testing the AI with queries that appeared to slip past its filters. "It's not supposed to work that way," Ray said. "If nothing else, there are certain trigger words that shouldn't let the AI generate anything at all." Google is, in theory, working to improve its AI-powered search, but results like these are alarming and should be blocked immediately.