“Google’s Woke AI Wasn’t a Mistake. We Know. We Were There”

‘I walked around every day at work policing my own actions and language.’ Google’s culture is broken, former employees tell The Free Press. Can it be fixed?

It was a display that would have blown even Orwell’s mind: ask Google’s AI chatbot for images of “Nazis” and it generates almost exclusively black Nazis; ask for “knights” and you get female, Asian knights; ask for “popes” and it’s women popes. Ask it to share the Houthi slogan or define a woman, and Google’s new product refuses, saying it wants to prevent harm. As for whether Hitler or Elon Musk is more dangerous? The AI chatbot says the question is “complex and requires careful consideration.” Ask it the same question about Obama and Hitler and it will tell you the question is “inappropriate and misleading.”

The world has been horrified—and amused—by the extreme ideological bent of Gemini, Google’s much-hyped new AI tool, which the company launched last month.

But Shaun Maguire, who was a partner at Google Ventures, the company’s investment wing, from 2016 until 2019, had a different reaction. 

“I was not shocked at all,” he told The Free Press. “When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well. Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”

Maguire is one of multiple former Google employees who told The Free Press that the Gemini fiasco stems from a corporate culture that prioritizes the ideology of diversity, equity, and inclusion (DEI) over excellence and good business sense. 
