  • Google has recently revised how it instructs contractors to evaluate AI responses.
  • Reviewers are now less able to skip prompts on topics where they lack specific expertise.
  • Google defends its interest in this data, pointing to the wide array of factors that shape the feedback it’s looking for.

Whenever we’re talking about controversies surrounding AI, the “human element” often appears as a counter-argument. Worried about AI taking your job? Well, someone’s still got to code the AI, administer the dataset that trains it, and analyze its output to make sure it’s not spouting complete nonsense, right? The problem is, that human oversight only goes as far as the companies behind these AI models are interested in taking it, and a new report raises some concerning questions about where that line is for Google and Gemini.

Google outsources some of the work on improving Gemini to companies like GlobalLogic, as outlined by TechCrunch. One of those tasks asks reviewers to evaluate the quality of Gemini responses, and historically, the guidelines have directed reviewers to skip questions outside their knowledge base: “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task.”