
Credit: Edgar Cervantes / Android Authority
TL;DR

  • Google report claims one campaign sent over 100,000 prompts to Gemini in an attempt to clone the model.
  • Attackers tried to coax Gemini into revealing more details about its internal reasoning abilities.
  • Google says it detected the behavior, blocked associated accounts, and strengthened safeguards against misuse.

Copying a successful product is a practice as old as tools and technology themselves, but chatbots are a special case. Competitors can’t pull them apart, but they can ask the AI as many questions as they like in an attempt to figure out how it works. According to a new report from Google, that’s exactly how some actors have been trying to clone Gemini. In one case, Google says a single campaign sent more than 100,000 prompts to the chatbot, in what it describes as a large-scale model-extraction attempt.

The findings come from Google’s latest Threat Intelligence Group report (via NBC News), which outlines a rise in so-called “distillation” attacks. In simple terms, that means repeatedly querying a model to study how it responds, then using those answers to train a competing system. Google says this activity violates its terms of service and amounts to intellectual property theft, even though the attackers are using legitimate API access rather than breaking into its systems.
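For readers unfamiliar with the technique, the mechanics of a distillation attack are simple: harvest prompt/response pairs from the target model at scale, then use them as supervised training data for a copycat. Here is a minimal, hypothetical sketch of the data-collection step; the target API is mocked as a stand-in `query_target_model` function, and no real service is involved:

```python
# Hypothetical sketch of distillation-style data collection.
# The "target model" is mocked locally; a real attacker would call a
# chatbot API here, which is exactly the terms-of-service violation
# Google's report describes.

def query_target_model(prompt: str) -> str:
    """Stand-in for a chatbot API call; returns canned text."""
    return f"Model answer to: {prompt}"

def collect_distillation_pairs(prompts):
    """Record (prompt, response) pairs by querying the target repeatedly."""
    dataset = []
    for prompt in prompts:
        response = query_target_model(prompt)
        dataset.append({"prompt": prompt, "response": response})
    return dataset

# At the scale Google describes, the prompt list would run to 100,000+
# entries; the collected pairs become fine-tuning data for a clone.
pairs = collect_distillation_pairs([
    "Explain photosynthesis.",
    "Summarize the causes of World War I.",
])
print(len(pairs))
```

The point of the sketch is that nothing here requires breaking into anyone's systems: every call looks like ordinary, legitimate API usage, which is why providers rely on volume and pattern detection to spot it.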