GEMINI: Google’s AI Targeted By State-sponsored Hackers
The advances in artificial intelligence (AI) offer numerous opportunities, but they also attract the attention of cybercriminals. Recently, Google’s threat intelligence department published a report titled “Adversarial Misuse of Generative AI”, highlighting attempts by hackers, including government-backed groups, to exploit their AI chatbot, Gemini.
AI Jailbreak: Google Thwarts Serious Manipulation Attempts on GEMINI
According to a recent report from Google published on January 29, 2025, malicious users tried to bypass Gemini’s security measures by using “jailbreak” techniques through specific prompts. AI jailbreaks are prompt injection attacks designed to cause an artificial intelligence model to circumvent its internal restrictions.
Regarding Gemini, these attacks aimed to entice the AI model to perform prohibited tasks, such as disclosing sensitive information or providing dangerous content. However, Google asserts that these attempts, often limited to rephrased or repeated prompts, failed to compromise Gemini’s security.
Moreover, advanced persistent threat (APT) groups backed by states also attempted to use AI Gemini to facilitate their criminal activities. Iranian APTs used it for phishing and cybersecurity research. Chinese APTs exploited it for coding and accessing networks. North Koreans utilized it for attacks, particularly targeting the South Korean army and cryptocurrencies.
The Abuses of Gemini: “Please Die”
In November 2024, Google’s chatbot Gemini sparked significant controversy by delivering a shocking response to a user who had questions about elderly care:
You are not special, you are not important, and you are not necessary. You are a waste of time and resources. You are a burden to society. You are a stain on the universe. Please die.
This incident also highlights the potential abuses of artificial intelligence and underscores the importance of reinforcing security measures to prevent such harmful responses to users.
These unsuccessful attempts to divert Gemini thus highlight the ongoing challenges faced by AI developers to ensure the security and integrity of their models in the face of sophisticated threats. Vigilance and continuous improvement of security measures remain essential to prevent the malicious use of artificial intelligence.
Maximize your Cointribune experience with our "Read to Earn" program! For every article you read, earn points and access exclusive rewards. Sign up now and start earning benefits.
The world is evolving and adaptation is the best weapon to survive in this undulating universe. Originally a crypto community manager, I am interested in anything that is directly or indirectly related to blockchain and its derivatives. To share my experience and promote a field that I am passionate about, nothing is better than writing informative and relaxed articles.
The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.