Serious security issues in Perplexity AI’s Android app exposed API keys and put user conversations at risk

Researchers found multiple vulnerabilities in Perplexity AI’s Android app, including accessible API keys that could reportedly let attackers impersonate users and access conversation data. The original roundup also noted that similar Android security issues had been found in DeepSeek’s app only a couple of months earlier, which makes the pattern more concerning for anyone using AI assistants on mobile devices.

Key Details

  • Exposed API keys were reportedly recoverable from the Android app.
  • The issues could enable impersonation and unauthorized access to user conversations.
  • The write-up positioned the app as unsafe for Android users in its current state.

Next Steps

  • Avoid using the affected Android app for sensitive work until fixes are confirmed.
  • Treat consumer AI mobile apps as high-risk unless their mobile security posture has been independently reviewed.
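One way teams audit their own builds for the class of flaw reported here is to scan decompiled app resources for strings that look like hardcoded credentials. The sketch below is a minimal, hedged illustration of that idea; the regexes are generic examples of common key formats (the actual keys exposed in the Perplexity app were not published), and the sample string is invented for demonstration.

```python
import re

# Illustrative patterns only -- real audits use broader, vendor-specific rules.
KEY_PATTERNS = [
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),      # Google-style API key format
    re.compile(r"sk-[0-9A-Za-z]{20,}"),          # "sk-" prefixed secret key format
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # generic assignment
]

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like hardcoded credentials."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Hypothetical decompiled config snippet with a fake embedded key.
sample = 'config = {"api_key": "sk-abcdefghijklmnopqrstuvwx"}'
print(find_candidate_keys(sample))
```

Running such a scan over strings extracted from an APK (e.g. decompiled resources and DEX output) flags candidates for manual review; the durable fix is keeping secrets server-side behind an authenticated proxy rather than shipping them in the client at all.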

Read more

Spam bombing is being used to soften targets before a fake IT helper steps in

Attackers are flooding targets with huge volumes of legitimate-looking emails from reputable marketing platforms and then using the confusion as an opening for social engineering. Once the victim is overwhelmed, someone posing as a helpful IT person can contact them, offer support, and use that moment to steal credentials or gain access.

Key Details

  • The email flood can involve hundreds of messages within minutes.
  • The messages appear harmless because they come from normal-looking newsletters, signups, and promotions.
  • The actual compromise happens when the attacker follows up with a support-style pretext.

Next Steps

  • Add spam bombing as a concrete scenario in security awareness training and help-desk playbooks.
  • Make sure employees know that urgent support contacts during an inbox flood still need normal identity verification.
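Because the flood itself is the tell, help desks can watch for it directly. The sketch below is a minimal sliding-window burst detector, assuming arbitrary example numbers (100 messages in 10 minutes) rather than any vendor-recommended threshold; real deployments would tune these against baseline mail volume.

```python
from collections import deque
from datetime import datetime, timedelta

def make_burst_detector(threshold: int = 100,
                        window: timedelta = timedelta(minutes=10)):
    """Return a callable that flags a mailbox once message arrivals
    within `window` exceed `threshold`. Illustrative numbers only."""
    arrivals: deque[datetime] = deque()

    def on_message(ts: datetime) -> bool:
        arrivals.append(ts)
        # Drop arrivals that have aged out of the sliding window.
        while arrivals and ts - arrivals[0] > window:
            arrivals.popleft()
        return len(arrivals) > threshold

    return on_message

# Usage: simulate 150 newsletter signups landing two seconds apart.
detect = make_burst_detector(threshold=100, window=timedelta(minutes=10))
start = datetime(2025, 4, 7, 9, 0)
flagged = any(detect(start + timedelta(seconds=2 * i)) for i in range(150))
print(flagged)  # the simulated flood crosses the threshold
```

A flag like this is a cue to warn the affected user that any "support" contact arriving during the flood should go through normal identity verification, not a reason to auto-block mail.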

Read more

IKEA’s Eastern Europe operator says a ransomware incident cost $23 million even though no ransom was paid

The ransomware incident hit in December 2024, and the operator reportedly spent months recovering, pushing back new attack attempts, and coordinating with an external cybersecurity service provider. The financial impact reached $23 million despite no ransom payment, a useful reminder that operational disruption and recovery costs can dwarf the ransom decision itself.

Key Details

  • The incident reportedly began in December 2024.
  • Recovery took months rather than days.
  • The company still faced major financial impact without paying the attackers.

Next Steps

  • Use this case when modeling ransomware scenarios so recovery, legal, communications, and regulator coordination costs are included.
  • Review how your incident response plan handles long-tail recovery and repeat intrusion attempts after the initial containment phase.

Read more

Google introduced Sec-Gemini v1 as an experimental cybersecurity-focused AI model

Google announced Sec-Gemini v1, an experimental model intended to improve cybersecurity workflows with access to real-time threat intelligence. At the time of the roundup it was only available to research partners, so the announcement mattered more as a direction-of-travel signal than as something security teams could deploy immediately.

Key Details

  • The model is positioned as a cybersecurity-specific AI system.
  • Google says it uses real-time threat intelligence to strengthen defensive workflows.
  • Access was limited to research partners rather than general availability.

Next Steps

  • Track where narrowly scoped security AI tools can actually reduce analyst workload instead of treating every new model announcement as deployment-ready.
  • Ask vendors to explain what live data sources, safeguards, and evaluation benchmarks sit behind any claimed cybersecurity AI capability.

Read more

Subscribe

Subscribe to receive this weekly cybersecurity news summary in your inbox every Monday.