Gcore CEO warns of deepfake-era risks for enterprises
Andre Reitenbach, CEO of Gcore, is urging enterprises to rethink their security posture as AI-generated deepfakes and synthetic media become powerful tools for fraud, espionage and brand sabotage. Speaking on the escalating threat landscape, he argues that traditional perimeter-based cybersecurity is no longer enough to protect sensitive data, executive communications and proprietary AI models.
According to Reitenbach, advances in generative AI now allow attackers to convincingly mimic voices, faces and writing styles, enabling highly targeted social engineering, fraudulent instructions that appear to come from executives, and manipulated financial or legal documents. For large organisations operating across borders and time zones, the risk of a single convincing deepfake triggering a costly action is rising sharply.
‘Safe rooms’ for AI and critical decision-making
To counter this, the Gcore chief advocates the creation of digital “safe rooms” – tightly controlled environments where the most sensitive AI workloads, data sets and high‑impact decisions are processed. These environments combine hardened cloud infrastructure, strict identity controls, hardware‑backed encryption and continuous anomaly monitoring.
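To make the anomaly-monitoring piece concrete, here is a minimal rule-based sketch of the kind of check such an environment might run over access events. The event fields, thresholds and "business hours" rule are illustrative assumptions, not details Gcore has published.

```python
from datetime import datetime

# Minimal rule-based anomaly check over a safe-room access event.
# The event fields, thresholds and hours below are illustrative assumptions.

def is_anomalous(event: dict) -> bool:
    ts: datetime = event["timestamp"]
    off_hours = ts.hour < 7 or ts.hour >= 20        # outside 07:00-20:00
    unknown_device = event["device_id"] not in event["known_devices"]
    bulk_read = event["bytes_read"] > 5 * 2**30     # more than 5 GiB pulled
    return off_hours or unknown_device or bulk_read

event = {
    "timestamp": datetime(2025, 3, 14, 2, 30),   # 02:30, off-hours
    "device_id": "laptop-usb-7",                 # not a registered device
    "known_devices": {"workstation-1"},
    "bytes_read": 12 * 2**30,                    # 12 GiB of model weights
}
print(is_anomalous(event))  # True: flag for review
```

In practice such rules would feed a monitoring pipeline rather than a print statement, but the principle is the same: every touch of training data or model weights generates an event that can be scored and escalated.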
In these safe rooms, access to training data, model weights and inference pipelines is segmented and logged, while executive communications and approvals can be verified using multi‑factor and cryptographic checks. Reitenbach stresses that this is not merely a compliance exercise, but a way to preserve trust in internal signals when external information channels are being flooded with synthetic noise.
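As an illustration of what a cryptographic check on an approval could look like, the following sketch signs and verifies an executive instruction with an Ed25519 key pair, using the Python `cryptography` library. The key handling and message format are assumptions for the example; a real deployment would pair this with hardware-backed key storage and the multi-factor checks described above.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical approval flow: the signing key stays with the executive
# (ideally in a hardware token); the public key is distributed to approvers.

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

approval = b"APPROVE wire-transfer ref=4711 amount=250000 EUR"
signature = signing_key.sign(approval)

try:
    verify_key.verify(signature, approval)   # raises if the message was altered
    print("Approval verified: safe to act on")
except InvalidSignature:
    print("Approval rejected: possible forgery")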
From experimental AI to secure production systems
Reitenbach notes that many enterprises still treat AI initiatives as experiments running on ad‑hoc infrastructure, which leaves them exposed as pilots scale into core business systems. He argues that production‑grade AI infrastructure must now be designed with deepfake‑resilient identity management, content authenticity verification and secure data residency from day one.
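Content authenticity verification can be approximated, at its simplest, by registering a keyed fingerprint when media is produced in-house and checking it before the media is trusted. The sketch below uses an HMAC over a SHA-256 digest; the shared key and helper names are hypothetical, and production systems would more likely rely on signed provenance standards such as C2PA content credentials.

```python
import hashlib
import hmac

# Illustrative only: the shared key, tag format and helper names are
# assumptions; real deployments would favour signed provenance manifests
# (e.g. C2PA content credentials) over a bare HMAC.

SHARED_KEY = b"safe-room-provisioned-secret"

def register_media(content: bytes) -> str:
    """Issue an authenticity tag when media is produced in-house."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SHARED_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Check a received file against its tag before anyone acts on it."""
    return hmac.compare_digest(register_media(content), tag)

clip = b"...video bytes..."
tag = register_media(clip)
print(verify_media(clip, tag))           # True: untouched copy
print(verify_media(clip + b"x", tag))    # False: any alteration fails
```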
Providers like Gcore are positioning themselves as partners for this transition, offering sovereign cloud regions, high‑performance GPU clusters and integrated security controls tailored for AI workloads. For boards and CISOs, Reitenbach’s message is clear: building AI capabilities without a parallel investment in “safe room” security could leave even the most advanced enterprises vulnerable to the next generation of digital deception.

