
Tuesday, April 15, 2025
Kevin Anderson
As generative AI expands both opportunity and risk in digital ecosystems, OpenAI has taken a decisive step into cybersecurity. In its first-ever investment in a cybersecurity startup, the OpenAI Startup Fund co-led a $43 million Series A round for Adaptive Security, a New York-based firm developing AI-driven defense tools against modern social engineering threats.
Adaptive Security specializes in simulating AI‐generated hacks—such as deepfaked phone calls, emails, and text messages—to train employees in recognizing and neutralizing potential attacks. As AI‐powered social engineering becomes more accessible to malicious actors, the need for advanced employee‐focused security training is greater than ever.
The digital security landscape has shifted rapidly with the rise of generative AI. Threat actors now use these tools to clone voices for deepfaked phone calls, write convincing phishing emails, and craft spoofed text messages.
These tactics are not only more convincing but also easier to scale—putting organizations of all sizes at increased risk. Social engineering, where hackers manipulate employees into giving up access or information, remains one of the most effective attack vectors. The rise of generative AI has only amplified this threat.
Unlike traditional cybersecurity firms focused on firewalls and intrusion detection, Adaptive Security takes a human-first approach to AI security. Its platform uses AI to simulate the same tactics used by attackers, such as deepfaked voice calls, AI-written phishing emails, and fraudulent text messages.
These simulations help identify weak points in human behavior and train staff to detect manipulative content before it results in a breach. The platform is already being used by over 100 customers, with strong feedback from security teams validating its effectiveness.
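To make the human-first simulation approach concrete, here is a minimal, hypothetical sketch in Python. It is not Adaptive Security's actual platform or API; it only illustrates the general idea described above: deliver simulated attacks over email, voice, and text channels, record how employees respond, and surface which people and departments need follow-up training. All class and function names are invented for illustration.

```python
# Hypothetical sketch of an employee-focused social-engineering simulation campaign.
# Not Adaptive Security's product -- just a toy model of the concept: simulate attacks,
# record responses, and identify where human defenses are weakest.

from dataclasses import dataclass, field
from enum import Enum
from collections import defaultdict


class Channel(Enum):
    EMAIL = "phishing email"
    VOICE = "deepfaked phone call"
    SMS = "spoofed text message"


class Outcome(Enum):
    REPORTED = "reported"  # employee flagged the message to security (desired behavior)
    IGNORED = "ignored"    # no action taken
    ENGAGED = "engaged"    # clicked a link, shared info, etc. (a simulated breach)


@dataclass
class SimulationResult:
    employee: str
    department: str
    channel: Channel
    outcome: Outcome


@dataclass
class Campaign:
    results: list[SimulationResult] = field(default_factory=list)

    def record(self, result: SimulationResult) -> None:
        self.results.append(result)

    def failure_rate_by_department(self) -> dict[str, float]:
        """Share of simulated attacks each department engaged with."""
        totals = defaultdict(int)
        failures = defaultdict(int)
        for r in self.results:
            totals[r.department] += 1
            if r.outcome is Outcome.ENGAGED:
                failures[r.department] += 1
        return {dept: failures[dept] / totals[dept] for dept in totals}

    def needs_training(self) -> list[str]:
        """Employees who fell for at least one simulated attack."""
        return sorted({r.employee for r in self.results if r.outcome is Outcome.ENGAGED})


if __name__ == "__main__":
    campaign = Campaign()
    campaign.record(SimulationResult("alice", "finance", Channel.VOICE, Outcome.ENGAGED))
    campaign.record(SimulationResult("bob", "finance", Channel.EMAIL, Outcome.REPORTED))
    campaign.record(SimulationResult("carol", "engineering", Channel.SMS, Outcome.IGNORED))

    print(campaign.failure_rate_by_department())  # {'finance': 0.5, 'engineering': 0.0}
    print(campaign.needs_training())              # ['alice']
```

In practice, a real platform would generate the simulated content itself and integrate with email, telephony, and messaging systems; the point here is simply how response data maps to targeted training decisions.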
This marks the first time OpenAI has invested in a cybersecurity company, signaling a shift in how the organization views its role in the broader tech ecosystem.
Key implications:
- OpenAI is openly acknowledging that its own technology can be weaponized for social engineering.
- The company is moving beyond building generative models to funding defenses against their misuse.
- The investment points to a maturing AI ecosystem that is beginning to balance innovation with responsibility.
Adaptive Security's co-founder and CEO, Brian Long, is no stranger to successful tech ventures. His resume includes:
- Co-founding TapCommerce, a mobile advertising startup acquired by Twitter in 2014
- Co-founding Attentive, a widely used SMS marketing platform
Long’s credibility and experience in scaling B2B tech solutions add confidence that Adaptive can execute effectively as it expands its engineering and go‐to‐market teams.
Adaptive is entering a growing field of startups tackling AI-enabled threats.
These startups represent the broader "AI defense layer" emerging in response to generative AI proliferation. Adaptive Security’s niche—human‐targeted simulations—fills a critical gap in proactive, employee‐level defense.
According to Long, the $43 million in Series A funding will primarily go toward:
- Hiring engineers to keep the platform ahead of rapidly evolving AI-generated threats
- Expanding the go-to-market team as Adaptive scales beyond its initial customer base
With phishing, voice spoofing, and deepfake content on the rise, the demand for AI‐native cybersecurity platforms is only expected to grow.
The investment by OpenAI into Adaptive Security reflects a maturing AI ecosystem—one that is beginning to balance innovation with responsibility.
As generative models become more powerful and accessible, it’s critical that defenses evolve just as quickly. Human‐centric tools that simulate real‐world threats provide an effective, scalable way to prepare organizations for the challenges ahead.
By backing Adaptive Security, OpenAI is not only acknowledging the risks posed by its own technology but also taking an active role in building the infrastructure to protect against them.