
Tuesday, April 15, 2025

Kevin Anderson

OpenAI Makes Strategic Cybersecurity Investment in Adaptive Security

As generative AI expands both opportunity and risk in digital ecosystems, OpenAI has taken a decisive step into the cybersecurity sector. In its first-ever investment in a cybersecurity startup, OpenAI’s startup fund co‐led a $43 million Series A round for Adaptive Security—a New York‐based firm developing AI‐driven defense tools against modern social engineering threats.

Adaptive Security specializes in simulating AI‐generated hacks—such as deepfaked phone calls, emails, and text messages—to train employees in recognizing and neutralizing potential attacks. As AI‐powered social engineering becomes more accessible to malicious actors, the need for advanced employee‐focused security training is greater than ever.

Table of Contents

  1. The New Threat Landscape: AI‐Enhanced Social Engineering
  2. What Adaptive Security Does Differently
  3. Why OpenAI's Investment Matters
  4. The Founder Factor: Brian Long’s Track Record
  5. The Bigger Picture: An AI Security Arms Race
  6. Strategic Use of Funds: Scaling for Defense
  7. Final Thoughts: From Offense to Defense in the Age of AI
  8. Sources and Further Reading




The New Threat Landscape: AI‐Enhanced Social Engineering

The digital security landscape has shifted rapidly with the rise of generative AI. Threat actors now use generative AI tools to:

  • Clone voices and impersonate executives
  • Craft hyper‐realistic phishing emails
  • Generate fake documents like invoices and receipts

These tactics are not only more convincing but also easier to scale—putting organizations of all sizes at increased risk. Social engineering, where hackers manipulate employees into giving up access or information, remains one of the most effective attack vectors. The rise of generative AI has only amplified this threat.




What Adaptive Security Does Differently

Unlike traditional cybersecurity firms focused on firewalls and intrusion detection, Adaptive Security takes a human‐first approach to AI security. Its platform uses AI to simulate the same tactics used by attackers, such as:

  • Voice cloning for spoofed phone calls
  • Text and email phishing using realistic generative language
  • Scoring organizational vulnerabilities based on employee response

These simulations help identify weak points in human behavior and train staff to detect manipulative content before it results in a breach. The platform is already being used by over 100 customers, with strong feedback from security teams validating its effectiveness.




Why OpenAI's Investment Matters

This marks the first time OpenAI has invested in a cybersecurity company, signaling a shift in how the organization views its role in the broader tech ecosystem.

Key implications:

  • Acknowledgement of dual‐use risk: OpenAI’s move reflects an understanding that generative AI can empower both creators and attackers.
  • Reinforcing trust in AI: By supporting defensive solutions, OpenAI demonstrates a commitment to responsible innovation.
  • Market influence: OpenAI's backing could accelerate interest and funding in AI‐specific cybersecurity startups across the sector.
  • Validation from co‐investors: By co‐leading the round alongside Andreessen Horowitz, OpenAI also validates Adaptive’s business model and places it among high‐growth, high‐visibility AI ventures.




The Founder Factor: Brian Long’s Track Record

Adaptive Security’s co‐founder and CEO, Brian Long, is no stranger to successful tech ventures. His resume includes:

  • TapCommerce, a mobile ad tech startup acquired by Twitter for over $100 million
  • Attentive, an enterprise communication platform last valued at over $10 billion

Long’s credibility and experience in scaling B2B tech solutions add confidence that Adaptive can execute effectively as it expands its engineering and go‐to‐market teams.




The Bigger Picture: An AI Security Arms Race

Adaptive is entering a growing field of startups tackling AI‐enabled threats. Notable examples include:

  • Cyberhaven, which helps prevent sensitive data from being exposed to LLMs, recently valued at $1 billion
  • Snyk, focused on identifying security flaws in AI‐generated code, with over $300M in ARR
  • GetReal, a deepfake detection platform that raised $17.5M to combat misinformation and identity fraud

These startups represent the broader "AI defense layer" emerging in response to generative AI proliferation. Adaptive Security’s niche—human‐targeted simulations—fills a critical gap in proactive, employee‐level defense.




Strategic Use of Funds: Scaling for Defense

According to Long, the $43 million in Series A funding will primarily go toward:

  • Hiring engineers to scale platform capabilities
  • Expanding simulations across new communication channels
  • Keeping pace with emerging threats in the generative AI landscape

With phishing, voice spoofing, and deepfake content on the rise, the demand for AI‐native cybersecurity platforms is only expected to grow.




Final Thoughts: From Offense to Defense in the Age of AI

OpenAI’s investment in Adaptive Security reflects a maturing AI ecosystem, one that is beginning to balance innovation with responsibility.

As generative models become more powerful and accessible, it’s critical that defenses evolve just as quickly. Human‐centric tools that simulate real‐world threats provide an effective, scalable way to prepare organizations for the challenges ahead.

By backing Adaptive Security, OpenAI is not only acknowledging the risks posed by its own technology but also taking an active role in building the infrastructure to protect against them.

