Palo Alto Networks just put everyone else who is hoping to compete in AI/ML Security on notice.
Their ~$500M+ (intended) acquisition of Protect AI is nothing short of a declaration to win this emerging market before competitors get their strategy together.
This is going to be one of the headline deals for 2025. Maybe even the entire decade if everything goes right.
A deal like this comes with a lot of questions and skepticism, though. We only see strategic buyers making $500M+ cybersecurity acquisitions a few times per year. Major cybersecurity markets get created even less frequently.
The Palo Alto Networks-Protect AI deal was an extra special blend of combustible ingredients. A dash of Palo Alto Networks, platformization, AI, and a high revenue multiple is a recipe for uproar in the cybersecurity industry.
A lot of things had to come together across the market for AI/ML Security, Protect AI, Palo Alto Networks, and more over several years to make a transaction like this possible.
Let's talk about everything that made this happen, starting with the wildest early stage market I've ever seen.
AI/ML Security is the definition of an embryonic market
AI/ML Security is a hard market for strategic buyers because there isn't a recognized pattern or product architecture to follow yet. This is still a very embryonic market — or, in Nikesh Arora's words, a "rapidly evolving market."
Financing activity in the market is a pretty good indicator for how messy and embryonic this market really is.
According to Return on Security data, financing for this market more than doubled from 2023 to 2024, jumping from $181.5 million in annual investment to $369.9 million.
A disproportionate amount of the money went to Series A and earlier stage companies — so, basically the definition of an embryonic market.
Protect AI was one of the most well-funded pure AI/ML Security companies, with $108.5 million raised across three rounds. They raised a $60 million Series B in August 2024, which made them one of the most mature (but still relatively early stage) companies in the space.
Here's the thing, though. All of this early stage startup activity means we have terms like AI-SPM, AI Detection and Response, and MLSecOps — but most buyers don't intuitively recognize them yet. They just know they have AI problems and need a way to solve them.
Enter the DeepSeek problem.
AI/ML security got real for people quickly during one fateful week back in late January. The conversation went from, "hey, LLM scaling might be slowing down" to "oh, &@%$, this DeepSeek thing is out of control" in 0.002 seconds.
I can't think of another example where an AI app became so popular so quickly.¹
This meant security teams didn't have much time to act. We basically had to rely on tools and processes that were already implemented. There was no time to put something new in place.
The sudden rise of DeepSeek stress-tested the current state of security controls for AI in every large enterprise. Researchers discovered a few bad vulnerabilities, but (thankfully) no widespread attacks or breaches were reported.
Security for AI/ML was on our radar long before then, of course. In hindsight, DeepSeek was a wake-up call that contributed to some of the market activity we're seeing today.
Securing user inputs directly to DeepSeek (as opposed to self-hosted models) is an adjacent problem to what Protect AI's product suite solves. It was close enough to get business and security leaders thinking about the power and risks of both self-hosted and first-party models.
Right now, the tailwinds for the AI/ML security market are looking exponentially better than they did during our "LLMs aren't scaling" spell in Q4 2024.
Back then, I would have said we were only going to have three to five third-party models that everyone was going to use (OpenAI, Anthropic, etc.). In that scenario, there would be basically no need for AI/ML security (for the models themselves, I mean) outside the handful of companies building them.
After DeepSeek (and other open source models), it seems a lot more likely that companies with enough motivation and resources are going to both host and post-train third-party models. Ambitious companies will even train their own first-party models. We already have some good examples of this in cybersecurity with Google's Sec-Gemini v1 and Cisco's Foundation-sec-8b.
It sure looks like this trend of open source models, reinforcement learning, post-training, and model proliferation is going to continue. This is a fundamentally different world for security than big tech and AI labs controlling a majority of model development and training.
In this version of the story, security teams have to play an active role in protecting their companies' own models — not just prompts and data coming in and out of other people's models.
Training bespoke models is definitely a first world (read: enterprise and tech company) problem, but the addressable market for first-party models (and, by proxy, securing them) is probably going to be bigger than we thought a few months ago.
Let's hop back in time again and talk about how Protect AI put itself on the map with a highly counter-intuitive strategy.
Protect AI’s rapid platform build
The story of how Protect AI put itself in a position to be acquired by Palo Alto Networks is a case study for the ages in cybersecurity industry history.² There are a bunch of interesting strategic moves that the Protect AI founders made to make the company (and acquisition) of today possible.
Protect AI was one of the OG companies in AI/ML Security, founded way back in 2022 (ha) by a team of AI/ML experts with successful exits in the past.
I know 2022 doesn't seem like very long ago, but remember: ML security wasn't an obvious domain for a startup back in 2022. ChatGPT wasn't released until late November that year, and it took even longer for businesses to start adopting it at scale.³
Securing ML code at the source in Jupyter notebooks with NB Defense was even less obvious. They knew what they were doing.
They swiftly built out their "AI Radar" platform strategy from there. Next was a central console and data layer, followed by static analysis for ML models (Guardian), dynamic testing (Recon), and runtime (Layer).
They used four acquisitions to help build their product suite. This is highly atypical for a company less than five years old that raised just over $100 million in total.
Protect AI had already done the grubby integration work, which made buying the platform more appealing to Palo Alto Networks than doing that work itself post-acquisition.
Protect AI broke all kinds of "conventional" rules to build a platform early...and rapidly. From Jupyter notebook plugin to one of the industry's leading AI/ML security platforms in three years is the definition of moving fast.
It takes two sides to make a deal happen, though. Let's talk about the trajectory of Palo Alto Networks' product strategy for AI and how it led to an acquisition.
The convergence of Palo Alto Networks’ AI strategy and Protect AI
A cynic might argue Palo Alto Networks is a mature company jumping on the AI bandwagon, but history tells a different story.
They acquired an ML-based behavioral analytics company in 2017 (LightCyber) before Nikesh Arora became CEO. This iteratively evolved into the Cortex platform we know today.
ML was embedded into their core NGFW product in 2020 (the PAN-OS 10.0 Nebula release). The rest is history.
They doubled down on AI for security after that. The launch of Cortex XSIAM in February 2022 was the beginning of the AI-powered-everything movement.
But in 2025, why would Palo Alto Networks spend half a billion or more on a company in the AI/ML Security market instead of building their own solution or piecing together a few smaller acquisitions?
They could have done either, but it would have cost them time to market. Here's an abbreviated explanation from Nikesh Arora shortly after the acquisition was announced:
What I’m most excited about is making sure that we get this AI thing right…there’s a whole new series of security challenges…I’m really excited that we’re taking a very assertive and aggressive view.
In this assertive and aggressive view, it makes a lot more sense to accelerate the roadmap and get to market faster.
Palo Alto Networks had the beginnings of AI/ML security products, but there still wasn't a ton of product overlap. So, they went for a platform instead of buying piecemeal.
Prisma AIRS is their platform of the future. They wasted no time telling the world about it after the acquisition was announced.
Give them credit for having the conviction to buy an early stage platform instead of trying to do multiple acquisitions and stitch all of this together at a cost of months or years on the roadmap.⁴
Was this the right strategic decision, though? Let's talk about a few of the counterarguments and risks I've seen since the deal was announced.
Demand, competition, moats, and more
Tom Le, the CISO at Mattel, wrote an incredibly thoughtful counter to my LinkedIn post about the transaction. I had to include and expand upon it in the full article.
The summary version of the counterargument is that Palo Alto Networks is throwing market cap (or cash — terms weren't officially disclosed) to buy an AI model protection capability with questionable demand, intense competition, a limited moat, and a risk of being subsumed into existing platforms.
How's that for a counterargument?! Valid points, for sure — and worth discussing in detail here.
Will customers buy the product, or does it have to be bundled for free?
Palo Alto Networks obviously wants to monetize the product and is betting on that happening eventually. They've shown willingness to be both patient and creative with emerging markets, and that's likely to be the case here.
A similar situation happened with Talon and the secure enterprise browser market. Palo Alto Networks was roundly criticized for paying $700-million-plus to acquire a promising but early stage startup in a promising but embryonic market.
They started out by giving the product away for free to existing customers. Now, it's generating a fair (and growing) amount of revenue — $30 million in total bookings as of their fiscal Q2'25 earnings report.
I'd expect a similar playbook here. They may not proactively give the product away. It could be bundled on a case-by-case basis depending on the customer and circumstances. Either way, there will be an incubation period while the product gets integrated and market demand becomes more established and predictable.
How are they going to compete with 25+ other AI startups?
If you were at RSAC, you probably saw an expo hall lined with companies selling AI guardrail/model/firewall/injection protection. As we discussed earlier, an embryonic market means a highly competitive market with everyone trying to gain traction.
This is another instance of the platformization debate. Sure, there are 25+ startups that can do flavors of AI/ML security — just like many other markets in cybersecurity.
The question becomes whether you want to manage another vendor relationship...or just push the easy button and buy the AI security module from Palo Alto Networks. The answer is even easier if you're already a customer, and easier still if they're willing to let you try it for free.
I'm not saying this is the best option for every organization, nor that the however-many-other AI/ML Security startups are hosed.
A relatively complete product suite from an established company is a big deal for a lot of buyers, though — especially large enterprises who want to make this problem go away and would rather not deal with startups in the process.
Does model protection have a low barrier to entry?
This could be the case. It's hard to answer definitively without seeing the underlying economics for a bunch of private companies building in this space.
Anecdotally, the financing data says otherwise — but it's obviously questionable to conclude that large amounts of invested capital mean AI/ML security is expensive to build.
Meta released several AI/ML security tools the same week the Protect AI acquisition was announced. The tools aren't a one-for-one match with everything Protect AI does, but still impressive.
You might conclude that a tech company building and open sourcing a bunch of AI tools as a side project means model protection isn't difficult or expensive — but this side of the case is anecdotal, too. Meta built Llama, an entire frontier model, and open sourced it. Time and money obey different laws of gravity in big tech.
Regardless, we've seen this movie before. Every major cybersecurity product category has respectable free, low cost, and/or open source alternatives. Companies still pay for products. Sometimes top dollar.
A lot of factors go into making moats and creating barriers for market entry. Technical difficulty and cost are two of them. Palo Alto Networks has distribution, brand equity, and several other factors that are equally hard (or harder) to replicate.
Are licensing, fair use, and IP infringement the more complex challenge for AI?
They definitely could be. Matthew Prince at Cloudflare has talked about this exact topic on at least two recent earnings calls. From their Q4'24 report:
Cloudflare counts many of the most important AI companies as customers. We also count a huge portion of the world's content creators as our users. Being between those two puts us in an important role to help figure out the business model of the post-search web. Cloudflare sits in a unique position to help figure out how content creators are compensated, what agents are allowed where, and on what terms, and how the AI-driven web of the future will fit together. It's early days, but the conversations we're having with all the relevant parties feel foundational for the future. Watch this space, definitely exciting times.
I think he's right, at least for the open web. Internal and proprietary data is a slightly different story.
One of the biggest reasons (and advantages) sophisticated companies are going to have with using first-party models is access to their own data.
If you've worked in or around an enterprise, you know a lot of the data is trash, and too many people have access to it. But, presumably, at least some of it is highly advantageous for AI.
The fundamental difference between the Matthew Prince view of the world and enterprises is this: they own their data, dumpster fire and all.
Protecting their data and the first-party models that consume it is priority number one. Their general counsel obviously doesn't want them getting into legal issues either, but that's more about managing risk.
Cybersecurity companies like Palo Alto Networks don't care about content on the open web in the same way Cloudflare does. Their enterprise customers come first, which means helping them secure their AI/ML stack top to bottom.
Will third-party AI platforms build all this protection in natively?
They could, but they have plenty of other high priority problems to solve.
Fragmentation of models, constant one-upping on benchmarks, and...oh, protecting against severe harm are going to keep them busy enough to let the cybersecurity companies deal with the cybersecurity problems.
Companies do build security and privacy features into platforms, of course. GitHub and GitLab building things like vulnerability scans, secrets management, and more into code repositories is a recent example.
The many other cases where native security didn't happen seem like better precedent here, though.
Cloud infrastructure is the closest one I can think of in terms of magnitude. AWS, Microsoft, and Google Cloud all prioritized features and growth over security for years...which, as we just saw, produced a $32 billion company named Wiz.
Even with the relative success GitHub and GitLab have had with native security features, there was still plenty of room left for companies in AppSec, Secrets Management, and more to hit nine figures of revenue.
Uncertainty is part of the deal when you're making strategic decisions in early stage markets. As a wise cybersecurity CEO (not named Nikesh Arora) once told me, “we’re all just making bets about the future.”
Palo Alto Networks made its bet. Let's talk about what comes next for the company and the rest of the industry.
Palo Alto Networks made its move — where do we go from here?
Palo Alto Networks acquiring Protect AI puts significant pressure on their relative peer group and other market-specific competitors to respond with moves of their own.
This doesn't necessarily mean everyone is going to make similarly sized acquisitions. There is more than one way to put together a product portfolio, of course.
What changed was the timing. After the deal closes, Palo Alto Networks will have put together the most complete AI/ML Security product portfolio outside of Cisco (via its Robust Intelligence acquisition).
This is likely going to mark the beginning of an AI security consolidation rush. Palo Alto Networks has spoken and made their move. The clock was already ticking for everyone else, but the window to deliver just got cut in half.⁵
As for Palo Alto Networks, I expect history is going to look back on this deal as another good one by the current leadership team. They made a reasonable bet on a complementary market with high upside while it was still relatively early.
It's not over, though. Palo Alto Networks still has to keep innovating and building on top of Protect AI — but they have demonstrated a reasonably good track record at integrating past acquisitions.
Palo Alto Networks did everything within their control: accelerate delivery of their AI security product suite and give customers a concrete option if they want to buy into a platform.
The strategic bet behind Palo Alto Networks' platformization strategy is that the sum of the parts is better than what others can offer as standalone products.
For Palo Alto Networks, this now becomes a market question: how big will the market for AI/ML Security get in the long term?
They don't have to completely and utterly dominate the AI security market for this to be a good deal.
Anything under a $1 billion acquisition price is a reasonable bet (and breakeven point) for Palo Alto Networks — especially in a market with this much upside.
Footnotes
¹I know it's fundamentally a model – I'm referring to their hosted web and mobile apps.
²This is the short, strategy-focused version. Ed Sim and the boldstart team have you covered on the full timeline.
³It's still debatable whether businesses have adopted it at scale or not yet. "Scale" is such a relative term in AI that you could easily argue we're not even close to scale today, and you'd probably be right.
⁴Also, credit to their early investors for investing in AI/ML security super early. The thesis for this market isn't totally clear even today. It definitely wasn't back in 2022 when boldstart and Acrew Capital cut the first checks for Protect AI's seed round. Evolution Equity Partners led both the A and B rounds, and everyone kept doubling down well before an outcome like this was obvious.
⁵Waiting for the market to develop or sitting it out entirely are viable strategic options, although I doubt many peers will intentionally choose to sit on the sidelines.