Earnings call: Dynatrace outlines growth amid AI and cloud trends
2024.06.07 08:27
In a recent earnings call, Dynatrace (NYSE: DT) CEO Rick McConnell provided insights into the company’s performance and strategic initiatives. McConnell highlighted the importance of observability and AI in driving the company’s growth, with a particular focus on generative AI and Hypermodal AI technologies.
The company’s business model has evolved with the launch of a new pricing strategy, the Dynatrace Platform Subscription (DPS), aimed at improving customer satisfaction and consumption growth. Partnerships with firms like Accenture (NYSE: ACN) and Deloitte have influenced a significant portion of deals, and the company is poised to capitalize on the increasing movement of workloads to the cloud. Despite competitive acquisitions in the market, Dynatrace remains confident in its differentiated offerings and its potential in the application security space.
Key Takeaways
- Dynatrace’s observability solutions are increasingly centralized, leveraging AI for enhanced performance.
- The company’s partnerships influence over two-thirds of its deals.
- Dynatrace has introduced the DPS pricing model, moving to a consumption-based approach.
- The company is optimistic about its position in the application security market.
- Recent market acquisitions have impacted the competitive landscape, but the full effects are yet to be assessed.
Company Outlook
- Dynatrace is scaling its business by targeting the top end of the market.
- The company is realigning account management resources to support this strategy.
- The convergence of log management and observability presents a significant market opportunity.
Bearish Highlights
- The shift to DPS has reduced upfront visibility into financials.
- Customers have been dissatisfied with the previous licensing model, prompting the switch to DPS.
Bullish Highlights
- Positive customer feedback on product effectiveness and value delivery.
- Strong customer growth in log management and application security sectors.
- Potential uplift from customers transitioning to Dynatrace’s log management solution.
Misses
- The transition to a consumption-based model has delayed the achievement of certain financial targets.
Q&A Highlights
- Dynatrace’s competitive differentiators include contextual analytics, Hypermodal AI, and automation capabilities.
- The market’s response to the company’s go-to-market changes has been positive.
- Investments in application security are expected to yield substantial opportunities, particularly in areas where observability data is crucial.
Dynatrace (ticker: DT) remains focused on leveraging its strengths in AI and observability to drive future growth. The company’s strategic direction, bolstered by its new pricing model and market positioning, aims to address customer needs while navigating the evolving competitive landscape. With a clear vision for the convergence of various data types in observability and a commitment to application security, Dynatrace is poised to continue its trajectory in the tech industry.
InvestingPro Insights
In light of Dynatrace’s recent strategic developments, it’s crucial to consider the financial metrics that could impact investor sentiment. The company boasts an impressive gross profit margin of 82.51% for the last twelve months as of Q4 2024, indicating a strong ability to control costs and deliver value from its sales. This aligns with the company’s focus on product effectiveness and value delivery mentioned in the Bullish Highlights section of the article.
The market has assigned Dynatrace a high valuation, as evidenced by its Price/Earnings (P/E) ratio of 89.62, and an adjusted P/E ratio of 87.41 for the same period. This high earnings multiple could suggest investor optimism about the company’s growth prospects, despite the transition to a consumption-based pricing model that may have temporarily obscured financial visibility.
InvestingPro Tips for Dynatrace reveal that the company holds more cash than debt on its balance sheet, which may provide financial flexibility and stability as it continues to scale its business and invest in application security. Additionally, analysts predict the company will be profitable this year, a positive indicator for potential investors.
For those interested in a deeper dive into Dynatrace’s financial health and future outlook, InvestingPro offers additional tips that can provide a more comprehensive analysis. As an incentive, readers can use the coupon code PRONEWS24 to get an additional 10% off a yearly or biyearly Pro and Pro+ subscription. There are currently 13 additional InvestingPro Tips available for Dynatrace, which could further inform investment decisions.
Full transcript – Dynatrace Holdings LLC (DT) Q1 2023:
Jake Roberge: [Call Starts Abruptly] …coming, everyone, today. Just to kick things off: my name is Jake Roberge. I’m the research analyst at William Blair who covers Dynatrace. For a full list of research disclosures, please visit our website at williamblair.com. But with that, I’d like to introduce Rick McConnell, CEO of Dynatrace. Thank you for joining us today.
Rick McConnell: My pleasure. Thanks. Good to be here, Jake.
Jake Roberge: Yes. I guess just to kick things off, maybe you could level set for people who may be newer to the story. Maybe give a quick overview of the business and the markets you’re addressing within observability, and just start with a high-level overview of the story?
Rick McConnell: Sure. So Dynatrace is in the business of helping companies make their software work perfectly. That’s the starting point of the story. We do that by participating in what is about a $50 billion market for observability, and observability is really targeted at using data types like logs, traces, metrics, and other elements to analyze software workflows and make them work better. It turns out that in a cloud world, it is harder, not easier, to make software work perfectly. What you have is an explosion of data, a massive increase in its complexity, and workloads that are harder to make work well. So this is what we do: we analyze these workflows and these data types to deliver software that works fundamentally much better than it would otherwise.
Jake Roberge: That’s helpful. And I guess, just to also touch on some of the recent dynamics: you’ve talked a lot about the increasing rate of adoption for these large platform deals, where companies are looking to consolidate a lot of observability workflows. So maybe talk about where the industry was before, what’s causing these platform consolidations, and then what the move to these consolidated observability platforms does for you from a competitive positioning standpoint?
Rick McConnell: Well, to start, virtually every application has some observability solution. It just happens to be the case that many of them were internally developed using open source software or otherwise. And what’s happened is precisely as I said earlier: these workloads are getting harder and harder to analyze and make work. So imagine a network operations center with 100 people staring at a sea of glass, trying to figure out what’s broken, what’s working, and then how do I make it work better. And then something goes wrong. Your first question is, oh my God, what broke? And then you start triaging where it broke and how to fix it. This turns out to be an increasingly difficult problem to solve. Now, observability used to be deployed largely departmentally, so an application team or an infrastructure team would deploy observability solutions, usually dashboards. A dashboard is a visual mechanism to see whether it’s red, yellow or green. Is my software working? Is it not working? Is it sort of working? And in the event that it went red, you would try to triage and figure out where it broke and how to fix it. In many cases, this could take minutes, hours, at times days to get your software working again. What Dynatrace does is use AI, not just generative AI, but predictive and causal AI that we’ve used for more than a decade, to automatically analyze workloads. And in automatically analyzing the workloads, we can deliver not just a dashboard of red, yellow, green; we will tell you precisely where the issue is in your software, to enable a rapid reduction not only in the number of incidents, but also in the amount of time it takes to repair an incident once it occurs. It is this automated mechanism that really differentiates Dynatrace in our market.
Jake Roberge: That’s helpful. And then do you think platform consolidation is a theme that continues even as the macro improves? Or do you think that’s just the result of a tight kind of budgetary environment where people are saying, hey, we need to eliminate the 10 or 15 different monitoring tools and go on to these broader observability platforms?
Rick McConnell: I think it’s a trend. I mean, this isn’t just something that’s happening occasionally or temporarily; I think it’s durable. And the logic is that there are really three primary reasons for companies to look at sophisticated observability tools. One is around user experience: if software is down, users aren’t having a very good experience, and you want to avoid that. The second one is productivity: if you have dozens of people sitting on a triage call, that means they’re not innovating instead. And thirdly is cost: if you have all of these different systems that you are trying to manually coalesce in a productive way to figure out what’s going on, then not only does it not work very well, but it’s very cumbersome and doesn’t lead to a rapid outcome. So for these reasons, organizations are beginning to centralize the decision around observability rather than go with the departmental approach they had before. Centralizing the decision speaks directly to this platform-type approach, where you want the best possible outcome. The best possible outcome comes from a completely integrated system that has one common data store, and that common data store brings together all of the observability data types into one place that can be analyzed in unison using AI. By doing so, you get to the best possible outcome in the most rapid time possible. This is why the decision process at large organizations is beginning to converge, I would say continuing to converge, to a more centralized structure where it’s the CIO, the CTO, or another C-level executive of some sort who is now increasingly making the observability decision. And as they make that decision, it speaks directly to Dynatrace’s strength in the market.
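To make the “one common data store” idea concrete, here is a minimal, hypothetical Python sketch (an illustration only, not Dynatrace’s actual data model or API) of correlating logs, traces, and metrics by a shared trace ID so an incident can be analyzed in one place:

```python
from collections import defaultdict

# Hypothetical, simplified records; real observability data has far richer schemas.
logs = [
    {"trace_id": "abc123", "level": "ERROR", "message": "payment service timeout"},
    {"trace_id": "def456", "level": "INFO", "message": "checkout completed"},
]
traces = [
    {"trace_id": "abc123", "service": "payment", "duration_ms": 5300},
    {"trace_id": "def456", "service": "checkout", "duration_ms": 120},
]
metrics = [
    {"trace_id": "abc123", "name": "cpu_utilization", "value": 0.97},
    {"trace_id": "def456", "name": "cpu_utilization", "value": 0.41},
]

def correlate(*streams):
    """Group every record from every stream under its shared trace_id."""
    unified = defaultdict(list)
    for stream in streams:
        for record in stream:
            unified[record["trace_id"]].append(record)
    return unified

unified_store = correlate(logs, traces, metrics)

# A single pass now sees the logs, spans, and metrics for the same request together,
# which is the basic idea behind analyzing observability data "in context".
for trace_id, records in unified_store.items():
    if any(r.get("level") == "ERROR" for r in records):
        print(f"trace {trace_id}: investigate", records)
```

The point of the sketch is only that a shared key across data types lets one query see the whole picture, rather than stitching together separate departmental tools.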
Jake Roberge: Yes, that makes a lot of sense. And then you’ve been talking more about partners recently. How big of a role do GSIs or hyperscalers play moving forward, in terms of being able to generate more leads for the platform and really starting to lead deals, versus just partnering with you on deals when customers are looking to consolidate so many different observability workloads?
Rick McConnell: Yes. Right now, more than two-thirds of our deals are influenced, as we say, by partners; we have partners involved in the deal. They are originating about 30% or so of them. So I would love it to be the case that Accenture, Deloitte, Kyndryl or DXC, our primary GSI partners, actually originated more deals. But I’m actually pretty happy that they’re involved in the deals in the first place, because they do tend to accelerate deal closure, they can often centralize the deal faster than we can, and they make it bigger. Last quarter, when we reported our earnings in May, we noted that we closed an 8- and 9-digit TCV, five-year deal with Accenture, for example. This deal probably would have been broken into multiple parts, taken a lot longer to close, and been much more fragmented across geographies if we had closed it directly versus closing it with Accenture. So this is an example of the leverage we believe we can get out of GSIs in particular.
Jake Roberge: That’s helpful. Maybe just transitioning over to AI, because that’s the big topic in software land these days.
Rick McConnell: Really.
Jake Roberge: Yes. Just a little bit.
Rick McConnell: I’ve noticed.
Jake Roberge: Maybe if you could talk about what you’ve historically done with Davis AI, how that’s transitioning into the opportunity you’re seeing with Hypermodal AI, and how that’s been supplemented with generative AI recently?
Rick McConnell: Sure. I thought you were going to ask me how generative AI is sucking all the oxygen out of software spend in the market or something like that, which I’ve gotten asked several times today as well. Our view of AI is that it isn’t just generative AI. Generative AI is a productivity boost that supplements other techniques with a natural language interface to broaden the use of, and access to, the Dynatrace platform, which is fantastic. Love that. It brings the Dynatrace platform from SREs, for example, who know how to write scripts in Dynatrace, to a much, much broader array of end users who can now query the platform using our CoPilot solution, which is generative AI. This is pretty new, launched in the last quarter or so. But we think of AI from a Dynatrace perspective as Hypermodal AI, and Hypermodal AI actually includes three different AI techniques: causal AI, predictive AI, and generative AI. In the case of causal and predictive AI, we’ve had those in the platform for well more than a decade; this is not new to Dynatrace. Causal AI is designed to address root cause analysis: something goes wrong, what happened, where did it break? It gives you a very precise answer as to where it broke so that it can be fixed rapidly. I was meeting with the CIO of a large Australian bank, and he said, “I’m using Dynatrace to move my mean time to repair incidents from hours to minutes to seconds. That’s my strategy.” And we use causal AI to do that. Predictive AI takes causal AI one level further, which is to analyze billions of data points associated with those workflows over the course of time to anticipate where there’s going to be an issue and then help remediate it in advance of it becoming an incident. We had a case that we sometimes talk about with British Telecom, BT, where their expectation using Dynatrace was to consolidate a whole bunch of other tools, to reduce the number of incidents by 50%, and to reduce the mean time to repair the incidents that remained by 90%. So back to my earlier comments around productivity, cost, and user experience: imagine the benefits to user experience and productivity if you can actually reduce your incidents by 50% and the amount of time you spend working on incidents by 90%. This is a monumental advancement in the reliability and automation of software.
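As a rough illustration of the causal-AI concept described above, the toy sketch below (an illustrative simplification, not the Davis AI algorithm; the service names and health signals are hypothetical) walks a service dependency graph from an alerting service and reports the deepest unhealthy dependency as the likely root cause:

```python
# Toy service dependency graph: each service lists the services it calls.
dependencies = {
    "frontend": ["checkout", "search"],
    "checkout": ["payment", "inventory"],
    "payment": ["database"],
    "search": [],
    "inventory": [],
    "database": [],
}

# Hypothetical health signals, e.g. derived from error rates or latency.
unhealthy = {"frontend", "checkout", "payment", "database"}

def root_causes(alerting_service):
    """Return unhealthy services with no unhealthy downstream dependency,
    i.e. the deepest points in the failure chain."""
    causes, seen, stack = set(), set(), [alerting_service]
    while stack:
        service = stack.pop()
        if service in seen:
            continue
        seen.add(service)
        unhealthy_deps = [d for d in dependencies[service] if d in unhealthy]
        if service in unhealthy and not unhealthy_deps:
            causes.add(service)
        stack.extend(unhealthy_deps)
    return causes

print(root_causes("frontend"))  # -> {'database'}: the failure propagates upward from here
```

The idea it tries to capture is the shift from “the frontend dashboard is red” to “the database is the place to fix,” which is what a precise root-cause answer looks like compared with a status dashboard.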
Jake Roberge: That’s really helpful from a platform perspective. Maybe, since you mentioned it in the lead-up to that last question, about a lot of investors wondering if AI is sucking the air out of the room, maybe you could talk about…
Rick McConnell: No, you can’t ask the question. I asked it first.
Jake Roberge: Maybe you could touch on that.
Rick McConnell: No. You’re supposed to answer that. I’m asking you?
Jake Roberge: What you’re hearing from customers and their spending priorities heading into the back half of this year?
Rick McConnell: Yes. I mean I can’t speak to it more generally. What I would say for Dynatrace is we haven’t seen that sort of impact in our business at this juncture.
Jake Roberge: Okay. That’s helpful. And then in terms of expanding the market opportunity, I think the other interesting thing about generative AI is that it’s another large workload moving to the cloud that needs to be observed, monitored, and secured. How are you thinking about the opportunity, just from a workload perspective, that generative AI could present to you over time?
Rick McConnell: It is a great question, because on the one hand, we use AI to execute our business in observability. On the other hand, AI actually results in more workloads, and more workloads means more to observe. The more applications there are, the more workloads, applications, and infrastructure there are to then monitor and manage accordingly. So from our standpoint, generative AI is a terrific thing from a number of different angles, but one of them, indeed, is an acceleration of workloads that need to be managed and overseen.
Jake Roberge: That makes sense. And then maybe shifting gears over to the go-to-market motion. It’s something you’ve been talking a little bit about. Maybe you could just talk through what’s changing with the go-to-market motion? How disruptive is it? You’ve talked about 30% of accounts being redistributed. So maybe just talk through a lot of those changes and how you see them trending throughout the year?
Rick McConnell: Our new CRO, Dan Zugelder, began about 10 months ago, and almost immediately we began evaluating, okay, what do we want to adjust in our go-to-market to really scale this business materially? That’s really what we’re after. Dan came from VMware. He is used to scale. And so we’re thinking not just six months ahead, but a year ahead, two years ahead, three years ahead. How do you build a business that grows from where we are today, at $1.5 billion or so of ARR, to something substantially greater than that, given the market opportunity in front of us? Because we believe that a company of that magnitude is supportable by the market. And as we looked at it, our discovery was that the biggest opportunity is really at the top end of the market. This is where the TAM exists primarily, in our view. And so the result is that that’s where we wanted to increase emphasis. Now, we have not walked away from the notion of the Global 15,000 as our target customer base. It’s certainly not the case that we have simply eliminated all of our reps who were working at the middle part of the pyramid and moved them to the top end of the pyramid, but we have shifted some. And the result of that is this 30% notion of account switchover. What I would say about 30% account movement is that in a normal year, it’s maybe 15% to 20%. So I wouldn’t compare 0% to 30%; it’s a little bit higher than normal, but it’s not radically higher. And furthermore, we discovered the typical strategic account executive, at the very top of the pyramid, typically had eight or nine accounts, but he or she would make their number on three or maybe four of them, and they wouldn’t really get to the other ones at the level of detail they could. For strategic accounts, virtually all of these accounts are doing more than $1 million a year for us in ARR today, and the vast, vast majority of them are maybe, maybe 20% deployed. So we view this as an enormous white space, and this is one of the reasons we saw such momentum last quarter, when we closed $18 million plus deals in one quarter. It was a record closure for the quarter. It included our first 9-digit TCV deal. It included our largest ever, roughly 8-digit ACV new logo, a large airline, and then many, many other accounts as well. So we believe that this sort of consolidation trend of platforms, end-to-end observability, et cetera, is driving momentum in the market that we can take advantage of. We want to make sure that we have the capacity there to catch it. And so those are the changes we’ve made.
Jake Roberge: That makes a lot of sense. And I guess now that those territories have been realigned, how have those changes been received by the go-to-market team? I know you’ve had some sales kickoffs recently, so maybe talk about how the changes have actually been received in the field.
Rick McConnell: The sales kickoff that we just completed back in April was one of the best I’ve ever attended. I can say this because I didn’t preside over it; I participated in it, though. Just an amazing response. Our account execs are fired up. Part of it is momentum coming off the Q4 performance, for sure, but part of it is what they see in the market opportunity as well: a huge market coming our way with significant differentiation around Dynatrace, our story, and our solution in that market space, which I think the typical rep would describe as, wow, this is a big opportunity for me to really succeed and grow this year.
Jake Roberge: That makes a lot of sense. Maybe just to take a step back, you and Jim have talked a lot recently about pipeline growing faster than ARR, and about there being visibility into a potential acceleration in ARR at some point. Maybe you can just talk about what the building blocks are. Is it all macro? Is it these platform consolidations? Is it partners? Maybe walk through some of the biggest building blocks that get you excited about that pipeline growth and the potential for acceleration down the road?
Rick McConnell: The biggest building block, really, bar none, is the shift that’s occurring in the industry around the necessity of software working well. That is driving, I’d say, this increased consolidation trend toward making software work better using observability capabilities. And as that builds momentum, I think that’s really one of the biggest catalysts that we see. A second one is new products: the addition of log monitoring and log management, and the addition of application security to our portfolio. These are elements that provide more traction for customers in adjacent spaces. And then thirdly, over the course of the past 15 months or so, we’ve adjusted our licensing approach to what we call the Dynatrace Platform Subscription, or DPS as we refer to it, which basically moves us to an ELA, an enterprise license agreement type of approach, where you simply make a commitment of dollars over a one- to three-year span and then consume against it over the course of time using that subscription model. It seems to be getting pretty substantial traction in terms of consumption by customers relative to our prior model. We are, in fact, seeing consumption growth in DPS customers that is double the rate of our prior pricing model. And so that lends us some pretty significant upside opportunity as we look to the future and add more DPS customers to the installed base.
Jake Roberge: That’s helpful. And a common feedback point that I get from investors is: hey, Dynatrace is a great company operating in a really large market, but there are other large players in this market. So I’m curious to get your take on how you compete in a market where there are other large players that people could adopt, and how you eventually become one of those long-term winners?
Rick McConnell: There are other competitors in our market? I had noticed. It is important to us to continue to focus on differentiation based on our strengths, and where we win is at the larger account size, for all the reasons I’ve described earlier. Our competitive differentiation is really in three areas. One is contextual analytics. By having a single common data store that captures all of these data types in one contextual data store, we can provide a level of automation and capabilities that others simply can’t provide. Second is Hypermodal AI. We’ve talked about this, but the notion of AI analytics applied to those common data types gives us an advantage in the market by being able to get to answers, not just red, yellow, green status indicators or dashboards. And the third piece is, in fact, automation itself. As I am privileged enough to talk to CIOs around the planet, their comment to me is not, how do you fix problems faster? Of course they want to do that. It is, how do I eliminate incidents altogether? If you get to the point where you trust the answers a platform like Dynatrace gives on these issues, then you can actually automate the solution. I’ll give you the simplest of examples. Let’s say you were going to run out of capacity on a server farm sitting at AWS in Virginia. Why that would ever happen or should happen is beyond me, but it happens all the time. What if, instead, we could predict, based on your usage and flows, that that was going to happen, and then automate the solution by provisioning more capacity in real time so the issue never occurs? That is a very rudimentary example, admittedly, but there are so many others I could give you that speak to the notion of automating activity to prevent issues from happening in the first place. And this is what our customers want to see.
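As a rough sketch of the predict-and-remediate pattern in that capacity example (illustrative assumptions throughout; the data, threshold, and provisioning call are hypothetical, not Dynatrace’s actual workflow), one could extrapolate a capacity metric and trigger provisioning before it is exhausted:

```python
# Hypothetical hourly disk-usage samples (fraction of capacity used).
usage_history = [0.70, 0.73, 0.77, 0.80, 0.84]

def hours_until_full(history):
    """Naive linear extrapolation: average growth per hour -> hours until 100% used."""
    growth_per_hour = (history[-1] - history[0]) / (len(history) - 1)
    if growth_per_hour <= 0:
        return float("inf")
    return (1.0 - history[-1]) / growth_per_hour

def provision_capacity():
    # Placeholder for a real action, e.g. a cloud API call that adds nodes or volumes.
    print("Provisioning additional capacity ahead of predicted exhaustion.")

if hours_until_full(usage_history) < 8:  # act well before the predicted incident
    provision_capacity()
```

A production system would use far better forecasting than a straight line, but the shape is the same: predict the incident, then automate the remediation so it never becomes one.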
Jake Roberge: No, that makes sense. And then you’ve recently launched two pretty big new products with log management and application security. On the last quarterly call, you talked about possibly pushing those $100 million ARR targets out into the next year versus the prior expectation of fiscal 2025. Could you clarify exactly what you meant by the push-out of those targets? And then does the trend of customers moving over to the new DPS pricing model give you visibility and the ability to map consumption to ARR?
Rick McConnell: It’s interesting, and maybe this is not intuitive, but moving to DPS actually gives us slightly less upfront visibility in ARR as to how customers expect to use the portfolio. It used to be the case that we would license a number of host units for you to use with application performance monitoring or infrastructure monitoring, and we would license logs and application security independently. When I first began at Dynatrace two and a half years ago, I would go ask customers the same things: What do you like about Dynatrace? What do you not like about Dynatrace? On what they liked, the feedback I would get was: your product, your solution, is awesome. It works great. It solves my issues. It does what it says it’s going to do. You drive enormous value, and we have benefited substantially from its deployment. On what we did not do so well, it was: your licensing model is not great. And I think that was the generous way of putting it. It was because you had to provision all of these different applications separately. You had to contract for log management, then contract for AppSec, and then contract for host units. It was very arduous. We put DPS in place to address this. In so doing, what we essentially enabled was: just give us a spend commit, and you can use it however you wish against a particular rate card. And that’s what we’ve done. But it also forces us to use consumption as the measure for deployment of logs, application security, and other elements, and that is a retrospective measure as opposed to a prospective measure like ARR. So this shift is part of the cause for us pushing out when we believe we hit some of these numbers in AppSec and log management. What I would say is that customers have grown 100% year-over-year in each of these areas, and usage has grown substantially. So we feel good about both spaces, but it’s going to take us a little bit longer to get there on a consumption basis.
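To illustrate the consumption mechanics of a spend-commit model like DPS (the rate card entries, units, and figures below are hypothetical, not Dynatrace’s actual pricing), a simple burn-down calculation might look like this:

```python
# Hypothetical rate card: price per unit of consumption, in dollars.
rate_card = {
    "host_monitoring": 0.08,   # per host-hour
    "log_ingest": 0.20,        # per GiB ingested
    "app_security": 0.10,      # per host-hour protected
}

annual_commit = 500_000.0      # a customer's hypothetical yearly spend commitment

monthly_usage = {              # hypothetical usage drawn against the commit
    "host_monitoring": 250_000,  # host-hours
    "log_ingest": 40_000,        # GiB
    "app_security": 100_000,     # host-hours
}

monthly_burn = sum(rate_card[k] * v for k, v in monthly_usage.items())
annual_run_rate = monthly_burn * 12

print(f"Monthly consumption: ${monthly_burn:,.0f}")
print(f"Annualized run rate: ${annual_run_rate:,.0f} against a ${annual_commit:,.0f} commit")
print(f"Commit utilization:  {annual_run_rate / annual_commit:.0%}")
```

This also shows why the measure is retrospective: how much of the commit went to logs versus AppSec is only known after the usage has happened, unlike a per-product license sold up front.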
Jake Roberge: No, that makes sense. With 100% year-over-year customer growth and DPS customers consuming much faster than the others, it’s a when, not an if. Most customers you’re landing with log management today are, I know, trialing new workloads before moving their existing workloads over from the legacy incumbents. So maybe talk about the subset of customers that have moved over the whole farm, where they’re saying, “Hey, we’re not just giving you the new workloads. We’re also giving you the existing workloads.” How large of an uplift could that be for you? And then how long does it typically take a customer to get to that realization?
Rick McConnell: The answer to your last question, how long does it take, is that it varies wildly. The way we look at log management is, first of all, that it is an enormous market. Take a look at the players in the market, Splunk and others; it’s a giant existing market. Before I drill down into how log management rolls out, our view is simply that the log management market and observability have evolved relatively independently. That’s not going to continue, because it really doesn’t make any sense in the end. Those should converge. Why should they converge? Because the contextual analytics I talked about earlier, coming out of the Dynatrace platform, deliver better answers based on having all of the data types for observability in one place. Having logs, traces, metrics, behavioral analytics, and real-time, real-user experience all together delivers the best outcome in things like incident management and resolving issues. So that’s one of the reasons we believe all of this will converge, frankly irrespective of Dynatrace; even if you listen to other vendors in the space, they’ll tell you the same thing. It makes sense to converge these data types. Now, when you get to log management, the way we expect it to roll out is POC or trial, early rollout, later rollout, and then migration of competitive workloads. Out of our 600 log customers, we have customers at each of those four stages. Some of the larger ones have done major competitive takeouts already and are already running on the Dynatrace platform. Others are going to make that transition at a varied rate. Ultimately, it makes sense to us to have a converged observability solution, inclusive of log management infrastructure, running on the platform.
Jake Roberge: Yes, that makes sense. And then I know it takes some time to actually start impacting the marketplace, but have you seen any change, maybe on the POC front, as a result of the recent acquisitions that have taken place? We’ve seen Splunk get acquired, and Sumo Logic. Even on the APM side, we’ve seen New Relic get acquired. So have you seen any impact to the competitive ecosystem following those acquisitions?
Rick McConnell: I would say, certainly in Splunk’s case, it’s way too early to tell. There are certainly some customer questions about what happens next, but Splunk is very entrenched in customers, so I think that takes a while to play out, and we’ll just have to watch it accordingly. In terms of some of the other vendors in the space, I do think it’s impacting the environment at the margin.
Jake Roberge: Yes. That’s helpful. Maybe the last question of the day here: you’ve recently gone into application security. Maybe talk about how that adoption has been trending and the opportunity you see ahead to expand into security?
Rick McConnell: Sure. Our application security business, I believe, can be a substantial business for us at Dynatrace. This is why we’re investing in it accordingly. Having said that, security is a pretty crowded space with lots of giants, and the strategy we’ve deployed at Dynatrace is very specific: we want to compete in areas of application security where observability data has a differential impact. So areas like the Log4j crisis, vulnerability analytics, runtime application protection (RAP), and cloud SIEM are where we’ll be investing in security, because these are areas where having access to the full suite of observability analytics and instrumentation makes all the difference in the world. So that’s where we’re investing. I think it can be a very dynamic and interesting space for Dynatrace as we look at it.
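As a simplified illustration of why runtime observability data helps with application security (the library names, versions, and vulnerability range below are illustrative only, not a real vulnerability feed or Dynatrace’s product logic), a check of runtime-loaded libraries against a known-vulnerable range might look like this:

```python
# Toy example: flag libraries that instrumentation actually observed loaded in production
# and that fall inside a known-vulnerable version range (Log4Shell-style scenario).

loaded_libraries = {          # hypothetical runtime observations
    "log4j-core": (2, 14, 1),
    "spring-core": (5, 3, 20),
}

vulnerable_ranges = {         # library -> (min_version_inclusive, fixed_version_exclusive)
    "log4j-core": ((2, 0, 0), (2, 17, 0)),
}

def exposed(libs, ranges):
    """Return (name, version) pairs that are loaded and within a vulnerable range."""
    findings = []
    for name, version in libs.items():
        if name in ranges:
            low, fixed = ranges[name]
            if low <= version < fixed:
                findings.append((name, version))
    return findings

for name, version in exposed(loaded_libraries, vulnerable_ranges):
    print(f"{name} {'.'.join(map(str, version))} is loaded at runtime and within a vulnerable range")
```

The differential impact Rick describes is the “loaded at runtime” part: knowing which vulnerable components are actually running and reachable, rather than merely present on disk, is what observability instrumentation can add to a static scan.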
Jake Roberge: Yes. Great opportunity. Well, thanks, Rick. Appreciate you spending the time with us. Thank you, everyone in the room, for joining, and those joining over the webcast.
Rick McConnell: Thank you all. Appreciate it. Thank you.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.