Analytics teams should own outcomes, not outputs
Step 1 in building an impactful and empowered analytics team
There’s a handful of evergreen topics in analytics circles. How to build analytics teams and organize them for success. How to make analytics teams more proactive and measure their impact. How to balance answering ad-hoc questions against doing deeper, investigative analysis. How to grow your own and your team’s influence within your organization. The most important qualities to look for when hiring.
I wrestled with these same questions when I was leading analytics at Lyft from 2014-2020. What I learned scaling the team during a period of hyper-growth is that the answer is highly context-dependent. What’s needed at an early-stage startup where everyone knows each other is fundamentally different from what’s needed at a highly matrixed public company with multiple business lines.
The other (and more important) thing I learned is that you naturally arrive at the right answers if you focus on the right first-order problem.
Deliver outcomes, not outputs
One of my favorite mental models is the concept of outputs versus outcomes. In the context of analytics, outputs are the analyses, reports, and dashboards people build. Outcomes are the product decisions and business impact of those decisions.
Most analytics managers feel ownership, if not direct responsibility, over the analytics outputs their teams produce. They care about data quality, picking the right metrics, the technical soundness of analyses, the usability of charts on dashboards, answering questions with the correct SQL queries.
Fewer managers feel ownership over the business outcomes themselves. This group of people will debate product and business decisions with PMs and other leaders in the company. They have strong opinions about the end user and product experience, and they feel the weight of responsibility if the product misses a goal or OKR.
Adopting an ownership mindset over product and business outcomes is the key to building an empowered and influential analytics team.
The most successful analytics managers I know focus their energy and their team’s energy on delivering great business outcomes, rather than delivering analytics outputs. This doesn’t mean that analyses and dashboards don’t matter; on the contrary, you need to produce great outputs to drive business outcomes.
The difference is the starting orientation. Teams that strive to deliver outcomes as their north star create two important incentives:
They prioritize projects based on impact. People find meaning in doing work that is impactful, not just interesting. In fact, some of the most impactful work is often quite mundane.¹
They deliver the last mile. People go to great lengths to ensure that product insights are reflected in the product roadmap, that dashboards are used, and that the right decisions are made using the right data.
Second-order effects
It turns out that orienting on outcomes as the first-order problem tends to be incredibly clarifying for many second-order questions, like how to organize, how to be more proactive, how to grow the team’s influence, and how to hire.
Let’s reason about it.
In order to own the business outcomes as a data analyst, you need to deeply understand the business context first. You need to understand the product strategy and the day-to-day tactical decisions. Depending on the size of your company, you may need to embed with the product team to achieve this.
Once you understand the product, you can begin to separate the important questions from the rest. You’re able to identify new questions nobody is asking. And as you adopt a mindset of improving the product, you start to ask the same questions as your product manager, which organically shifts the balance of work to be more proactive rather than reactive.
Sometimes the line between analytics and product gets blurry, because your best analysts will seem capable of serving as acting PM. Product managers looking to scale their own impact may delegate parts of the roadmap to their analysts. When they are on PTO, the analyst might end up leading the team’s product discussions.
As a manager, your influence comes from the impact your team delivers in the organization. At a certain point, the product teams that work with analysts will start outperforming, and people will start seeing analytics as a strategic lever for the company. Product and engineering leadership will help make the case for growing your team, and you’ll get more headcount to scale your impact even further.
Over time, you start pattern-matching and realizing the most successful analysts on your team are product-minded and excited about having more ownership. You reward and promote them for their impact, and you start to screen for those qualities in your hiring process.
Unrealized potential
Two years ago, I wrote that “data teams have the greatest unrealized potential of any function.” I believe this is more true today than ever.
With a growing ecosystem of data practitioners armed with modern data technologies, we’ve mostly solved the talent and tooling problem.² Today, the main thing that’s preventing analytics teams from having more impact is themselves. Will they accept the challenge?
¹ I’ve been reminded time and time again of the business impact of a well-designed, highly-trafficked dashboard.

² The modern data stack has democratized the technology part of the problem. Ten years ago, startups did not have the resources to stand up a simple end-to-end analytics pipeline. Today, one person could do this in a couple of hours using off-the-shelf tools. Communities like LocallyOptimistic and dbt have significantly grown the base of data practitioners.
I have at points in my career argued exactly this point, but I have come to see some serious limits in this idea. The fundamental problem is that output accountability without decision-making authority is unstable.
Consider a simple case. A marketing DS manager reviews a launch plan for a product with a push messaging strategy that is extremely abusive: based on the message trigger logic and forecasted product adoption, the median user will receive 10 push messages on launch day. She argues the case with her primary marketing partner, she argues the case with the PM, and everyone shrugs. "So what?" they say. "We'll tune the messaging rules after launch if you can show that it's causing churn."
In the past I might have argued this was a failure of influence: "a truly effective analyst would have been able to convince people that this was an issue and to change their plan. If you failed to convince them, you need to consider how to take a different approach that might work better next time." However, I would now diagnose this as a problem with PM and GTM accountability. If they don't think an abusive use of the push notification system is a problem, there is no foundation for influence. No data scientist will win this argument, because the PM is accountable for hitting her launch date and adoption targets, while someone else (maybe) is accountable for the push notification unsubscribe rate.
In other words, holding data staff accountable for outcomes only works in a case where their partners have good accountability models that are the same as the data scientist's. Maybe you have seen that more often than I have! But in my experience, there is constant tension between what a PM cares about and what a data scientist cares about, and that is a persistent barrier to an outcome accountability model.
I have found one useful bit of synthesis between these two perspectives. There is a spectrum of accountability between output and outcome, and the higher level someone is in an org, the more outcome-centric their accountability should be. For an IC (anything below principal), I prioritize output accountability, because they don't own decisions about what to work on, or the decisions made by others that result from their work. For principal ICs and managers, I expect outcomes BELOW their level reliably, and AT their level sometimes. In other words, a line manager data scientist should be able to convince IC PMs of important matters, but may or may not reliably get positive outcomes from peer PM managers. And they should only be accountable for securing good outcomes when influencing UP their own reporting chain, one step. They should not be accountable for influencing UP and ACROSS the org. That is the job of someone above them in their own management chain who has the necessary context and peer relationships across the org to reset the accountability of other staff to an appropriate level.