In November 2018, over 600 people attended the Flat Earth International Conference in Denver, CO. Its proponents argue that the scientific consensus of a “round” Earth is unnecessarily complicated and contradicts the visual data from our own eyes: the world appears entirely flat. Their favorite data points are often difficult for a non-expert to explain: distant buildings that, they claim, should not be visible over a curved horizon, or balloons and helicopters that can hover and land on the same spot without being affected by the Earth’s rotation.
Without an expert in the room, it can be difficult, if not impossible, to counter these claims with real data. In some cases, the answer to these claims is yet another theory, which then must be validated with different data.
This can leave anyone feeling paralyzed: without a foundation in debunking Flat Earth claims, or without the necessary knowledge of physics, you can easily find yourself in the uncomfortable position of questioning whether the Flat Earther is making reasonable claims.
This paralysis points to a fundamental problem with how we often approach knowledge, data, and reasoning in the world and, more specifically, in business. On the one hand, we know humans are subject to confirmation bias, so we are advised to remain skeptical or neutral unless presented with objective evidence.
On the other hand, by taking skepticism too far, we risk falling prey to paralysis: letting our fear of accepting an idea without proof overrule even our most basic understanding of the world.
While the Flat Earth example is extreme, it demonstrates the importance of a tenuous balance: being data-driven enough to fight unfounded ideas, while avoiding data-dependency, in which decisions, beliefs, and knowledge cannot exist without a comprehensive accounting of all relevant data on an issue.
In some settings, being overly data-driven might be warranted: the military might be slow to adopt new weapons technology without extensive field testing of its safety and reliability, and the Food and Drug Administration (FDA) may not approve new medicines without meeting the standard of randomized controlled trials.
But in business, that same caution can become a liability.
In the early 2000s, Sears was still a retail giant. But as e-commerce emerged, they hesitated. There wasn’t enough historical data to prove consumers would abandon malls and catalogs for websites, so they focused on store operations and cost-cutting, assuming the model that built their empire would keep it intact.
Meanwhile, companies like Amazon and Walmart started building robust digital infrastructures, investing in fulfillment, UX, and omnichannel experiences, even when the short-term data didn’t yet justify it. By the time consumer behavior caught up and e-commerce exploded, Sears had lost its footing, and the window to catch up had closed.
In this case, their reliance on historical success blinded them to early indicators of change.
Data alone didn’t fail them; rather, a narrow interpretation of what data could tell them did. This scenario raises a broader question: how can you tell if your organization has crossed the line from being data-driven to data-dependent?
I’ve identified a few telltale symptoms.
It often starts with a creeping sense of inertia. Teams spend significant time and effort collecting data, building dashboards, and producing reports, yet, despite the visibility, nothing moves.
At the same time, confidence in the data starts to erode. Metrics feel inconsistent. Stakeholders question the source or the methodology, even after major investments in infrastructure and analytics tools.
And finally, when new strategies are proposed, the default response is hesitation: waiting for stronger proof, more validation, better alignment. The result is a culture that feels busy but stuck. Teams are working hard, but decisions aren’t getting made.
Spotting these patterns is important because they often signal deeper issues in how decisions are made. Making progress requires clarity around how strategy is set, how data is used, and how much risk the organization is willing to accept. These are the foundational shifts that move teams forward.
Throughout our work with enterprise organizations, one of the most common problems we find is teams paralyzed by data.
The signs of this problem are usually subtle: in many cases, teams are excited and open to discussing strategy and goals, but inevitably the result of those discussions comes back to collecting data, gathering additional information, or increasing the number of perspectives represented.
In all of these cases, the underlying problem is a dependency on data and information, rather than having a strategy or goal driving data collection or information gathering.
To fix this, start by keeping discussions focused on action.
If more information or data is needed, align on the actions to be taken assuming certain information or data comes in, including the action to take if the data or information are inconclusive. More generally, ensure you have a vision prior to collecting data or information, even if the vision is preliminary or subject to change.
For example, a director tasked with identifying opportunities to integrate artificial intelligence (AI) should have a vision for how these new technologies could help their team’s operations before holding extensive workshops to brainstorm ways AI can help the team.
In this example, a data-dependent director would immediately gather team managers and SMEs in multiple brainstorms to discuss their thoughts and ideas on integrating AI into day-to-day work. The risk here should be apparent: brainstorm participants are now burdened both with interpreting the request itself (i.e., the meaning of “integrating AI”) and with delivering information on the topic (i.e., where they see opportunities in their workflow). The results are likely to be disjointed, and the discussion risks being derailed into a variety of tangents. The information collected, while valuable, will likely be too overwhelming to be useful, requiring a second, third, or more rounds of brainstorming.
Contrast this with a data-driven director, who would identify specific value-adds from AI tools prior to brainstorming. In this case, they might identify interactively synthesizing information and proposing options as two potential value-add areas. They would then brainstorm with team members on a more specific question: what parts of your day-to-day work would benefit from being able to quickly synthesize information or quickly generate options? This creates more tailored, explicit answers, resulting in data (in this case, information from the teams) that is driven by a vision and can therefore be acted upon within a broader strategic AI playbook.
Too often, we think of “data” as a crystal ball. We set ourselves up for disappointment by treating the collection and analysis of data as if it frees us from the “old days” of trusting our gut or relying on historical experience. Yet as organizations have learned over the past decade, ever-larger investments in data do not necessarily bring increased clarity.
In fact, in our conversations with clients and prospects, one of the most common perceptions we hear is that their own data is unreliable, despite their investment in data sources, platforms, and advanced measurement. We often enter these engagements looking for inconsistencies in data; instead, we end up finding inconsistencies in practice.
Consider, for example, a large retailer with multiple teams sharing the same central data lake. Major questions surrounded the integrity of their data, because different teams had repeatedly calculated different values for critical KPIs (such as average order value).
However, as our team dug deeper into how this data was being used, we uncovered a simple but dangerous issue: each team maintained its own dashboards and built its own calculation of the same KPI from different foundational metrics.
If average order value is total revenue divided by the number of orders, some teams excluded orders (and their revenue) that were eventually returned, while others did not. This caused not only inconsistencies in the data, but also pulled valuable time from the data science and data infrastructure teams to debug, diagnose, and deliberate on what was otherwise a straightforward definitional issue.
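To make the discrepancy concrete, here is a minimal sketch in Python (the orders table and its column names are hypothetical, and pandas is assumed) showing how two teams can report different “average order value” figures from the exact same data, depending on whether returned orders are excluded:

```python
import pandas as pd

# Hypothetical slice of the shared orders table in the central data lake.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "revenue": [100.0, 250.0, 80.0, 120.0],
    "returned": [False, True, False, False],
})

# Team A: average order value over all orders, including those later returned.
aov_all = orders["revenue"].sum() / len(orders)

# Team B: average order value excluding returned orders and their revenue.
kept = orders[~orders["returned"]]
aov_excluding_returns = kept["revenue"].sum() / len(kept)

print(f"AOV (all orders): {aov_all:.2f}")                     # 137.50
print(f"AOV (excluding returns): {aov_excluding_returns:.2f}")  # 100.00
```

Neither calculation is wrong on its own; the problem is that the definition was never centralized, so the same KPI name produced two different numbers on two different dashboards.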
Here, a data-dependent functional team (and its leaders) passively consumes its data, treating measurement only as a tool to achieve some end goal, rather than engaging with measurement as an active practice within the team.
Further, the organization, prioritizing flexibility (and what it may perceive as transparency), understaffs and under-resources its data infrastructure team out of the belief that more data is always better: the priority becomes letting functional teams leverage as much data as possible (without the requisite expertise), rather than empowering the infrastructure team to drive governance strategy and ownership of the data.
Our recommendations included establishing a single, governed definition for each critical KPI (such as average order value); empowering the data infrastructure team to own governance strategy and those shared definitions; and asking functional teams to engage with measurement as an active practice rather than passively consuming dashboards.
By embracing measurement as an active, strategic practice, large organizations can fully realize the value of the data and technology investments they have made over the past decades.
The scientific method is a relatively conservative process. Established paradigms of knowledge should not simply be tossed aside without sufficient evidence, nor should new treatments be released to the public without standards and review.
However, industry does not move at the speed of scientific knowledge; the two are very different fields. Industry is meant to innovate: to try new methods, adapt to changing demands, and ultimately retain its clients and customers against ever-present competition.
Experimentation as a practice is meant to reflect this idea: industry should embrace experimenting with new ideas and continue to do so over time. However, experimentation and research are not cheap, and companies know this.
As a result, most large organizations operate conservatively in practice: they continue to do what has worked in the past and pivot only when circumstances force their hand. That would be fine, except that this conservative practice conflicts with what we know to be true about the business world: it must continue to innovate.
This tension between conserving what works and experimenting with the unknown, while a normal and healthy part of any large organization, can cause friction if not managed appropriately. The most important part of managing this tension within a large company is for leaders to align and communicate transparently about their own risk aversion, with respect to accountability for unsuccessful ideas.
Team leads, subject matter experts, and would-be innovators prefer to stay put unless they can prove that their new ideas are likely to work. Most of us would rather allow the system to continue as it is, as opposed to risking our livelihood on a failed idea.
This is perfectly normal and perfectly reasonable. However, it is also a reality that requires good leadership. A conservative culture alone is not a problem; a conservative practice paired with pressure to innovate is what breeds toxicity.
Too often, leaders fall into the same trap as the teams they manage: wanting to innovate, but not wanting to accept the risk which comes along with it.
The difference is that functional teams are usually very sensitive to the desires of their leaders: the feeling of leadership frustration over a lack of innovation, combined with the reality of heavy accountability for failed projects, causes teams to adapt in counterproductive ways. One of these maladaptive responses is to lean on data as a crutch: to spin wheels in pursuit of innovation to appease leadership, while staying put unless the data speaks clearly in order to appease the reality of corporate accountability.
If the budget does not allow for heavy investment in projects that will fail, then it is unrealistic to pressure teams into making large, bold moves quickly, since bold moves require bold failures (and thus bold investment without any return). Instead, focus on targeted improvements: if a team is overwhelmed by volume, empower them to experiment with different methods of consolidating work, even if it means temporary fluctuations in their overall KPIs.
Identifying specific, targeted improvements achievable without large-scale overhauls, whether found internally or in partnership with a trusted third party, can be an important part of a broader innovation strategy, especially when budgets are tight and risk tolerance is low. If you find yourself on the wrong side of data dependency, consider revisiting your innovation strategy to resolve the tension between the desire for change and the realities of your organization’s risk tolerance.
At Blazer, we partner with ambitious brands and leaders that refuse to settle for less.
Blazer is the consulting firm that helps brands prove what drives customer value, and builds teams that deliver on it.