How data analysts spend their time day-to-day is changing as they incorporate AI into their workflows. Tasks that used to take days or weeks, such as profiling a new dataset, cleaning messy data, or generating a prototype visualization, can now happen in minutes.

The promise of AI for data is that it can free analysts from rote computational tasks so they can focus on those that benefit most from human judgment: interpreting results, stress-testing assumptions, figuring out what a finding means for the business. However, there’s been little discussion of the role of human judgment before AI enters the chat.

AI has transformed analysis. It hasn’t yet done the same for the inquiry needed before analysis is executed — deciding what question is worth asking, defining your terms precisely, and understanding what you'd do differently depending on what the data says. 

Trusted, impactful data analyses only happen when teams ask the right questions. In late 2025, only 10% of data practitioners reported being confident in AI-generated insights. While that number has likely shifted, there’s still a lack of trust in the results of AI-powered analysis. What happens when AI produces technically correct answers to the wrong questions? The query ran cleanly, the chart looked right, the numbers added up — but the analysis still missed the point, because the question wasn't quite right to begin with.

Good analysis starts with good questions 

Good analysis starts with good questions, which is why the best analysts take time to deeply understand their data and start analysis from a place of clarity. The data analysis process is well-defined, but inquiry is often more fluid and less formal.

When data analysis was slow and expensive, questions and hypotheses were naturally pressure-tested. Analyses passed through multiple hands before reaching stakeholders, and bad questions or assumptions got caught in review and refined through collaboration. The delay between asking and answering created natural space for reconsideration.

AI makes it easy to bypass that space.

Additionally, as AI democratizes access to data, stakeholders across the organization are now querying data directly. This lets them be more data-driven in their daily work, but it also means more questions are being asked by people with less context for evaluating the answers, their uncertainty, and their limitations.

How to improve inquiry 

Now that AI has accelerated the execution part of data analysis, it’s important for analysts, and really anyone asking questions about their data, to invest in solid inquiry before simply firing off a bunch of prompts. Inquiry isn't just intuition. It's a learnable practice with identifiable components. Before any significant analysis begins, three questions are worth asking: If the data proves me wrong, will I know it? If the answer comes back differently than I expect, will anything change? And can everyone who reads this result agree on what the key terms mean? If the answer to any of those is no, the question isn't ready yet.

Is this question falsifiable? A falsifiable question is one where the data could, in principle, tell you you're wrong. The opposite, a question structured to confirm what you already believe, isn't really analysis. It's validation. This shows up more than most analysts would like to admit: "Can you show me the impact of our recent campaign?" assumes impact exists and asks only for a narrative to explain it. "Did our recent campaign have an impact?" is a different question entirely. AI is particularly susceptible to this framing. Ask it to show you impact and it will find it, whether or not it's real.
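One way to make the campaign question falsifiable is to phrase it as a test the data could fail. As a rough sketch (the signup numbers and the campaign scenario here are invented for illustration), a simple permutation test asks whether the observed before/after difference could plausibly be chance:

```python
import random
import statistics

def permutation_test(before, after, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference in means.

    Returns the observed difference and an approximate p-value:
    the fraction of random relabelings of the pooled data that
    produce a difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(after) - statistics.mean(before)
    pooled = list(before) + list(after)
    n_after = len(after)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_after]) - statistics.mean(pooled[n_after:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical daily signups before and after a campaign launch.
before = [10, 12, 11, 13, 12]
after = [12, 14, 13, 15, 14]
observed, p_value = permutation_test(before, after)
```

The point isn't the statistics; it's the shape of the question. Framed this way, there is a version of the output that says "no effect." Framed as "show me the impact," there isn't.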

Is this question decision-relevant? A decision-relevant question is one where a different answer would produce a different action. If the data shows the opposite of what you expect, would anything change: a budget, a product decision, a strategy? If not, the question probably isn't ready yet. 

Because AI makes analysis so cheap to produce, the volume of low-decision-relevance work is only going to increase. Decision relevance is the filter that keeps inquiry focused on what actually matters.

Are the key terms defined precisely? Every key term in a question should have a single, agreed-upon definition that will be calculated and interpreted consistently. "Active users," "revenue," and "churn" mean different things to different teams, and AI will resolve that ambiguity silently by using whatever definition is most available in the data.
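A toy example of how quietly this divergence happens (the event log, user IDs, and cutoff windows here are invented for illustration): two perfectly reasonable definitions of "active users," applied to the same data, return different counts.

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, event_type, event_date)
events = [
    ("u1", "login",    date(2025, 1, 28)),
    ("u2", "login",    date(2025, 1, 5)),
    ("u2", "purchase", date(2025, 1, 25)),
    ("u3", "login",    date(2025, 1, 30)),
]

today = date(2025, 1, 31)

# Definition A: logged in within the last 7 days.
active_a = {u for u, etype, d in events
            if etype == "login" and (today - d) <= timedelta(days=7)}

# Definition B: performed any event within the last 30 days.
active_b = {u for u, etype, d in events
            if (today - d) <= timedelta(days=30)}
```

Definition A counts two active users; definition B counts three. Neither is wrong, but an AI assistant will pick one without flagging the choice, so the definition has to be agreed on before the query runs.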

These three criteria matter individually. But their organizational implications are what make them urgent, especially when an AI agent can operationalize a poorly defined metric across an entire organization in seconds.

What this means for data teams

Data teams are uniquely positioned to help operationalize good inquiry within their organizations. Their first priority should be to model good inquiry practices by making inquiry an explicit part of the data analysis process. Documenting good inquiry habits and sharing these practices internally can help create a repeatable standard for both analysts and stakeholders.

The second priority is to help stakeholders improve their own inquiry and analysis skills. Self-serve analytics exposed more stakeholders to data, and AI has widened the aperture even further. As more people across the organization gain direct access to data through AI tools, data teams have an opportunity to raise the quality of the questions being asked organization-wide. The key is that this is accomplished through collaboration and shared learning — not gatekeeping.

Lastly, both data analysts and stakeholders need to develop rigor around verifying AI’s output. Good inquiry is an important input for quality analysis, but it’s not the only factor at play. AI makes mistakes. An aesthetically attractive chart isn't the same as a correct one. Anyone using AI for analysis needs to validate the output for accuracy. 

Conclusion

AI is only going to get faster and more capable. The volume of data that AI analyzes will increase, and the bar for producing a polished-looking result will keep dropping. As that bar drops, the importance of asking good questions that yield trusted, decision-relevant answers will only increase.

While there's a lot we don't know about the future of AI-powered data analysis, one thing is clear: data teams that invest in building a practice and culture of rigorous inquiry now will be better positioned to succeed.