AI is having its Promethean moment, transforming how we work, how we learn, how we communicate, even how we think.

And while AI’s power — its ability to code, to draft and summarize documents, to answer open-ended questions in seconds — is awesome to behold, AI is also imperfect. It makes mistakes. When it doesn’t “know” the answer, it often makes it up and hopes you don’t notice.

In human terms, AI lies.

AI’s mendacity will surely diminish over time, but it will also persist because, as the CEO of Anthropic says, “we do not understand how our own AI creations work.” This is not a simple bug to fix.

This “quirk” of AI is especially concerning when analyzing data. If AI is unpredictable, how can you trust it to make business-critical decisions? While some may be willing to ignore the risks and use AI chatbots for (often incorrect) black box answers to analytical queries, we wanted our approach to be centered on interpretable, verifiable answers.

AI’s unpredictability does not mean that it is useless; instead, like the fire of Prometheus, it must be wielded carefully. We cannot guarantee AI’s correctness, but we can make the derivation of AI answers easier to follow, and make it more obvious when AI goes “off the rails.” AI tools must encourage scrutiny; they must help users to interpret, verify, and correct AI answers. AI shouldn’t simply give the answer; it must show its work.

Observable Canvases make data analysis visual, and that’s exactly what’s needed to see through AI’s lies and harness it safely.

At each step — whether writing SQL or using visual operations to transform data — data is summarized visually so you can see the distribution along each column and browse sample rows. Pervasive visual summaries make it easier to inspect the data and ensure that the desired end chart or metric is valid by understanding how it was derived.
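To make the idea of per-column summaries concrete, here is a minimal sketch of the kind of summary a step might compute for its output table. This is illustrative only — the `Row`, `ColumnSummary`, and `summarizeColumns` names are hypothetical, not the canvas implementation:

```typescript
// Hypothetical sketch: compute a per-column summary for a table of rows,
// of the kind a canvas step might display (non-null counts, distinct
// values, and min/max for numeric columns).
type Row = Record<string, unknown>;

interface ColumnSummary {
  column: string;
  nonNull: number;
  distinct: number;
  min?: number;
  max?: number;
}

function summarizeColumns(rows: Row[]): ColumnSummary[] {
  const columns = rows.length ? Object.keys(rows[0]) : [];
  return columns.map((column) => {
    const values = rows.map((r) => r[column]).filter((v) => v != null);
    const numbers = values.filter((v): v is number => typeof v === "number");
    return {
      column,
      nonNull: values.length,
      distinct: new Set(values).size,
      min: numbers.length ? Math.min(...numbers) : undefined,
      max: numbers.length ? Math.max(...numbers) : undefined,
    };
  });
}
```

Rendering a summary like this after every transformation is what lets you spot a suspicious join or filter before it propagates into the final chart.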

The visual nature of canvases facilitates not only cross-functional collaboration amongst data teams and their stakeholders, but also collaboration with AI.

A human-centric approach to AI

The role of AI in canvases is to help users make decisions informed by data. We favor a human-in-the-loop approach where AI assists users, and is supervised by users, rather than automating decision-making.

To further our human-centric approach, we developed the following principles:

AI “sees” the same canvas you do. To make its behavior more predictable, and to ensure a consistent perspective, AI sees the same data — the same column summaries, the same sample rows — you see. And it likewise sees the same contents of the canvas, and what’s in the viewport, and the like. Everything on the canvas has a textual equivalent that can be included as context to AI.
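The principle above — everything on the canvas has a textual equivalent — can be sketched as a simple serialization step before prompting the model. The types and function below are hypothetical illustrations, not the actual canvas API:

```typescript
// Hypothetical sketch: serialize what the user currently sees (titles,
// column summaries, sample rows, chart descriptions) into plain text
// that is included as context for the model, so AI and user share the
// same view of the canvas.
interface CanvasNode {
  id: string;
  kind: "query" | "chart" | "note";
  title: string;
  inViewport: boolean;
  summary: string; // textual equivalent, e.g. column stats or chart description
}

function buildAIContext(nodes: CanvasNode[]): string {
  return nodes
    .filter((n) => n.inViewport) // AI sees what's in the viewport, like you
    .map((n) => `[${n.kind}] ${n.title}: ${n.summary}`)
    .join("\n");
}
```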

AI uses the same tools you do. There are no actions that can only be performed through AI; AI has no “extrajudicial authority”; anything AI can do, you can do by hand, too.

AI favors visual operations over “opaque” blobs of code. The visual design of canvases makes analysis more easily read and manipulated, so AI should generate visual operations whenever possible. AI may still generate code, but only when built-in visual tools don’t suffice. We’re building an extensive library of chart types and data transformations so that AI can produce more scrutable responses.

AI should be easy to undo and redo. AI will inevitably make mistakes; you may want to revise a prompt, try again, or even abandon AI for a manual approach. It should be easy to get back to earlier prompts and tweak them. AI-generated results should be easily undoable.
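One common way to make an AI action undoable as a single unit is to snapshot state before each action. The `History` class below is a generic sketch of that pattern under our own naming, not the canvas implementation:

```typescript
// Hypothetical sketch: record a snapshot of canvas state before each AI
// action so the whole action can be undone (or redone) in one step.
class History<T> {
  private past: T[] = [];
  private future: T[] = [];

  constructor(private present: T) {}

  commit(next: T): void {
    this.past.push(this.present);
    this.present = next;
    this.future = []; // a new action invalidates the redo stack
  }

  undo(): T {
    const prev = this.past.pop();
    if (prev !== undefined) {
      this.future.push(this.present);
      this.present = prev;
    }
    return this.present;
  }

  redo(): T {
    const next = this.future.pop();
    if (next !== undefined) {
      this.past.push(this.present);
      this.present = next;
    }
    return this.present;
  }
}
```

Because the snapshot covers everything the AI action touched, a single undo reverts the whole action rather than forcing the user to unpick it piece by piece.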

AI should work step-by-step. AI should break problems down into multiple steps so that the chain of thought is easier to follow, rather than trying a “Hail Mary” to answer a complex question in a single step. It should be obvious when AI is actively working, and when it adds content to the canvas. AI should be interruptible; when it goes wrong, you can quickly stop it.
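The step-by-step, interruptible behavior described above can be sketched as a loop that applies each step as it completes and checks for user interruption between steps. The `Step` shape and `runSteps` function are illustrative assumptions, not the product’s internals:

```typescript
// Hypothetical sketch of a step-by-step, interruptible AI run: each step
// lands on the canvas as soon as it completes, progress is surfaced to
// the user, and the run stops promptly when the user interrupts it.
type Step = { description: string; apply: () => void };

function runSteps(
  steps: Step[],
  signal: { aborted: boolean },
  onStep: (description: string) => void
): string[] {
  const applied: string[] = [];
  for (const step of steps) {
    if (signal.aborted) break; // user interrupted: stop before the next step
    step.apply(); // the step's result appears on the canvas immediately
    applied.push(step.description);
    onStep(step.description); // surface progress so activity is visible
  }
  return applied;
}
```

Working in small visible steps also means that when something goes wrong, only the offending step needs to be undone or revised, not the entire answer.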

AI is lightweight and iterative. AI should be immediately accessible, but also unobtrusive. AI should be fast to invoke, and fast to put away. It shouldn’t distract or beg for attention. It should not eat up precious screen real estate (leaving more space for charts). AI should only ask for attention when performing actions requested by the user.

We see AI helping in the following ways:

AI accelerates tedious, unfamiliar, or underspecified tasks. AI can quickly create queries and charts from short, natural language prompts. This saves time because you don’t have to first consult the relevant documentation (or a colleague if no documentation exists) for your data warehouse schema, for your data warehouse’s SQL dialect, or the like. AI can also translate vague questions (such as “how has product usage changed?”) into quantitative metrics that are a starting point for analysis.

AI teaches you how to use canvases. AI is the ultimate in-product help because it tailors its responses to your specific needs. AI can demonstrate how to use canvas features either in isolation or in combination. And when best practices are unfamiliar or tedious — such as adding annotations to explain, summarize, or capture context — AI can both demonstrate and automate.

AI is a creative partner. If you’re looking for inspiration, such as common industry metrics, ask AI for ideas. AI can also serve as a proactive investigator, noticing patterns or anomalies in the data and suggesting closer examination.

Try it today

We’re excited to introduce AI in Observable Canvases and to help you radically accelerate data analysis while retaining confidence and trust in the veracity of your insights. If you’d like access to canvases, please sign up now. We’re currently enabling early access for data teams, with general availability coming soon.