
How to Run a 2-Hour AI Discovery Workshop That Delivers Results

Team collaborating in an AI discovery workshop, reviewing data and prioritising projects

Discovery workshops should be one of the simplest parts of starting an AI initiative, yet many become unproductive. Teams spend hours in idea generation mode, fill sticky notes with “could do” features, then walk out with no decision and no clearer path forward. The intention is good, but the outcome isn’t.

At Studio Graphene, we’ve found the opposite approach works better: start small, stay structured and focus entirely on problems, not ideas. The goal of a two-hour discovery workshop isn’t creativity. It’s clarity. You’re trying to determine whether AI can meaningfully solve a real problem, whether the data exists to support it and what the team should do next - nothing more.

The first step is capturing real pain points. What slows people down? What gets repeated endlessly? What causes delays? Focusing on real problems keeps the conversation honest and avoids the “feature wishlist” trap. Once you know what the problems are, the rest becomes easier.

The next step is a quick data readiness check. And here’s the part most teams misunderstand: data is the lifeblood of AI, but big datasets don’t guarantee better outcomes. In fact, they can often slow you down, create noise and hide the signals that actually matter. Teams assume they need huge, pristine datasets, but in early discovery you just need to know if the right data exists in any usable form. This is a light touch assessment, not a deep audit.
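A light-touch readiness check like the one above can be as simple as recording, for each pain point, which data sources exist, whether the team can actually access them, and whether they’re in a usable format. The sketch below illustrates that idea; the field names, labels and example sources are made up for illustration, not a prescribed schema.

```python
# Hypothetical light-touch data readiness check for one pain point.
# Each source records whether it is accessible and in a known format -
# enough for early discovery, without a deep audit.

def readiness(sources):
    """Return a rough readiness label from a list of data-source records."""
    if not sources:
        return "no data identified"
    usable = [s for s in sources if s.get("accessible") and s.get("format") != "unknown"]
    if len(usable) == len(sources):
        return "ready to explore"
    if usable:
        return "partially ready"
    return "data exists but not usable yet"

pain_point = {
    "problem": "Support tickets triaged manually",
    "sources": [
        {"name": "helpdesk exports", "accessible": True, "format": "CSV"},
        {"name": "call transcripts", "accessible": False, "format": "unknown"},
    ],
}

print(readiness(pain_point["sources"]))  # partially ready
```

Even a rough label like this is enough to separate candidates worth exploring from those blocked on data before any scoring happens.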

Once you understand the problems and the data behind them, move to simple scoring and risk tiering. Keep the scale simple: high, medium, low. Avoid complex formulas - you’re trying to guide a decision, not run a science experiment. Which option has meaningful impact? Which is feasible? Which carries manageable risk? This step isn’t about perfection, it’s about visibility and shared understanding.

And then the most important step: choose one. One idea. One direction to test. One candidate to take forward quickly. A long list feels productive, but focus is what creates momentum. The rest can sit on a backlog until you have evidence from the first experiment.
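The scoring and “choose one” steps above can be sketched in a few lines. The candidate ideas, their ratings and the weighting are invented for the example; the only point is that a high/medium/low scale, totalled simply, is enough to surface one candidate and push the rest to a backlog.

```python
# Illustrative high/medium/low scoring to pick one candidate to test.
# Ideas and ratings here are hypothetical workshop outputs.

LEVELS = {"high": 3, "medium": 2, "low": 1}

def score(idea):
    # Higher impact and feasibility are better; lower risk is better,
    # so risk is inverted before adding.
    return (LEVELS[idea["impact"]]
            + LEVELS[idea["feasibility"]]
            + (4 - LEVELS[idea["risk"]]))

ideas = [
    {"name": "Auto-triage support tickets", "impact": "high", "feasibility": "medium", "risk": "low"},
    {"name": "Forecast customer churn", "impact": "high", "feasibility": "low", "risk": "medium"},
    {"name": "Summarise meeting notes", "impact": "medium", "feasibility": "medium", "risk": "low"},
]

ranked = sorted(ideas, key=score, reverse=True)
first, backlog = ranked[0], ranked[1:]
print(first["name"])  # the single candidate to take forward
```

Anything fancier than this tends to invite the opinion-based debates the structure is meant to avoid.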

The strength of this approach comes from its discipline. Tight facilitation keeps the session grounded. Structured steps reduce opinion based debates. Within two hours, you can get everyone aligned around a single, high confidence starting point - something that can be tested in days, not months.

At Studio Graphene, we’ve found this method can really help teams get started with AI. It works because it’s practical, rooted in real data and real problems, and gives people confidence - the decisions are transparent and the next steps are clear. It’s about replacing AI over-expectation with steady, measurable progress, not prescribing a one-size-fits-all approach.

A well-run discovery workshop is about choosing a direction you can test, learn from and scale. Start small, stay grounded, keep it structured - and let evidence guide the rest. With the right approach, two hours is more than enough to find a high-ROI starting point and build momentum for your wider AI programme.
