Agile ♡ AI · Technical

Your Iteration Speed Is Growing. Your Process Isn't.

If one technical product lead can prototype, iterate, and ship in hours, a two-week inspect-and-adapt cycle is too coarse to catch problems early. Here's how cadence, ceremonies, and metrics need to adjust.

Kaspar Eding · April 2026 · 6 min read

The Scrum Guide has always allowed sprints as short as one day and encouraged continuous delivery within each sprint. Sprints are an inspect-and-adapt cadence, not a delivery cadence.

But in most organisations, two-week sprints became the de facto delivery cadence — not because the framework prescribed it, but because human implementation capacity made it the practical floor. Teams evolved from monthly releases, and two weeks felt fast.

AI removes that floor. And when you remove the floor, you need to recalibrate everything above it.

The sprint cadence question

My direct assessment:

  • Monthly sprints are usually too slow for any AI-first software work.
  • Two-week sprints may still work in regulated, highly interdependent, or enterprise-heavy environments — or when a team is still learning to work with AI safely.
  • Small AI-enabled product teams generally benefit from a rolling weekly cadence or continuous flow, returning to sprints for planning and governance, not delivery.

Two-week sprints still earn their place when many people must coordinate across teams, when external dependencies are strong, when governance approval is structurally slow, or when the organisation needs predictable review windows. Don't abandon them for the sake of it.

But avoid defaulting to them when one or a few technical product creators can ship continuously, stakeholders can review asynchronously, deployment is automated, and the cost of waiting exceeds the value of batching.

Too many teams still use Scrum as a delivery calendar rather than a learning system. AI gives us the means to use it the way it was always intended.

The daily scrum: from status to decision

The 15-minute daily sync was always a planning event for the next 24 hours — not a status round-robin. The round-robin is an antipattern Scrum never prescribed. But it's dominant in practice.

In AI-first teams the daily conversation shifts toward:

  • What changed in production, evals, tests, or customer signals?
  • What is blocked by a decision, not a task?
  • Which agent-generated work needs human review today?
  • Are we accumulating hidden quality or architecture debt?

In practice: a written update feed by default, automated tooling providing real-time visibility. Working hours go to slow, judgment-first work — reviewing generated output, correcting guardrails, confirming what's ready to ship, defining the next experiments. Live syncs only for blockers, dependencies, and decisions that can't be resolved async.
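One way to make "live syncs only for blockers and decisions" concrete is to triage the written feed automatically. A minimal sketch, assuming a hypothetical update format where each entry carries free-form tags (the tag names and routing rules here are illustrative, not from any specific tool):

```python
# Route written daily updates so synchronous time is spent only on
# blockers, dependencies, and decisions; everything else stays async.
# Tag vocabulary is an assumption for illustration.
LIVE_TAGS = {"blocked-on-decision", "dependency", "needs-live-review"}

def triage(updates: list[dict]) -> dict[str, list[str]]:
    """Split the feed into 'async' (read-only) and 'live' (discuss) buckets."""
    buckets: dict[str, list[str]] = {"async": [], "live": []}
    for u in updates:
        bucket = "live" if LIVE_TAGS & set(u["tags"]) else "async"
        buckets[bucket].append(u["text"])
    return buckets

updates = [
    {"text": "evals regressed on checkout flow", "tags": ["fyi"]},
    {"text": "pricing change blocked on legal sign-off",
     "tags": ["blocked-on-decision"]},
]
print(triage(updates)["live"])  # only the blocked item needs a live sync
```

The point is not the ten lines of code but the default: nothing reaches a calendar unless it is explicitly tagged as needing a decision.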

The metrics mismatch

The Scrum Guide focuses on delivering valuable increments — not velocity or story points. But many implementations became obsessed with those proxies. AI makes this worse: as AI inflates raw output, velocity becomes actively misleading. A team generating twice as many tickets isn't delivering twice the value; it may be generating twice the technical debt.

Stop optimising for:

  • Completed ticket count
  • Story points
  • Sprint burn-down
  • Man-hours spent

Start measuring:

  • Lead time: idea → validated production change
  • Change failure rate
  • User outcome improvement
  • Experiment learning rate
  • Rework rate on AI-generated artifacts

The underlying shift: optimise for validated throughput, not raw output. The question is not how many things shipped, but how many hypotheses were tested — and what was learned.
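These metrics fall out of an event log you almost certainly already have. A minimal sketch, assuming a hypothetical per-change record (field names are illustrative, not from any specific tracker):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record tracking one change from idea to production.
@dataclass
class Change:
    idea_at: datetime       # when the hypothesis or ticket was raised
    validated_at: datetime  # when the change was confirmed in production
    failed: bool            # caused an incident or rollback?
    ai_generated: bool      # produced primarily by an agent?
    reworked: bool          # needed substantive human rework?

def lead_time(changes: list[Change]) -> timedelta:
    """Median idea → validated-production-change time."""
    deltas = sorted(c.validated_at - c.idea_at for c in changes)
    return deltas[len(deltas) // 2]

def change_failure_rate(changes: list[Change]) -> float:
    """Share of changes that failed in production."""
    return sum(c.failed for c in changes) / len(changes)

def rework_rate(changes: list[Change]) -> float:
    """Share of AI-generated changes that needed human rework."""
    ai = [c for c in changes if c.ai_generated]
    return sum(c.reworked for c in ai) / len(ai) if ai else 0.0
```

Note what is absent: nothing here counts tickets or points. A team that doubles its ticket count while its rework rate climbs is getting worse, and these three numbers will say so.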

A practical cadence model

A strong default for AI-enabled teams past early adoption:

  • Continuous — Delivery. Increment production. AI agents work asynchronously. Stewards review and approve by risk level.
  • Weekly — Prioritisation review (30–60 min). What matters now. Commitment sync. Rolling sprint if needed.
  • Weekly — Stakeholder review. Show meaningful changes and user learnings. Async-first, live when the decision warrants it.
  • Bi-weekly / monthly — Retrospective. Focus on quality, speed, agent workflow health, and organisational friction, not output counts.
  • Monthly — Outcome review. Impact assessment. What did we actually validate?
  • Quarterly — Strategy and architecture review. Major priorities, platform choices, technical debt themes, governance health.
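"Stewards review and approve by risk level" can be a policy in code rather than a judgment made per change. A minimal sketch, assuming hypothetical path conventions and thresholds (the risk signals and review routes are assumptions, not a prescribed policy):

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # docs, copy, cosmetic changes
    MEDIUM = "medium"  # feature code with test coverage
    HIGH = "high"      # auth, billing, schema migrations

def classify(paths: list[str], touches_migration: bool) -> Risk:
    """Classify an agent-generated change by what it touches.

    Directory names are illustrative conventions, not a standard.
    """
    sensitive = ("auth/", "billing/", "migrations/")
    if touches_migration or any(p.startswith(sensitive) for p in paths):
        return Risk.HIGH
    if all(p.endswith((".md", ".txt")) for p in paths):
        return Risk.LOW
    return Risk.MEDIUM

def review_route(risk: Risk) -> str:
    """Map risk level to the review a change must clear before deploy."""
    return {
        Risk.LOW: "auto-merge after CI",
        Risk.MEDIUM: "async steward approval",
        Risk.HIGH: "live review before deploy",
    }[risk]
```

Codifying the gate is what lets delivery stay continuous: low-risk agent output flows straight through CI, and human attention is reserved for the changes where it actually reduces risk.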

The key principle: the Scrum Guide was already trying to separate delivery cadence from inspect-and-adapt cadence. AI makes that separation practical for more teams. Keep the loops; make them faster where the constraint allows it.