The Scrum Guide has always allowed sprints as short as one day and encouraged continuous delivery within each sprint. Sprints are an inspect-and-adapt cadence, not a delivery cadence.
But in most organisations, two-week sprints became the de facto delivery cadence — not because the framework prescribed it, but because human implementation capacity made it the practical floor. Teams evolved from monthly releases, and two weeks felt fast.
AI removes that floor. And when you remove the floor, you need to recalibrate everything above it.
The sprint cadence question
My direct assessment:
- Monthly sprints are too slow for almost any AI-first software work.
- Two-week sprints may still work in regulated, highly interdependent, or enterprise-heavy environments — or when a team is still learning to work with AI safely.
- Small AI-enabled product teams generally benefit from a rolling weekly cadence or continuous flow, returning to sprints for planning and governance, not delivery.
Two-week sprints still earn their place when many people must coordinate across teams, when external dependencies are strong, when governance approval is structurally slow, or when the organisation needs predictable review windows. Don't abandon them for the sake of it.
But avoid defaulting to them when one or a few technical product creators can ship continuously, stakeholders can review asynchronously, deployment is automated, and the cost of waiting exceeds the value of batching.
Too many teams are still using Scrum as a delivery calendar, not as a learning system. AI gives us the tools to use it the way it was always designed.
The daily scrum: from status to decision
The 15-minute daily sync was always a planning event for the next 24 hours, not a status round-robin. The round-robin is an antipattern Scrum never prescribed, yet it remains dominant in practice.
In AI-first teams the daily conversation shifts toward:
- What changed in production, evals, tests, or customer signals?
- What is blocked by a decision, not a task?
- Which agent-generated work needs human review today?
- Are we accumulating hidden quality or architecture debt?
In practice: a written update feed by default, automated tooling providing real-time visibility. Working hours go to slow, judgment-first work — reviewing generated output, correcting guardrails, confirming what's ready to ship, defining the next experiments. Live syncs only for blockers, dependencies, and decisions that can't be resolved async.
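One way to make the written-update-by-default idea concrete is to treat the daily update as structured data rather than a meeting. The sketch below is illustrative only: the `DailyUpdate` class and its fields are invented for this example, not part of any tool mentioned here. The point it demonstrates is the escalation rule from the paragraph above: a live sync happens only when a decision is blocked.

```python
from dataclasses import dataclass, field


@dataclass
class DailyUpdate:
    """An async daily update focused on decisions, not status.

    All field names here are hypothetical, chosen to mirror the
    questions an AI-first daily conversation centres on.
    """
    production_changes: list[str] = field(default_factory=list)   # what changed in prod/evals/tests
    blocked_decisions: list[str] = field(default_factory=list)    # blocked by a decision, not a task
    review_queue: list[str] = field(default_factory=list)         # agent-generated work awaiting review

    def needs_live_sync(self) -> bool:
        # Escalate to a live call only for blocked decisions;
        # everything else stays asynchronous.
        return bool(self.blocked_decisions)

    def render(self) -> str:
        # Render the update as a plain-text feed entry.
        lines = ["Daily update"]
        lines += [f"Changed: {c}" for c in self.production_changes]
        lines += [f"Blocked on decision: {b}" for b in self.blocked_decisions]
        lines += [f"Needs human review: {r}" for r in self.review_queue]
        return "\n".join(lines)
```

A team posting these to a shared channel gets the same visibility as a stand-up, while reserving synchronous time for the blockers the `needs_live_sync` check surfaces.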
The metrics mismatch
The Scrum Guide focuses on delivering valuable increments — not velocity or story points. But many implementations became obsessed with those proxies. AI makes this worse: as AI inflates raw output, velocity becomes actively misleading. A team generating twice as many tickets isn't delivering twice the value; it may be generating twice the technical debt.
The underlying shift: optimise for validated throughput, not raw output. The question is not how many things shipped, but how many hypotheses were tested — and what was learned.
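The gap between raw output and validated throughput can be shown with a few lines of arithmetic. This is a minimal sketch with invented fields (`shipped`, `hypothesis_tested`), not a prescribed metric: it simply counts work items whose hypothesis was actually tested, regardless of how many items merely shipped.

```python
def validated_throughput(items: list[dict]) -> int:
    """Count items where a hypothesis was tested and a learning recorded,
    independent of whether the item shipped."""
    return sum(1 for it in items if it.get("hypothesis_tested"))


# Hypothetical sprint data for illustration.
items = [
    {"id": 1, "shipped": True,  "hypothesis_tested": True},   # shipped, and we learned something
    {"id": 2, "shipped": True,  "hypothesis_tested": False},  # output with no learning
    {"id": 3, "shipped": True,  "hypothesis_tested": False},  # output with no learning
    {"id": 4, "shipped": False, "hypothesis_tested": True},   # spike: nothing shipped, real learning
]

raw_output = sum(1 for it in items if it["shipped"])  # velocity-style count: 3
learning = validated_throughput(items)                # hypotheses tested: 2
```

Velocity reports 3 here; only 2 hypotheses were tested, and one of those never shipped at all. AI-inflated output widens exactly this gap.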
A practical cadence model
A strong default for AI-enabled teams past early adoption: continuous or weekly delivery, asynchronous stakeholder review, and sprint-length loops reserved for planning, retrospection, and governance rather than release batching.
The key principle: the Scrum Guide was already trying to separate delivery cadence from inspect-and-adapt cadence. AI makes that separation practical for more teams. Keep the loops; make them faster where the constraint allows it.
AI ♡ Agile — Article Series
- The Agile Coach Is Back (start here)
- How Roles Change in an AI-First Scrum Team
- → Your Iteration Speed Is Growing. Your Process Isn't. (this article)
- Treat Scope Differently in the AI Era
- AI Demands Adjustment on Your Scrum — Full Guide