PSM-AI Essentials: Human Review, Bias, and Definition of Done

Study human review, bias, and the Definition of Done for PSM-AI Essentials: key concepts, common traps, and exam decision cues.

Responsible AI use in Scrum means humans stay accountable for decisions, outputs, and quality. The exam often tests whether you can connect AI review expectations to the team’s real quality controls, including the Definition of Done.

Review obligations

| Area | Stronger expectation |
| --- | --- |
| Accuracy | Check important claims before acting |
| Bias and fairness | Inspect for skewed assumptions or harmful patterns |
| Definition of Done | Include review steps when AI-generated output affects the Increment |
| Accountability | The team remains accountable, not the tool |

When the Definition of Done should change

| Situation | Stronger response |
| --- | --- |
| AI drafts content that will be published or shipped | Add an explicit human review step if one is not already implied |
| AI helps with internal brainstorming only | Formal Done changes may not be needed |
| AI-generated work affects customer-facing behavior | Tighten review, traceability, and acceptance checks |
| AI output influences a release decision | Require named human accountability before acting |

What this looks like in practice

If AI contributes code, test cases, documentation, or analysis that affects the Increment, the team may need explicit review rules in its quality system. The stronger answer does not assume AI-generated work is automatically trustworthy just because it was produced quickly.
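
One way to make those review rules explicit is to record them as data the team can inspect, so "review happened" is a verifiable fact rather than an assumption. The Python sketch below is a hypothetical illustration, not a prescribed tool or a Scrum.org API; every name in it (Artifact, REQUIRED_CHECKS, meets_done) is invented for this example.

    from dataclasses import dataclass, field

    # Hedged sketch: all names below are invented for illustration.

    @dataclass
    class Artifact:
        name: str
        ai_generated: bool
        affects_increment: bool
        human_reviewer: str | None = None      # a named person, never "the tool"
        checks_passed: list[str] = field(default_factory=list)

    # The team's explicit extra checks for AI-assisted work that reaches the Increment.
    REQUIRED_CHECKS = ["accuracy", "bias_and_fairness", "acceptance_criteria"]

    def meets_done(artifact: Artifact) -> bool:
        """AI-generated work that affects the Increment needs a named human
        reviewer and all required checks; other work follows the normal
        Definition of Done (represented here as simply passing through)."""
        if artifact.ai_generated and artifact.affects_increment:
            return artifact.human_reviewer is not None and all(
                check in artifact.checks_passed for check in REQUIRED_CHECKS
            )
        return True

    notes = Artifact("release notes", ai_generated=True, affects_increment=True)
    assert not meets_done(notes)   # no named reviewer yet, so not Done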

    flowchart LR
        A["AI-generated output"] --> B{"Affects the Increment or a release decision?"}
        B -->|Yes| C["Apply explicit human review and quality checks"]
        C --> D{"Meets Definition of Done?"}
        D -->|Yes| E["Only then treat it as usable"]
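
The same gate can be enforced mechanically in a delivery pipeline. Below is a minimal, hypothetical Python sketch that assumes an invented convention: change descriptions carry an "AI-assisted: yes" marker and must also carry a "Reviewed-by:" trailer naming a human. Real teams would adapt the convention to their own tooling; the point is that the check is explicit, not implied.

    import re

    # Hypothetical CI gate: the "AI-assisted" marker and "Reviewed-by" trailer
    # are invented conventions for this sketch, not an established standard.
    def review_gate(change_description: str) -> tuple[bool, str]:
        ai_assisted = "AI-assisted: yes" in change_description
        reviewer = re.search(r"^Reviewed-by: .+$", change_description, re.MULTILINE)
        if ai_assisted and reviewer is None:
            return False, "AI-assisted change requires a named human reviewer"
        return True, "ok"

    message = "Add retry logic to payment client\n\nAI-assisted: yes\n"
    ok, reason = review_gate(message)
    assert not ok and reason.endswith("human reviewer")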

Example

AI drafts release notes that omit a known limitation. If the team publishes them without review, the issue is not just poor communication. It is a failure of human accountability and quality control.
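
A lightweight check can support that review without replacing it. The hypothetical Python sketch below flags known limitations missing from a draft; the limitation list and the substring-matching rule are invented for this example, and a human still decides what to do with the result.

    # Hypothetical reviewer-support check; the limitation list and the
    # matching rule are invented for this example.
    KNOWN_LIMITATIONS = [
        "does not support bulk export",
        "rate limited to 100 requests per minute",
    ]

    def missing_limitations(draft: str) -> list[str]:
        draft_lower = draft.lower()
        return [lim for lim in KNOWN_LIMITATIONS if lim not in draft_lower]

    draft = "v2.1 adds bulk import and faster search."
    gaps = missing_limitations(draft)
    if gaps:
        print("Draft omits known limitations:", gaps)   # a human resolves this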

Exam scenario

A team uses AI to draft test cases and release notes. The Scrum Master argues that because the team members did not write the first draft themselves, those outputs do not belong in the existing Definition of Done. The stronger answer is the opposite: if the output affects the Increment, the team still owns the quality threshold and may need to make the review step more explicit.

Common pitfalls

  • Assuming the tool vendor owns the risk once a team uses AI.
  • Treating human review as optional for low-effort outputs.
  • Using AI-generated work while leaving the Definition of Done unchanged.
  • Confusing bias awareness with bias elimination.

Sample exam question

When AI-generated output is used in work that affects the Increment, what is the strongest Scrum response?

A. Ensure review and quality controls still satisfy the team’s Definition of Done
B. Accept the output if it saves enough Sprint capacity
C. Shift accountability to the tool because the team did not write the first draft
D. Avoid any AI use because Definition of Done cannot apply to generated work

Best answer: A

Why: Scrum keeps accountability and quality within the team, so AI use must still fit the team’s real Done expectations.

Why the others are weaker: B, C, and D either weaken accountability or overreact in ways the exam does not support.

Revised on Monday, April 27, 2026