Study guide for CAPM quality reviews and control artifacts: key concepts, common traps, and exam decision cues.
Predictive control depends on evidence, not optimism. CAPM often rewards the answer that checks the right artifact, reads the right quality signal, and documents the right follow-up instead of treating project control as a vague status conversation.
In predictive work, quality is not only about testing at the end. It is part of monitoring and controlling throughout delivery. The project team needs evidence that deliverables meet requirements, acceptance criteria, and defined quality metrics. If that evidence is weak, the project manager should not rely on confidence or verbal reassurance. The project manager should look at the actual record of inspections, reviews, test results, defects, and corrective actions.
That is why CAPM often tests quality together with control artifacts. Quality results are not decorative. They drive decisions about issue follow-up, risk exposure, corrective action, and sometimes even change evaluation if the solution approach itself must be modified.
CAPM still expects you to know the basic distinction between quality assurance and quality control:
| Concept | Main purpose | Typical signal |
|---|---|---|
| Quality assurance (QA) | Improve the process used to create deliverables | Audits, process reviews, adherence to standards |
| Quality control (QC) | Evaluate the actual output against requirements | Inspections, testing, defect records, acceptance results |
QA asks whether the team is following a sound way of working. QC asks whether the output is acceptable. CAPM questions often reward the candidate who recognizes that a repeated defect pattern is not just an isolated output problem. It may also signal a process weakness.
A quality management plan usually gives the team the rules for how quality will be managed, checked, and reported. CAPM does not normally ask for advanced plan authoring, but it often expects you to know that the plan can define:

- the quality standards and metrics the deliverables must meet
- roles and responsibilities for quality activities
- the planned approach to reviews, inspections, and testing
- how quality results, nonconformance, and corrective actions are documented and reported
This matters because the strongest answer in a scenario is often the one that follows the planned quality approach instead of improvising a new control method midstream.
Predictive projects use multiple control records because different signals require different treatment.
| Artifact | Main question it answers | Typical use |
|---|---|---|
| Issue log | What active problem needs action now? | Record ownership, status, priority, and resolution path |
| Risk register | What uncertainty could affect the project? | Track probability, impact, owner, response, and monitoring |
| Change log | What requested change is pending, approved, or rejected? | Preserve traceability over formal requests and decisions |
| Quality records | Does the work meet required standards? | Capture inspection, testing, and acceptance evidence |
| Status or variance report | What performance signals need visibility? | Summarize schedule, cost, quality, or delivery status |
| Lessons learned register | What should the team repeat, change, or avoid? | Capture insight during the project, not only at closure |
The exam often uses traps where candidates collapse these records into one generic note. CAPM usually rewards keeping them distinct.
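For candidates who find structure easier to see as data, here is a minimal sketch (illustrative only; the class and field names are assumptions, not a PMI-defined schema) of why collapsing these records into one generic note loses information: each artifact answers a different question, so each record carries different fields.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class and field names are assumptions,
# not a PMI-defined schema. The point is that each artifact answers a
# different question, so each record carries different fields; a single
# generic note cannot hold all of them.

@dataclass
class IssueLogEntry:          # What active problem needs action now?
    description: str
    owner: str
    priority: str             # e.g. "high", "medium", "low"
    status: str               # e.g. "open", "in progress", "resolved"
    resolution_path: str

@dataclass
class RiskRegisterEntry:      # What uncertainty could affect the project?
    description: str
    probability: str
    impact: str
    owner: str
    planned_response: str     # e.g. avoid, mitigate, transfer, accept

@dataclass
class ChangeLogEntry:         # What requested change is pending, approved, or rejected?
    request: str
    requested_by: str
    decision_status: str      # "pending", "approved", or "rejected"

@dataclass
class QualityRecord:          # Does the work meet required standards?
    deliverable: str
    inspection_result: str
    defects_found: int
    acceptance_status: str
```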
The control mindset is not “collect records and move on.” It is “use evidence to choose the next action.”
```mermaid
flowchart LR
A["Quality evidence or control signal"] --> B["Choose the right artifact"]
B --> C["Assign owner and response"]
C --> D["Track status and results"]
D --> E["Adjust process, work, or escalation if needed"]
```
Strong predictive control usually looks like this:

- defects and decisions are documented clearly, not left to memory or verbal reassurance
- each signal is recorded in the artifact designed for it, with an owner and a status
- items are tracked to closure rather than noted once and forgotten
- lessons learned are captured during the project, not only at closure
Weak control usually does the opposite. The team notices defects but does not document them clearly, tracks everything in one vague spreadsheet, or waits until the end of the project to capture learning.
If a supplier misses a committed delivery date, that is usually an issue because it is happening now. If a new regulatory requirement may affect a later release, that is usually a risk until it becomes an actual current problem. If a stakeholder asks to add a new dashboard, that belongs in the change log and impact-analysis path. If an inspection finds repeated defects, those findings belong in quality records, and the associated active problem may also appear in the issue log.
This is why artifact choice matters. The record is not just a filing decision. It shapes how the team follows up.
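One hedged way to see that decision cue is to write it out as a tiny routine. This is not an official CAPM rule, just a sketch of the "happening now versus possible later versus requested versus evidence" test, with assumed labels for the artifacts.

```python
def choose_artifact(is_inspection_evidence: bool,
                    is_requested_change: bool,
                    is_happening_now: bool,
                    is_possible_future_event: bool) -> str:
    """Illustrative decision cue only; the labels and ordering are
    assumptions, not an official CAPM rule. It mirrors the examples
    above: inspection findings go to quality records, a feature request
    goes to the change log, a missed delivery is an issue, and a
    possible future compliance problem is a risk."""
    if is_inspection_evidence:
        return "quality records (plus the issue log if there is an active problem)"
    if is_requested_change:
        return "change log and impact-analysis path"
    if is_happening_now:
        return "issue log"
    if is_possible_future_event:
        return "risk register"
    return "status or variance report"


# Example: the supplier has already missed a committed delivery date.
print(choose_artifact(False, False, True, False))  # -> "issue log"
```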
Inspection results show that data-validation defects are appearing in every test cycle. A weak response is to tell the team to be more careful next time. A stronger response is to:

- record the inspection findings in the quality records so the pattern is visible as evidence, not anecdote
- log the recurring problem in the issue log with an owner, a priority, and a resolution path
- analyze the root cause, since a defect that repeats every cycle may point to a process weakness rather than a one-off mistake
- implement a corrective action and track it to closure
- check in the next test cycle whether the defect pattern actually improves
CAPM usually rewards this structured response because it uses evidence rather than opinion.
When CAPM gives you several developments at once, ask:

- Is this happening now, or is it something that might happen later?
- Is someone requesting a change to scope, approach, or deliverables?
- Is this evidence about whether the work meets requirements?
- Which artifact is designed to hold this record, and who owns the follow-up?
That sequence usually leads to the strongest control answer.
Scenario: A predictive project faces four developments at once: a supplier has already missed a committed delivery date, a future compliance risk has been identified but has not yet occurred, a stakeholder requests an added feature, and repeated defects are found during quality inspection. The project manager wants traceable control rather than one vague status note.
Question: How should the team record those four developments?
Best answer: record each development in the artifact designed for it, keeping the four records distinct and traceable.
Explanation: CAPM usually rewards disciplined control. The missed delivery belongs in the issue log, the possible future compliance problem belongs in the risk register, the feature request belongs in the change log and approval path, and the inspection findings belong in quality records with any associated corrective action tracked appropriately. Those items are related, but they are not the same thing.
Why the other common approaches are weaker:

- Collapsing all four developments into one generic status note loses ownership, priority, and follow-up detail, which is exactly the trap the exam likes to set.
- Parking everything in the risk register treats an active issue, a pending change request, and inspection evidence as if they were all future uncertainties, so each item gets the wrong follow-up path.
- Waiting until the next report or until project closure to document the items delays action and sacrifices traceability.