Metal Detector / X-ray Verification Check Log + Test Piece Control Register (with templates)

If you’re searching for a metal detector verification check log template (or an X-ray verification check sheet), you’re probably in one of two situations:

  1. you’re tightening controls ahead of an audit, or
  2. something went wrong on the line and you don’t want a repeat.

Either way, the goal is the same: prove your detection step is working today, on this product, on this line, with records that hold up under scrutiny.

This post gives you a practical, BRCGS-style evidence pack you can implement quickly:

  • a Verification Check Log (metal detector and/or X-ray)
  • a Test Piece Control Register (so test pieces don’t become the foreign body)
  • a simple, one-page control procedure you can train in minutes

Download: Verification Check Log + Test Piece Control Register.


Verification vs validation (plain English)

These words get thrown around like everyone was born knowing them. They weren’t.

Validation answers: “Can the system detect what we say it can detect?”
This is the commissioning/capability proof. It’s linked to risk assessment, product effect, packaging format, settings, and acceptance criteria.

Verification answers: “Is it still working correctly, right now?”
This is your routine check regime: start-up checks, interval checks, changeovers, post-cleaning, post-maintenance, end-of-run checks, and documented actions when something fails.

Auditors love verification because it shows ongoing control, not a one-time setup.


What auditors usually want to see (the “evidence pack”)

You don’t need theatrics. You need clean, consistent proof:

  • Defined check frequency (and the reason for it, usually risk-based)
  • A consistent method (who checks, how, and where it’s recorded)
  • Defined test pieces / challenge pieces (what they are, what they represent)
  • Clear pass/fail criteria and escalation steps
  • Reject confirmation (reject happened and reject was controlled)
  • Test piece control (issue/return, storage, condition, replacement)
  • Review and sign-off (supervisor/QA checks the checks)

If any of those are missing, you tend to see the same nonconformances: incomplete logs, “tick-box” checks, uncontrolled test pieces, and weak corrective action trails.


The Verification Check Log: what to record (and why)

A verification log should do two things:

  1. make it easy for operators to do the right thing fast
  2. make it hard for auditors to find holes

What a good log includes

Record these as standard fields:

  • Line / asset ID / location (no “which Line 3?” confusion)
  • Product / SKU group / pack format (because sensitivity can change)
  • Machine type: metal detector or X-ray (and model if you like)
  • Settings profile name (so you can prove the correct recipe was used)
  • Check type: start-up / hourly / changeover / post-clean / post-maintenance / end-of-run
  • Test pieces used:
    • Metal: typically ferrous / non-ferrous / stainless steel
    • X-ray: site-appropriate challenge pieces (product/pack dependent)
  • Results: pass/fail for each challenge
  • Reject confirmation: did it actually reject, and was the reject secured?
  • Corrective action (only needed if fail, but it must be explicit)
  • Sign-off: operator + supervisor/QA review
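If you track these checks electronically, the standard fields above map naturally onto a simple record structure. A minimal sketch in Python (field names and the pass rule are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class VerificationCheck:
    """One row of the verification check log (illustrative field names)."""
    line_id: str            # line / asset ID / location
    product_group: str      # product / SKU group / pack format
    machine_type: str       # "metal_detector" or "xray"
    settings_profile: str   # proves the correct recipe was used
    check_type: str         # start-up / hourly / changeover / post-clean / ...
    results: dict           # e.g. {"ferrous": "pass", "stainless": "fail"}
    reject_confirmed: bool  # did it actually reject, and was the reject secured?
    operator: str
    checked_at: datetime = field(default_factory=datetime.now)
    corrective_action: Optional[str] = None  # must be explicit on any fail
    reviewed_by: Optional[str] = None        # supervisor/QA sign-off

    def passed(self) -> bool:
        """Passes only if every challenge passed AND the reject was confirmed."""
        return all(r == "pass" for r in self.results.values()) and self.reject_confirmed
```

The point of `passed()` is the second condition: a detection that didn’t produce a confirmed, secured reject is still a fail.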

Suggested check points (typical pattern)

Keep this risk-based and site-specific, but most plants end up with a rhythm like:

  • start-up / after start of shift
  • at defined intervals during run
  • after changeover or settings change
  • after cleaning / sanitation
  • after maintenance or downtime
  • end of run

The important part is not the exact frequency. It’s that the frequency is justified, consistently applied, and reviewed.
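If you schedule checks electronically, the rhythm above amounts to a set of event triggers plus a run-time interval. A minimal sketch (the event names and one-hour interval are illustrative; your frequencies must come from your own risk assessment):

```python
from datetime import datetime, timedelta

# Events that always trigger a verification check (illustrative names)
EVENT_TRIGGERS = {"start_up", "changeover", "post_clean",
                  "post_maintenance", "end_of_run"}

def check_due(event, last_check_at, now, interval=timedelta(hours=1)):
    """A check is due on any trigger event, or when the defined
    in-run interval has elapsed since the last check."""
    if event in EVENT_TRIGGERS:
        return True
    return (now - last_check_at) >= interval
```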


Test pieces: the uncomfortable truth

Test pieces are vital. They are also a foreign body risk if they’re not controlled.

Most “lost test piece” incidents aren’t dramatic. They’re mundane:

  • someone puts it in a pocket “for a second”
  • it sits on a machine ledge and falls into an open product zone
  • it gets wrapped in a cloth during cleaning
  • it migrates into a toolbox during maintenance
  • it gets damaged, cracks, or degrades and nobody notices

So yes: you need test pieces. But you also need a system for them.


The Test Piece Control Register: the part most sites underbuild

This is where quick-win ROI lives. Many sites log verification checks but can’t convincingly prove test piece control.

What the register should contain

Give each test piece a unique identity and a boring, reliable paper trail:

  • Unique ID (serial / batch / internal ID)
  • Type (Fe / non-Fe / stainless / X-ray challenge)
  • Spec (size/material or site-defined description)
  • Assigned line / zone
  • Storage location (controlled, labelled)
  • Issued to / returned by (name + time/date)
  • Condition checks (wear, damage, contamination, deformation)
  • Replacement date / retired date
  • Missing test piece escalation (recorded outcome)
  • Disposal record (how it was taken out of service)

This turns test pieces from “mystery objects that exist somewhere” into controlled tools with accountability.
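To show how issue/return control works in practice, here is a minimal sketch of the sign-out logic (method and field names are illustrative; a paper register that enforces the same rules works just as well):

```python
from datetime import datetime

class TestPieceRegister:
    """Tracks unique test pieces: who has each one, and when it came back."""

    def __init__(self):
        self.pieces = {}  # piece_id -> {"type", "line", "issued_to", "issued_at"}

    def register_piece(self, piece_id, piece_type, line):
        self.pieces[piece_id] = {
            "type": piece_type, "line": line,
            "issued_to": None, "issued_at": None,
        }

    def issue(self, piece_id, operator):
        piece = self.pieces[piece_id]
        if piece["issued_to"] is not None:
            # One piece, one holder: no informal hand-offs
            raise ValueError(f"{piece_id} already signed out to {piece['issued_to']}")
        piece["issued_to"] = operator
        piece["issued_at"] = datetime.now()

    def return_piece(self, piece_id):
        piece = self.pieces[piece_id]
        piece["issued_to"] = None
        piece["issued_at"] = None

    def outstanding(self):
        """Pieces still signed out: candidates for missing-piece escalation."""
        return [pid for pid, p in self.pieces.items() if p["issued_to"] is not None]
```

The design choice that matters is `outstanding()`: at any moment you can list exactly which pieces have not come back, which is the trigger for your count-back and escalation steps.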


A simple Test Piece Control Procedure (one page, actually usable)

Write this as a one-page SOP and train it as a toolbox talk. Keep it blunt.

1) Storage

  • Store test pieces in a designated, labelled, controlled location
  • Keep the location close enough that checks are easy, but controlled enough that pieces don’t wander
  • Limit access to authorised roles

2) Issue / return

  • Test pieces are signed out and signed back in
  • No exceptions “just this once”
  • If a piece isn’t returned, it triggers the missing test piece procedure immediately

3) Use on the line

  • No pockets. No toolboxes. No “I’ll put it here for a second.”
  • Use a holder/pouch or a defined method so pieces aren’t free-floating
  • Perform the check the same way every time

4) Count-back + condition check

  • Immediately after use: confirm all pieces are present
  • Quick condition check: damage, wear, deformation, contamination
  • Record any issues and remove damaged pieces from service

5) Missing test piece escalation

This is the one everyone hopes they’ll never need. That’s why you must write it down:

  • Stop or hold affected product immediately
  • Search using a defined method (line, machine ledges, cleaning equipment, toolboxes)
  • Document the outcome, whether the piece is found or not
  • Involve QA for the disposition decision


What to do if a verification check fails (a decision tree that protects you)

When a verification check fails, the correct response is boring and disciplined.

Immediate actions

  • Stop the line (or stop affected product flow)
  • Hold product back to the last known good check (define this rule in your SOP)
  • Notify QA/technical authority per your escalation rules

Investigate and correct

  • Confirm the correct settings profile is loaded
  • Re-run challenge checks (repeatability matters)
  • Check reject device functionality and physical setup
  • Inspect belt speed, product presentation, pack format changes
  • If maintenance/cleaning occurred, review what changed

Release or disposal decision

Only after:

  • the system passes verification again, and
  • you have a documented decision trail for product since last good check

The win here is not “passing the check again”. The win is that you can prove control over potentially affected product.
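The “hold back to the last known good check” rule is simple to state but easy to get wrong under pressure. A small sketch of the hold-window calculation, assuming each logged check carries a timestamp and a pass/fail result (purely illustrative):

```python
from datetime import datetime

def hold_window(check_log, failed_at):
    """Return the start of the product hold: the time of the last passing
    check at or before the failure. If no prior pass exists, hold everything.

    check_log: list of (timestamp, passed) tuples, in any order.
    """
    prior_passes = [ts for ts, passed in check_log if passed and ts <= failed_at]
    if not prior_passes:
        return None  # no known-good point: hold all product on the line
    return max(prior_passes)  # hold everything produced after this time
```

Product made between `hold_window(...)` and the failure is held pending investigation and a documented release decision.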


Reject handling and confirmation (the silent audit trap)

A lot of sites record “Rejected: Yes” and think they’re done.

Auditors often probe:

  • Is the reject bin secured (tamper control)?
  • Is access controlled?
  • Is rejected product clearly segregated and accounted for?
  • Can you reconcile output vs rejects where applicable?
  • Is there a documented route for investigation and disposition?

Your verification log should include a simple reject confirmation tick plus a field for corrective action if reject handling is compromised.


Common nonconformances (and how this template prevents them)

“Checks recorded but not credible.”
Fix: log check type + time + product group + settings profile + signatures.

“Test pieces uncontrolled / no unique ID / no storage control.”
Fix: register + issue/return + condition checks + storage location.

“Fail occurred, but corrective action is vague.”
Fix: corrective action field prompts specifics (hold product, investigate cause, re-test, escalate, release decision).

“Reject device not verified.”
Fix: explicit reject confirmation field.


One-week implementation plan (realistic, not heroic)

Day 1: Map detection points (CCP/OPRP), lines, and product groups
Day 2: Agree verification frequencies based on risk assessment
Day 3: Identify and label test pieces; create unique IDs; set storage locations
Day 4: Launch the log + register on one pilot line
Day 5: Toolbox talk training (operators + supervisors)
Day 6: QA review routine: daily check of completed logs + exceptions
Day 7: Mini mock-audit: spot check logs, storage, sign-outs, escalation records

Then scale line-by-line.


Conclusion

Detection isn’t a “set it and forget it” feature — it’s a verification system made of people, process, and proof. The fastest way to strengthen that system (and make audits less stressful) is to standardise two things: how you record routine verification checks, and how you control test pieces so they never become the foreign body you’re trying to prevent.

Use the Verification Check Log to prove the machine is performing at the right moments (start-up, changeover, interval checks, post-clean, post-maintenance). Use the Test Piece Control Register to prove test pieces are identified, issued, returned, inspected for condition, and escalated if anything goes missing. Together, those two records create an “auditor-evidence pack” that’s simple for operators to follow and hard for auditors to poke holes in.

Download the templates, pilot them on one line for a week, and then roll them out line-by-line. You’ll tighten foreign body control, improve accountability, and build the kind of routine discipline that prevents incidents before they happen.


FAQs

How often should metal detector verification checks be done?

Set the frequency based on your risk assessment, but most sites use a routine like start-up, at defined intervals during the run, at changeovers, after cleaning, after maintenance/downtime, and end-of-run. The key is consistency and documented justification — not picking a number and hoping it passes audit.

What test pieces are required for metal detector checks?

Most metal detection verification checks use three test pieces: ferrous, non-ferrous, and stainless steel. Stainless is often the hardest to detect, which is why it’s usually included. The exact sizes/specs should be based on your line capability and product/pack effect.

Do X-ray machines need routine verification checks too?

Yes. X-ray detection should have routine verification just like metal detection — with checks scheduled at defined times and recorded in a log. The challenge pieces used for X-ray are typically product- and pack-dependent, so your method should match your validation and risk assessment.

What’s the difference between validation and verification for metal detection and X-ray?

Validation proves the system can detect what you claim (capability, commissioning, change control). Verification proves it’s still working correctly on the day — using routine checks, records, and defined escalation when something fails.

How do you control test pieces so they don’t become a foreign body risk?

Treat test pieces as controlled tools: unique ID, controlled storage location, sign-out/sign-in, condition checks, and a missing test piece escalation procedure. The Test Piece Control Register should show who had the test pieces, when they were used, when they were returned, and what happened if anything was damaged or missing.

What should you do if a verification check fails?

Stop (or control) the process, hold product back to the last known good check, escalate to QA/technical authority, investigate the cause (settings/profile, product presentation, reject device, post-maintenance changes), and only release product when you’ve restored verified control and recorded the decision trail.

What should you do if a test piece is missing?

Trigger your missing test piece escalation immediately: stop/hold affected product, search using a defined method, document the outcome, and involve QA for the disposition decision. Don’t “carry on and look later” — that’s how minor problems become major incidents.

What records should an auditor expect for detection verification?

At minimum: a completed verification check log (with times, results, corrective actions, and sign-off) and evidence of test piece control (register, issue/return records, storage, condition checks). Many sites also keep a simple failure incident form for trend analysis and CAPA.

How long should we keep verification and test piece control records?

Follow your site document retention policy and any customer/standard requirements. As a practical rule, keep records long enough to cover audit cycles, investigation needs, and product shelf-life considerations where relevant.

Who should sign off the verification logs?

Operators can complete routine checks, but sign-off should show supervisory and/or QA review at a defined frequency (e.g., daily review of logs, immediate review of exceptions). Auditors look for evidence that records are reviewed, not just completed.