The classify node decides which downstream branch each page belongs to. Reliable routing depends on choosing the right mode (Rules vs AI), keeping categories cleanly separated, and using the confidence threshold as a quality dial.

1. Pick the right mode for the signal you have

Each mode answers a different question:
  • Rules answer “does this page contain text X (or sit on page N)?”. Cheap, deterministic, perfect when you can describe each category in one or two phrases that are reliably present.
  • AI answers “which reference document does this page resemble?”. Slower, billed per page (plus a one-time per-page embedding fee when you upload references), perfect when categories vary in wording but share visual or semantic structure.
Reach for Rules first. Move to AI when:
  • Wording varies too much for a fixed phrase (e.g. invoices from many vendors).
  • You need to match by layout or visual structure, not text.
  • You can supply 3 to 10 representative reference documents per category in a forward trigger.

2. Make categories mutually exclusive

A page should match exactly one category. When categories overlap, a page lands in whichever rule happens to come first, and small tweaks to one rule break the others.
// Effective: each category has a unique anchor
- Invoice         → contains "Invoice #" AND contains "Amount Due"
- Statement       → contains "Statement of Account" AND contains "Period"
- Remittance      → contains "Remittance Advice"

// Problematic: criteria collide
- Invoice         → contains "Amount"
- Statement       → contains "Account"
- Remittance      → contains "Payment"
If two categories genuinely look similar, put the more specific rule first or add a disambiguating phrase to both.
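The first-match-wins behavior described above can be sketched in a few lines. This is a minimal illustration, not the product's actual rule syntax; the `contains` helper and the rule table are assumptions for the example.

```python
# Illustrative sketch of ordered, first-match-wins rule evaluation.
# The rule format here is made up; the real product configures rules in its UI.

def contains(page_text: str, phrase: str) -> bool:
    """Case-insensitive substring match on the page's extracted text."""
    return phrase.lower() in page_text.lower()

RULES = [
    # Each category is anchored by a phrase the other categories can't match.
    ("Invoice",    lambda t: contains(t, "Invoice #") and contains(t, "Amount Due")),
    ("Statement",  lambda t: contains(t, "Statement of Account") and contains(t, "Period")),
    ("Remittance", lambda t: contains(t, "Remittance Advice")),
]

def classify(page_text: str) -> str:
    for category, matches in RULES:
        if matches(page_text):
            return category   # first match wins; later rules are never consulted
    return "other"            # explicit fallback, never an implicit dead end

print(classify("Invoice #123, Amount Due: $50"))  # Invoice
```

Because evaluation stops at the first hit, swapping the order of two overlapping rules silently changes where pages go, which is why unique anchors matter.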

3. Write criteria the way you’d brief a human reviewer

Think of each rule’s criteria as instructions to a colleague: “Look for X, Y, and Z to identify this type of document.” Specific anchors that a reviewer would notice in a few seconds tend to be the same anchors a rule can match cleanly.
// Effective
- Declaration page (insurance) → contains "Declarations" AND contains "Policy Number"
- Explanation of Benefits      → contains "Explanation of Benefits" OR contains "EOB"

// Problematic
- Declaration page → contains "policy"
- Explanation of Benefits → contains "benefits"
The first version uses headings the document is required to print. The second matches half the documents in a typical pile.

4. Use enough categories, not too many

Too few categories: a single bucket hides important differences and forces downstream nodes to re-classify. Too many categories: rules collide and the AI mode burns credits comparing against irrelevant references. Practical guideline: define a category whenever the downstream pipeline behavior differs. If two document types go through the same extract schema and the same delivery, fold them into one category. Always include an explicit fallback (Rules) or rely on the other output (AI) so you never have to handle “didn’t match any rule” implicitly.

5. Tune the AI confidence threshold to your error tolerance

The default threshold is 0.7. Raise it when a wrong route is more expensive than a missed route; lower it when missing documents is the bigger cost. A useful workflow for tuning:
  1. Run a sample batch with the default threshold and look at the results in run detail.
  2. Note pages routed to the wrong category and pages dropped to other that you wish had matched.
  3. If wrong-category errors dominate, raise the threshold (e.g. to 0.8).
  4. If too many pages fall through to other, lower it (e.g. to 0.6) or add references to the categories that are missing matches.
  5. Iterate; treat the threshold as a configuration, not a constant.
The confidence score is most informative at the boundary. If most of your pages score above 0.9, the threshold barely matters; if most cluster between 0.6 and 0.75, small threshold changes flip a lot of routes.
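The trade-off behind steps 3 and 4 can be made concrete with a toy labeled batch. The records and scores below are invented for illustration; in practice you would pull them from the run detail view.

```python
# Hypothetical tuning sketch: count both error types at different thresholds.
# Each record: (predicted_category, confidence, human_labeled_truth) - made up.
sample = [
    ("Invoice",   0.91, "Invoice"),
    ("Statement", 0.72, "Invoice"),    # wrong route near the boundary
    ("Invoice",   0.66, "Invoice"),    # dropped to "other" at the 0.7 default
    ("Statement", 0.85, "Statement"),
]

def errors_at(threshold: float) -> tuple[int, int]:
    wrong_route = missed = 0
    for predicted, confidence, truth in sample:
        if confidence >= threshold:
            if predicted != truth:
                wrong_route += 1       # routed, but to the wrong branch
        elif truth != "other":
            missed += 1                # fell through although it had a home
    return wrong_route, missed

for t in (0.6, 0.7, 0.8):
    print(t, errors_at(t))
```

Raising the threshold trades wrong routes for misses and vice versa; the sweep makes the knee of that trade-off visible before you commit to a setting.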

6. Match references to real-world variety in AI mode

AI mode compares each page to the reference documents you upload via the target pipe’s forward trigger. Quality of references determines accuracy.
  • Provide 3 to 10 references per category, drawn from real production samples, not pristine examples.
  • Cover variants in vendor, language, layout, scan quality, and orientation.
  • Re-add references when you onboard a new sender or a sender changes their template.
A single template page won’t cover every variant; a varied set generalizes better than a “perfect” one.
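Why several references per category beat one perfect template can be sketched with toy embeddings. The 3-d vectors and the max-over-references scoring are assumptions for illustration; the product's real encoder and scoring are not documented here.

```python
# Illustrative similarity classification against per-category reference embeddings.
# Vectors are toy 3-d stand-ins for real document embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Several references per category: each captures one real-world variant.
REFERENCES = {
    "Invoice":   [(0.9, 0.1, 0.0), (0.8, 0.2, 0.1)],
    "Statement": [(0.1, 0.9, 0.1)],
}

def classify_by_similarity(page_vec, threshold=0.7):
    # Score each category by its best-matching reference, so one close
    # variant is enough to claim the page.
    scores = {cat: max(cosine(page_vec, ref) for ref in refs)
              for cat, refs in REFERENCES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "other"
```

A page resembling any one Invoice variant clears the threshold; a page unlike every reference falls to "other", which is exactly the signal the fallback section below builds on.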

7. Add a fallback path for the other output

Pages that don’t match any category in AI mode go to the other output. Don’t dead-end this: route it somewhere actionable, for example:
  • A review node so a human classifies and adds a new reference.
  • A separate pipe for unknown documents so they don’t pollute your main run history.
An alert via HTTP action so the team knows when the volume of unknown-type pages is growing.
Treat the other output as a feedback signal; it’s where you learn what your reference set is missing.

Common pitfalls

Rules and AI are different modes; only one is active at a time. If you find yourself wanting both, run a Rules classify first to handle the easy cases, then a second classify in AI mode on the other branch.
One reference document captures one layout, not the variety you’ll see in production. Add 3 to 10 representative samples per category, including the messy ones.
When you add or remove categories, the score distribution shifts. Re-tune the confidence threshold against a fresh sample batch instead of trusting the old setting.
Rules evaluate in order; the first match wins. If two rules can match the same page, the page goes to the rule defined first, regardless of which one is “more correct”. Check your category criteria pairwise for overlaps.
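Checking criteria pairwise for overlaps can be automated against a handful of sample pages. This is a standalone sketch with made-up rules and pages, not a built-in feature of the product.

```python
# Illustrative overlap check: flag sample pages that two rules both match,
# before first-match-wins ordering hides the collision in production.
from itertools import combinations

RULES = {
    "Invoice":   lambda t: "amount" in t.lower(),
    "Statement": lambda t: "account" in t.lower(),
}

SAMPLE_PAGES = [
    "Statement of Account, amount carried forward: $120",  # matches both!
    "Invoice # 42, Amount Due: $99",
]

def overlaps(rules, pages):
    hits = []
    for (a, rule_a), (b, rule_b) in combinations(rules.items(), 2):
        for page in pages:
            if rule_a(page) and rule_b(page):
                hits.append((a, b, page[:40]))
    return hits

for a, b, snippet in overlaps(RULES, SAMPLE_PAGES):
    print(f"{a} and {b} both match: {snippet!r}")
```

Running the vague rules from section 2 through a check like this surfaces the collision immediately, whereas in the live pipeline the page would just quietly go to whichever rule was defined first.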
Unmatched pages disappear silently if other is dangling. Always wire it to a review, a fallback pipe, or at minimum an alerting HTTP action.
Classifying upstream of extract lets you use a focused schema per type. Classifying downstream means you’ve already paid full extract cost on every document; consider re-ordering the pipeline.

Related pages

  • Classify action: configuration reference for the classify node
  • Conditional routing: patterns for branching pipelines on document type
  • Forward trigger: set up reference documents for AI similarity classification
  • Review action: send unmatched pages to a human for handling