A teardown of feature marketing vs verifiable logic.
Most operators don’t get burned by choosing the wrong tool. They get burned by trusting the wrong claims.
Every vendor says they reduce fraud, improve accuracy or lift conversions.
The problem isn’t the claim.
It’s the logic behind it.
When you inspect the numbers, most outcomes fall apart the same way — in the same order.
Here’s how to see it.
1. Methodology → The part most vendors hide
Every claim rests on one thing: how it was measured.
If a vendor says they “cut fraud by 80%,” you need more than a percentage.
You need the measurement standards behind that number.
A real methodology should answer five simple questions:
- Definition → What counted as fraud, accuracy, risk, or “improvement”?
- Inclusion rules → Which applications were included, excluded or filtered out?
- Timeframe → Was the data over 30 days, 6 months, or a full cycle?
- Comparison logic → What period or cohort did they compare against?
- Decision criteria → What thresholds or logic engines triggered the result?
If a vendor can’t show this, the outcome isn’t proof. It’s decoration.
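Want to operationalize those five questions? Here’s a minimal sketch in Python (the field names are mine, purely hypothetical, not any standard format): treat the claim as a record, and if any field is blank, the number isn’t checkable.

```python
from dataclasses import dataclass, fields

# Hypothetical structure; field names are illustrative, not a standard.
@dataclass
class ClaimSpec:
    definition: str         # what counted as fraud / accuracy / "improvement"
    inclusion_rules: str    # which applications were included, excluded, filtered
    timeframe: str          # 30 days? 6 months? a full cycle?
    comparison_logic: str   # what period or cohort it was compared against
    decision_criteria: str  # thresholds or logic engines that triggered the result

def unanswered(spec: ClaimSpec) -> list[str]:
    """Names of the five questions the vendor left blank."""
    return [f.name for f in fields(spec) if not getattr(spec, f.name).strip()]

claim = ClaimSpec(
    definition="applications flagged as document fraud and denied",
    inclusion_rules="",        # not disclosed
    timeframe="90 days",
    comparison_logic="",       # compared to... what?
    decision_criteria="doc-check score below internal threshold",
)

missing = unanswered(claim)
if missing:
    print("Decoration, not proof. Unanswered:", ", ".join(missing))
else:
    print("All five answered. Now verify each one.")
```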
Industry context proves why this matters. NMHC’s Pulse Survey found 93.3% of PMCs experienced rental-application fraud in the past 12 months (NMHC Pulse Survey).
Fraud is real.
But definitions and measurement methods vary wildly across portfolios.
That variation is exactly why methodology comes first.
If the rules are squishy, the claim collapses.
2. Baseline → Compared to what?
Percentages mean nothing without a starting point.
A vendor can claim “fraud dropped 50%,” but if the baseline rate was already 2%, that drop is a single percentage point.
It’s a rounding error marketed as a victory.
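A quick worked example, with invented numbers, shows why. The same 50% headline can describe a one-point move or a ten-point move:

```python
def describe_drop(baseline: float, new: float, label: str) -> None:
    relative = (baseline - new) / baseline * 100
    absolute_pp = (baseline - new) * 100  # percentage points
    print(f"{label}: {relative:.0f}% relative drop, {absolute_pp:.1f} pp absolute")

# Same marketing headline, very different realities (invented numbers):
describe_drop(0.02, 0.01, "Low-fraud baseline ")   # "fraud dropped 50%"
describe_drop(0.20, 0.10, "High-fraud baseline")   # also "fraud dropped 50%"
# Low-fraud baseline : 50% relative drop, 1.0 pp absolute
# High-fraud baseline: 50% relative drop, 10.0 pp absolute
```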
You need to know:
- What was the fraud rate before?
- How was it measured?
- Was it the operator’s own baseline or someone else’s?
- Did the starting point change during the test period?
The NMHC reported that among operators who saw increases, fraud rose by an average of 40.4% year-over-year (NMHC Full Report).
That alone shows how variable baselines really are.
Without a baseline, every percentage is theater.
3. Cohort Match → Was the data even from your world?
This is where many vendor claims quietly inflate.
Vendors regularly use performance data from:
- Student housing to sell to conventional
- Luxury lease-ups to sell to Class B
- Single-family portfolios to sell into multifamily
- One region to sell into another with totally different risk patterns
Cohorts matter.
Fraud rates, approval behavior, income volatility and household makeup all shift by:
- property type
- geography
- applicant pool
- rent levels
- seasonality
If the cohort doesn’t match, the outcome isn’t transferable.
It’s borrowed credibility.
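A toy illustration, with all segment rates and mixes invented: even if the product performs identically in every segment, the vendor’s blended headline number is mostly a function of what their study portfolio was made of, not what their product does on yours.

```python
# Invented post-tool fraud rates per segment; purely illustrative.
post_tool_fraud_rate = {"luxury_lease_up": 0.01, "class_b": 0.08}

def blended(mix: dict[str, float]) -> float:
    """Portfolio-wide rate = mix-weighted average of segment rates."""
    return sum(share * post_tool_fraud_rate[seg] for seg, share in mix.items())

vendor_study_mix = {"luxury_lease_up": 0.8, "class_b": 0.2}
your_portfolio   = {"luxury_lease_up": 0.2, "class_b": 0.8}

print(f"Vendor's headline rate:   {blended(vendor_study_mix):.1%}")  # 2.4%
print(f"Same product on your mix: {blended(your_portfolio):.1%}")    # 6.6%
```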
4. Attribution → Did the vendor actually cause the result?
Even if the outcome is real, it still may not belong to the vendor.
Operators change a lot during a measurement window:
- Policies
- Income thresholds
- Staffing
- Delinquency rules
- Concessions
- Marketing spend
- Rent levels
- Screening criteria
Any one of these can drive a change.
If several shift at once, it’s almost impossible to isolate what actually moved the metric.
You need to know:
- What else changed?
- When did it change?
- How do we separate product impact from operations impact?
Without attribution, vendors get credit for wins they didn’t create.
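If you can arrange it, a holdout comparison is one common way to untangle this: run the tool on some properties, leave comparable ones on the old process, and difference the before/after changes. A minimal difference-in-differences sketch, numbers invented, assuming the two groups are genuinely comparable (cohort match again):

```python
# Invented fraud rates; purely illustrative.
# "treated" = properties using the vendor's tool.
# "control" = comparable properties that changed policies, staffing,
#             and rents over the same window, but NOT the tool.
before = {"treated": 0.060, "control": 0.058}
after  = {"treated": 0.030, "control": 0.045}

naive_drop  = before["treated"] - after["treated"]   # what the vendor claims
ops_drop    = before["control"] - after["control"]   # change with no tool at all
tool_effect = naive_drop - ops_drop                  # difference-in-differences

print(f"Naive before/after drop: {naive_drop:.1%}")   # 3.0% -- the marketing number
print(f"Drop with no tool:       {ops_drop:.1%}")     # 1.3% -- operations impact
print(f"Attributable to tool:    {tool_effect:.1%}")  # 1.7% -- the defensible number
```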
5. Audit Trail → Can you defend the claim?
This is where reality collides with compliance.
Even if a claim is true, you still have to defend it to ownership, regulators or legal teams.
You need:
- Logs
- Timestamps
- Decisions
- Documentation
- Evidence
- Repeatability
CFPB and HUD have both signaled the same principle:
If you can’t explain your decision, you can’t defend it.
Most vendor claims don’t come with evidence. → They come with graphics.
But audits don’t judge graphics. → They judge proof.
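What would proof look like at the record level? A minimal sketch, with hypothetical field names rather than any regulator’s required format: every automated decision appends a log line carrying its inputs, the rule version that fired, and a timestamp, so the same outcome can be re-derived and explained later.

```python
import json
from datetime import datetime, timezone

def log_decision(path: str, applicant_id: str, inputs: dict,
                 rule_version: str, decision: str, reason: str) -> None:
    """Append one decision record as a JSON line (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "inputs": inputs,              # exactly what the engine saw
        "rule_version": rule_version,  # which logic produced the outcome
        "decision": decision,
        "reason": reason,              # the explanation you'd give an auditor
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl",
    applicant_id="app-1042",
    inputs={"income_doc_score": 0.31, "threshold": 0.50},
    rule_version="doc-fraud-rules-v3.2",
    decision="flag_for_manual_review",
    reason="income_doc_score below threshold",
)
```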
The real issue:
→ Vendors market features.
→ Operators need defensibility.
If an outcome can’t be explained, compared, reproduced or verified, it isn’t a real result.
It’s just a headline.
Headlines don’t hold up when a dispute, audit or lawsuit lands on your desk.
The operator’s inspection checklist
When a vendor presents a result, walk through these in order:
- Methodology → How exactly did you get the number?
- Baseline → What was the starting point?
- Cohort match → Was this measured on a portfolio like mine?
- Attribution → What caused the change?
- Audit trail → Can you prove it if challenged?
Miss one and the claim is fragile.
Miss two and the claim is fiction.
The best vendors don’t fear these questions.
They welcome them.
Real outcomes survive inspection.
Marketing doesn’t.
