Why Your OEE Numbers Are Lying to You
OEE is supposed to be the single number that tells you how well your production is really performing. Availability × Performance × Quality. Simple, elegant, and — in most factories I’ve walked through — completely wrong.
Not wrong because the formula is bad. The formula is fine. It’s wrong because the data feeding the formula is garbage. And when you make decisions based on garbage data that looks reliable, that’s worse than having no data at all. At least with no data, you know you’re guessing.
I’ve spent 25+ years in manufacturing, much of it in electronics and EMS environments where margins are tight and every percentage point of efficiency matters. And I can tell you: most factories are running on OEE numbers that would make them feel very differently about their operation if they were accurate.
Let’s pull apart why.
The Three Ways OEE Data Goes Wrong
OEE has three components. Each one is an opportunity for your data to quietly lie to you.
Availability: The Generous Rounding Problem
Availability compares actual run time to planned production time: planned time minus downtime, divided by planned time. Simple enough — until you ask who’s recording the downtime and how.
In most factories, downtime is recorded manually. An operator fills in a sheet, updates a spreadsheet, or logs it at the end of the shift. And here’s the thing about humans recording their own downtime: they round. Always.
A 23-minute changeover becomes “about 20 minutes.” A 7-minute jam becomes “5 minutes.” That 12-minute wait for material? Never recorded at all because “it wasn’t really downtime — I was doing other stuff.” Three micro-stops of 90 seconds each? Not worth writing down.
Individually, each rounding error is trivial. Collectively, across a shift, across a week, across a production line? You can lose 15-20% of your actual downtime into the gaps between what happened and what got written down.
And it’s not dishonesty. It’s human nature. Nobody remembers the exact minute a stoppage started and ended while they’re busy fixing it. By the time they sit down to record it, the details have blurred.
The result: Your availability number is almost certainly higher than reality. That 88% you’re reporting? It might be 78%.
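To make the rounding problem concrete, here is a minimal sketch. The shift length and downtime figures are invented, loosely based on the examples above:

```python
# Availability = (planned production time - downtime) / planned production time.
# Same shift, computed twice: from what got written down vs. what happened.

def availability(planned_min: float, downtime_min: float) -> float:
    """Fraction of planned production time the line actually ran."""
    return (planned_min - downtime_min) / planned_min

PLANNED = 480  # one 8-hour shift, in minutes

# What the end-of-shift log says: rounded events, nothing else.
recorded = 20 + 5                # "about 20 min" changeover, "5 min" jam
# What actually happened: exact times plus the events nobody wrote down.
actual = 23 + 7 + 12 + 3 * 1.5   # changeover, jam, material wait, micro-stops

print(f"Recorded availability: {availability(PLANNED, recorded):.1%}")
print(f"Actual availability:   {availability(PLANNED, actual):.1%}")
```

Even this one small shift loses over four availability points between the log and reality; scale that across lines and weeks and the 88%-versus-78% gap stops looking far-fetched.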
Performance: The Invisible Speed Loss
Performance compares actual output to theoretical maximum output. It should catch speed losses — machines running slower than they should be.
But what’s “theoretical maximum”? In most factories, it’s whatever someone set it to when the machine was installed. Or whatever the nameplate says. Or whatever the last engineer decided was “realistic.” I’ve seen ideal cycle times that haven’t been updated in a decade despite process changes, tooling wear, and product mix shifts.
If your ideal cycle time is wrong, your performance number is meaningless. Set it too high and performance looks terrible even when the line is running well. Set it too low — which is far more common, because nobody wants to explain why performance is under target — and you get a comfortable number that hides real losses.
Then there are the speed losses nobody measures. The machine that’s been running at 90% of rated speed for so long that everyone thinks that is full speed. The operator who slows the line because the downstream process can’t keep up (a bottleneck nobody’s acknowledged). The subtle degradation that happens so gradually nobody notices.
The result: Performance numbers are only as good as your ideal cycle time baseline. If that baseline is soft, everything calculated from it is fiction.
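A sketch of how a soft baseline flatters the number. The cycle times, unit count, and run time here are assumptions for illustration:

```python
# Performance = (ideal cycle time x units produced) / actual run time.
# The same shift looks very different depending on which "ideal" you use.

def performance(ideal_cycle_s: float, units: int, run_time_s: float) -> float:
    return (ideal_cycle_s * units) / run_time_s

units = 3600            # boards produced this shift
run_time_s = 7 * 3600   # 7 hours of actual run time

validated_ideal = 6.0   # seconds per board, from a fresh time study
padded_ideal = 6.8      # the comfortable number nobody has questioned in years

print(f"Vs. padded baseline:    {performance(padded_ideal, units, run_time_s):.1%}")
print(f"Vs. validated baseline: {performance(validated_ideal, units, run_time_s):.1%}")
```

The padded baseline reports roughly 97% and everyone relaxes; the validated one reports roughly 86% and exposes eleven points of real speed loss.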
Quality: The End-of-Line Blind Spot
Quality in OEE terms means first pass yield — good parts divided by total parts. Should be straightforward. But quality measurement in most factories happens at the end of the line, not during the process.
That means you know you made 47 defective boards. But do you know when the defects started? Did they begin with the first unit of the batch, or only appear in the last 47? If a process parameter drifted, at what point did it cross the threshold? How many “good” parts are actually marginal and might fail in the field?
End-of-line quality data tells you the score. It doesn’t tell you the story. And it’s the story that helps you prevent the next defect.
There’s also the rework problem. Many factories don’t count reworked units as defects because “they got fixed.” But rework consumes time, materials, and labour. A board that went through reflow twice isn’t the same as one that passed first time — even if both end up in the shipping box. If your OEE quality number doesn’t capture rework, it’s flattering you.
The result: Quality numbers look better than they should because they miss rework, can’t pinpoint when defects start, and only capture what gets formally rejected.
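Here is the rework effect as a sketch, with invented counts:

```python
# Two views of the same batch: end-of-line "good parts" vs. first pass yield.

def end_of_line_quality(shipped_good: int, total: int) -> float:
    """Flattering view: anything that ends up in the shipping box counts."""
    return shipped_good / total

def first_pass_yield(passed_first_time: int, total: int) -> float:
    """Honest view: a reworked unit is a loss even when it ships."""
    return passed_first_time / total

total = 1000
scrapped = 15
reworked = 60   # went through reflow twice, then shipped as "good"
passed_first = total - scrapped - reworked

print(f"Quality, rework ignored: {end_of_line_quality(total - scrapped, total):.1%}")
print(f"First pass yield:        {first_pass_yield(passed_first, total):.1%}")
```

Six points of quality loss vanish the moment rework stops being counted.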
The Spreadsheet OEE Trap
Here’s where it gets really interesting. You know those three dodgy data inputs? Now imagine feeding them into an Excel spreadsheet at the end of each shift.
Someone walks to a computer, opens the OEE tracker, types in their best recollection of the shift’s events, and the spreadsheet dutifully multiplies the three percentages together. Out comes a number — let’s say 76%. Management reviews it. Everyone nods. It goes into the weekly report. Month-end trends are plotted. Improvement targets are set.
All based on data that was:
- Rounded at the point of observation
- Recalled from memory hours later
- Typed into a spreadsheet that can’t validate it
- Compared against baselines that may not be current
- Missing micro-stops, minor speed losses, and rework
The spreadsheet calculates perfectly. It just calculates the wrong numbers perfectly.
And because the number looks precise — 76.3%, not “about three quarters” — it carries an authority it hasn’t earned. People trust numbers with decimal places. They shouldn’t, when the inputs are estimated to the nearest 10 minutes.
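Because OEE multiplies its three components, flattering errors don't just add up, they compound. A sketch using illustrative gaps of the kind described in the sections above:

```python
# OEE = Availability x Performance x Quality.

def oee(availability: float, performance: float, quality: float) -> float:
    return availability * performance * quality

# What the spreadsheet was fed vs. what measurement might show.
reported = oee(0.88, 0.95, 0.985)
measured = oee(0.78, 0.86, 0.925)

print(f"Reported OEE: {reported:.1%}")  # precise-looking, and wrong
print(f"Measured OEE: {measured:.1%}")
```

A few flattering points in each input become a twenty-point gap in the product. The decimal place on the reported number is real; the number underneath it isn't.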
What Good OEE Data Actually Looks Like
The fix isn’t a better spreadsheet. It’s better data collection.
Real-time, automated data collection changes OEE from a lagging indicator to a leading one. Instead of finding out yesterday’s OEE at tomorrow’s morning meeting, you know right now. And “right now” is the only time you can actually do something about it.
What changes with automated collection:
- Downtime is captured to the second. No rounding, no forgetting, no “it wasn’t worth recording.” Machine stopped? Logged. Machine started? Logged. Every micro-stop, every changeover, every unexplained pause.
- Speed losses become visible. When you’re measuring actual cycle times continuously against a validated ideal, you see every slow cycle. Not just the obvious breakdowns — the subtle degradation that costs you 5% and nobody notices.
- Quality data flows from the process, not from the end of the line. In-process inspection, SPC data, test results — all feeding back in real time. You don’t find out about drift at the end of the batch. You find out when it starts.
- Categorisation is immediate. Operators tag downtime reasons when they happen, not from memory at the end of the shift. Touchscreens at the line make it quick — tap a reason code, done.
The difference isn’t incremental. Factories that move from manual to automated OEE tracking typically see their reported OEE drop by 10-15 points initially. Not because performance got worse — because they’re finally seeing reality. That initial drop is uncomfortable but incredibly valuable. You can’t improve what you can’t accurately measure.
The Real-Time vs End-of-Shift Difference
Let me paint you a picture of the same production issue in two factories.
Factory A — End-of-shift OEE:
The reflow oven temperature drifts slightly at 10:15 AM. Boards continue to run. Some pass visual inspection, some don’t. At 3:30 PM, the quality team reviews the day’s rejects and notices a higher-than-usual failure rate. They check the reflow profile logs (if they have them). Next morning’s meeting discusses it. An engineer investigates. The problem is identified and fixed by lunchtime the next day. Total impact: a day and a half of reduced yield, maybe 200+ boards affected.
Factory B — Real-time OEE:
The reflow oven temperature drifts at 10:15 AM. The monitoring system flags an SPC rule violation within minutes. The line supervisor gets an alert. The engineer checks the profile data on their tablet while walking to the line. Problem identified and corrected by 10:45 AM. Total impact: 30 minutes, maybe 15 boards affected.
Same equipment, same problem, radically different outcome. That’s not a technology difference — it’s an information timing difference.
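The kind of check Factory B's monitoring runs can be very simple. Here is a minimal sketch of one control-chart rule; the centre line, sigma, and temperatures are assumptions, and real systems apply the full Western Electric or Nelson rule sets, not just this one:

```python
# Flag a reading outside the +/- 3-sigma control limits (Western Electric rule 1).

def out_of_control(reading: float, mean: float, sigma: float) -> bool:
    return abs(reading - mean) > 3 * sigma

# Reflow peak-temperature samples, assumed centred at 245 C with sigma = 1.5 C.
MEAN, SIGMA = 245.0, 1.5

readings = [244.8, 245.3, 246.1, 250.2]   # the last reading is the drift
alerts = [t for t in readings if out_of_control(t, MEAN, SIGMA)]

print(f"Out-of-control readings: {alerts}")  # the 250.2 C sample gets flagged
```

The point is latency, not sophistication: the same comparison a quality engineer makes at 3:30 PM runs continuously and fires an alert at 10:16 AM.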
Where to Start (Without Boiling the Ocean)
You don’t need a million-dollar MES system to get better OEE data. Here’s a practical path:
Level 1: Fix Your Baselines
Before automating anything, validate your ideal cycle times and planned production times. Run time studies. Compare nameplate ratings to actual capability. Update the numbers. This alone — changing nothing about how you collect data — will give you a more honest OEE.
Cost: Time only. A few days of engineering time.
Level 2: Structured Digital Collection
Replace paper logs and end-of-shift spreadsheets with simple web-based forms at the line. Operators log downtime events as they happen, with reason codes and timestamps. Even if it’s still manual input, real-time entry is dramatically more accurate than end-of-shift recall.
Cost: A few thousand dollars for a basic web application and some tablets.
Level 3: Semi-Automated Collection
Connect machine signals to your data system. PLC outputs, sensor data, counter signals — even simple I/O signals that tell you “machine running” vs “machine stopped.” Layer human context (downtime reasons, quality observations) on top of automatic machine data.
Cost: Varies, but for a single line, often $10-30K including hardware, software, and integration.
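The core of Level 3 is surprisingly little logic: turn a raw running/stopped signal into timestamped downtime intervals, then let operators attach reasons. A sketch, assuming the signal arrives as (timestamp, running) samples:

```python
# Pair each stop (falling edge) with the next restart (rising edge).

def downtime_intervals(samples):
    """samples: (timestamp_s, running) pairs from a PLC or simple I/O point."""
    intervals = []
    stopped_at = None
    for t, running in samples:
        if not running and stopped_at is None:
            stopped_at = t                       # machine just stopped
        elif running and stopped_at is not None:
            intervals.append((stopped_at, t))    # machine restarted
            stopped_at = None
    return intervals

# One minute of signal: a 20-second stop, then a 10-second micro-stop.
samples = [(0, True), (10, False), (30, True), (45, False), (55, True)]
stops = downtime_intervals(samples)
total_down = sum(end - start for start, end in stops)

print(stops, f"- total {total_down}s down")
```

Every micro-stop lands in the log automatically, timestamped to the second; the operator's only job is tagging the reason.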
Level 4: Full IIoT Integration
Machines, sensors, quality systems, and ERP all feeding a unified data platform. Real-time dashboards. Automated alerts. Predictive analytics. Your OEE isn’t just accurate — it’s actionable in real time.
Cost: Significant, but payback is typically under 12 months for factories with meaningful production volume.
Most factories should start at Level 1 or 2. The jump from manual spreadsheets to structured digital collection — even without automation — can improve OEE data accuracy by 20-30%. That’s a massive improvement for minimal investment.
The Numbers You Should Actually Watch
Here’s a controversial opinion from someone who’s implemented OEE systems across dozens of factories: the OEE number itself is overrated.
A single percentage doesn’t tell you what to fix. 72% OEE — is that a downtime problem? A speed problem? A quality problem? The aggregate number triggers meetings. The components trigger action.
What matters more than OEE:
- Top 5 downtime reasons by duration — this tells you where to focus improvement efforts
- Trend direction — is each component getting better or worse week over week?
- Variance between shifts — if Day shift gets 80% and Night shift gets 65%, you have a people/process issue, not an equipment issue
- First pass yield by product — some products are harder to make. Aggregate quality numbers hide product-specific issues.
- Mean time between failures (MTBF) — is equipment reliability improving or degrading?
OEE is a conversation starter. The components and their drivers are where improvement actually lives.
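Once downtime events carry reason codes and durations, the first report on that list is a few lines of aggregation. A sketch with invented events:

```python
from collections import defaultdict

def top_reasons(events, n=5):
    """Rank downtime reasons by total duration, not by how often they're logged."""
    totals = defaultdict(float)
    for reason, minutes in events:
        totals[reason] += minutes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# A week of tagged events: micro-jams are logged most often,
# but changeovers dominate by duration.
events = [("changeover", 23), ("changeover", 19), ("material wait", 12),
          ("micro-jam", 1.5), ("micro-jam", 1.5), ("micro-jam", 2),
          ("maintenance", 15)]

print(top_reasons(events, n=3))
```

Ranking by duration rather than event count matters: the most frequently logged reason is rarely the most expensive one.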
Related Reading
- First Pass Yield: The Metric That Tells You Everything — if your OEE quality component is suspect, your FPY data probably is too
- Stop Fighting Fires: Guide to Preventive Maintenance — equipment reliability directly feeds your OEE availability number
- Spreadsheet Cost Calculator — quantify what manual data management is costing your operation
Getting Honest With Your Data
The hardest part of improving OEE isn’t technology. It’s culture.
When you start collecting accurate data, the numbers get worse before they get better. That 85% OEE that made everyone comfortable might become 67%. That’s a difficult message to deliver, especially if bonuses or targets are tied to OEE numbers.
Smart factories treat the initial drop as a win. You haven’t gotten worse — you’ve gotten honest. And honesty is the only foundation that supports real improvement.
The alternative is comfortable lies. OEE numbers that look good in reports but don’t reflect reality. Improvement programs that chase phantom gains. Targets that are already being met — on paper — while the shop floor tells a different story.
I’ve seen both approaches. The factories that get honest with their data improve. The ones that protect their numbers don’t.
Ready to Find Out the Truth?
If you suspect your OEE numbers aren’t telling you the full story, you’re probably right. The question is whether you want to keep the comfortable fiction or find out what’s really happening on your production floor.
Start simple. Validate your baselines. Move data collection closer to real-time. Compare what the spreadsheet says with what the floor says. The gap between those two stories is your opportunity.
Want to understand what better data could look like for your operation? Or curious about how to move from spreadsheet OEE to something you can actually trust? Let’s talk. We’ve been building manufacturing data systems for a long time, and the conversation always starts the same way: “What’s really going on out there?”
Because the first step to improving your OEE isn’t a new formula. It’s better data. And the first step to better data is admitting that what you’ve got right now probably isn’t as good as you think.