Blog

Why Crude Oil Measurement Errors Happen (And How to Stop Them Before Settlement)

March 25, 2026 · 6 min read

Crude oil measurement errors don't announce themselves. They hide in data exports, sit quietly in spreadsheets, and surface weeks later as settlement disputes — after statements have been issued, payments have been calculated, and everyone thought the month was closed. By then, fixing them means re-running numbers, issuing corrections, and having uncomfortable conversations with shippers.

The fix isn't better spreadsheet formulas. It's catching errors as data flows in, before they compound into settlement problems. Here are the five most common measurement errors in crude oil gathering and what each one actually looks like in practice.

1. Duplicate Ticket Entries

Duplicates are the most common measurement error and the easiest to miss. They happen when data arrives from multiple sources with overlapping time ranges — a flow computer export covers March 1–15, and the next export covers March 10–20, creating duplicate entries for five days of transactions. They also appear when an operator manually enters a run ticket that was already imported electronically, or during system migrations where historical data overlaps with live feeds.

The impact scales with volume. A single duplicated day of throughput on a high-volume LACT unit running 2,000 barrels per day inflates that shipper's settlement by $140,000 at $70/bbl. Even on smaller connections, a duplicate that slips through to settlement means issuing a correction next month — which creates reconciliation headaches in the subsequent period.

What makes duplicates dangerous is that they look like legitimate data. The ticket numbers may differ (one from SCADA, one manually entered), the timestamps may be slightly offset, and the volumes may not be exactly identical due to rounding. Simple "exact match" deduplication misses these near-duplicates entirely.
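Near-duplicate detection is straightforward to sketch. The example below is illustrative, not a production implementation: the field names, the 30-minute window, and the half-barrel volume tolerance are all assumptions you'd tune to your own data.

```python
from datetime import datetime, timedelta

# Assumed thresholds -- tune to your system's actual rounding and timing behavior.
TIME_WINDOW = timedelta(minutes=30)
VOLUME_TOL = 0.5  # barrels

def is_near_duplicate(a, b):
    """Two tickets at the same receipt point, close in time and volume,
    are candidates for the same physical transaction -- even if their
    ticket IDs differ (one from SCADA, one entered manually)."""
    return (a["receipt_point"] == b["receipt_point"]
            and abs(a["timestamp"] - b["timestamp"]) <= TIME_WINDOW
            and abs(a["volume_bbl"] - b["volume_bbl"]) <= VOLUME_TOL)

def flag_near_duplicates(tickets):
    """Return ID pairs of tickets that look like the same transaction."""
    flagged = []
    tickets = sorted(tickets, key=lambda t: t["timestamp"])
    for i, a in enumerate(tickets):
        for b in tickets[i + 1:]:
            if b["timestamp"] - a["timestamp"] > TIME_WINDOW:
                break  # sorted by time: no later ticket can match ticket a
            if is_near_duplicate(a, b):
                flagged.append((a["ticket_id"], b["ticket_id"]))
    return flagged
```

Because an exact-match key (ticket ID, or timestamp + volume) would miss offset timestamps and rounded volumes, the comparison is deliberately tolerant — flagged pairs go to a human for review rather than being auto-deleted.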

2. Volume Shorts and Longs

A short occurs when the volume measured at the delivery point is less than the volume measured at the receipt point — beyond what pipeline loss allowance (PLA) covers. A long is the reverse: more oil appears at delivery than was received. Both trigger disputes because they directly affect how much shippers get paid.

Small variances within PLA tolerance are expected in every gathering system. The problem is detecting when a variance crosses the threshold from "normal system loss" to "something is wrong." Common causes include meter drift between provings (a receipt meter slowly reads 0.1% high while the delivery meter drifts 0.1% low, compounding into a 0.2% discrepancy), inconsistent temperature correction factors applied at receipt versus delivery, and BS&W measurement differences where the receipt sampler and delivery sampler disagree on water content.

In a manual process, shorts and longs typically aren't detected until someone reconciles the full month's volumes after close. By then, the operator is retroactively investigating a variance that may span dozens of transactions across multiple LACT connections. Automated tolerance monitoring flags each transaction as it's imported, catching the drift within days instead of weeks.
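The per-transaction tolerance check itself is simple. A minimal sketch, assuming a 0.25% PLA — an illustrative contract value, not a universal standard:

```python
# Assumed pipeline loss allowance; actual PLA comes from the gathering agreement.
PLA_TOLERANCE = 0.0025  # 0.25%

def classify_variance(receipt_bbl, delivery_bbl, tolerance=PLA_TOLERANCE):
    """Classify a receipt/delivery pair as within tolerance, short, or long.

    Variance is expressed as a fraction of the receipt volume:
    negative means less oil arrived than was received (a short),
    positive means more (a long)."""
    variance = (delivery_bbl - receipt_bbl) / receipt_bbl
    if abs(variance) <= tolerance:
        return "within_pla", variance
    return ("long", variance) if variance > 0 else ("short", variance)
```

Run on every imported transaction, a check like this surfaces the 0.2% compounding meter drift described above within days, because each flagged transaction carries its own variance rather than being averaged into a month-end total.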

3. Unknown Operator or Facility Codes

Every transaction in a gathering system is tagged with codes that identify the shipper, the receipt point, the delivery point, and the product type. When a new well comes online, an operator changes their business entity, or a facility code gets updated in the flow computer but not in the settlement system, the result is transactions that can't be mapped to a contract.

Unknown codes don't cause a wrong settlement — they cause a missing one. The volume sits in an unallocated bucket until someone manually identifies the shipper and receipt point, looks up the correct gathering agreement, and assigns the volume. During month-end close, these orphaned transactions are the ones that hold up the entire settlement cycle while the measurement team tracks down the source.

The fix is straightforward but requires discipline: validate every incoming code against a master facility and operator table as data is imported. Flag unknown codes immediately so they can be resolved the same day, not discovered during month-end.

4. Quality Anomalies

Quality measurements — primarily BS&W (basic sediment and water) and API gravity — directly affect the net value of every barrel. A sudden BS&W spike from 0.5% to 3.0% could indicate a well problem, a sampler malfunction, or a data entry error. Each scenario has a very different settlement implication, and without automated flagging, they all look the same in the raw data.

API gravity anomalies are subtler but equally impactful. If a receipt point consistently measures 38° API and a batch comes in at 32°, that six-degree difference changes the temperature correction factor and the per-barrel value. Is it a different crude stream being commingled? A bad gravity reading from a malfunctioning hydrometer? A data transcription error where someone typed 32 instead of 38?

Statistical quality checks compare each reading against the historical profile for that receipt point. When a value falls outside two standard deviations, it gets flagged for review — not rejected automatically, but surfaced for a human decision before it flows into settlement calculations.
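The two-standard-deviation check described above can be sketched in a few lines. This is a simplified illustration — a production system would also handle seasonality, proving events, and minimum sample sizes:

```python
from statistics import mean, stdev

def is_quality_outlier(history, reading, n_sigma=2.0):
    """Flag a reading more than n_sigma standard deviations from the
    receipt point's historical profile. Flagged readings are surfaced
    for review, not rejected automatically."""
    if len(history) < 2:
        return False  # not enough history to build a profile
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu  # flat history: any change is notable
    return abs(reading - mu) > n_sigma * sigma
```

Against a history of API gravity readings clustered around 38°, a 32° batch falls far outside the band and gets flagged — which is exactly the point: the system can't tell commingling from a bad hydrometer from a typo, but it can make sure a human looks before the value flows into settlement.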

5. Timing Gaps and Sequence Breaks

Gathering systems produce transactions continuously, but data arrives in batches — daily exports, weekly uploads, or manual entry at irregular intervals. When a batch is late, skipped, or partially imported, it creates a gap in the transaction sequence. March 1–7 is complete, March 8–10 is missing, March 11–15 arrives on schedule. The missing three days might be a data export failure, a flow computer outage, or simply a file that got stuck in someone's inbox.

Timing gaps are especially problematic near month-end boundaries. Transactions that straddle the reporting period cutoff — a run ticket timestamped at 11:58 PM on March 31 that doesn't arrive in the system until April 2 — can end up in the wrong settlement period. The shipper sees different volumes than the operator, and the resulting dispute takes hours to untangle.

Completeness checks compare the expected transaction cadence for each receipt point against what's actually been imported. If a LACT unit that normally produces 8–12 tickets per day suddenly shows zero for two consecutive days, that's an alert — not something to discover during month-end reconciliation.
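At its simplest, a completeness check is a calendar diff: which days in the period produced no tickets at all? A minimal sketch (a real check would also compare ticket counts per day against the point's normal 8–12 range):

```python
from datetime import date, timedelta

def find_missing_days(ticket_dates, period_start, period_end):
    """Return days in [period_start, period_end] with no tickets at all --
    candidates for an export failure, an outage, or a file stuck in an inbox."""
    seen = set(ticket_dates)
    missing, day = [], period_start
    while day <= period_end:
        if day not in seen:
            missing.append(day)
        day += timedelta(days=1)
    return missing
```

For the March example above (tickets for March 1–7 and 11–15), the check returns March 8–10 as soon as the second batch is imported, rather than leaving the gap to be discovered at month-end.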

Catching Errors Before They Become Disputes

The pattern across all five error types is the same: the error is detectable at the point data enters the system, but most gathering operations don't run validation until after month-end close. That delay turns a fixable data issue into a settlement dispute.

Automated validation shifts detection upstream. As each data batch is imported, the system checks for duplicates (fuzzy matching on timestamps, volumes, and ticket identifiers), tolerance violations (receipt vs. delivery variance beyond PLA), unknown codes (against the master facility table), quality outliers (statistical deviation from historical profiles), and missing data (gaps in expected transaction sequences).

Flagged items go into a review queue the same day they're imported. The measurement team resolves them while the data is fresh — while they can still call the field and ask about that unusual BS&W reading or check whether the missing export file is stuck on the flow computer. By the time month-end close arrives, the data is already clean. Reconciliation becomes confirmation, not investigation.

Tired of finding errors after settlement closes?

COYOTE Measurement validates every transaction as it's imported — flagging duplicates, shorts, longs, quality anomalies, and timing gaps before they reach settlement. See how automated error detection works for your gathering operation.

Schedule a Demo