If you're a field operator running crude oil through LACT units, the data you capture at the point of custody transfer is the single most important input in the entire settlement process. Every volume, every quality reading, every timestamp you record — or that the flow computer records on your behalf — becomes the basis for how money changes hands between producers, gatherers, and purchasers.
Most field operators know their LACT units inside and out. They can tell you when a sampler is acting up, when a meter is drifting, or when a BS&W probe needs cleaning. But fewer operators think about what happens to their data after it leaves the field. And that's where things go wrong.
This guide walks through the LACT data that matters most, the errors that cost operators and their companies real money, and how modern measurement software catches problems that spreadsheets and manual processes miss.
A Day in the Life of LACT Data
Every custody transfer at a LACT unit generates a transaction record. Whether the crude is being loaded onto a truck, pushed into a pipeline, or batched at a terminal, the flow computer captures a set of readings that define the transfer:
- Batch or ticket number — the unique identifier that ties this transfer to a specific shipper and contract
- Start and end timestamps — when the transfer began and ended, used for allocation and scheduling
- Gross standard volume (GSV) — the total volume corrected to 60°F standard conditions
- Net standard volume (NSV) — GSV with the sediment and water (BS&W) fraction removed
- API gravity — the density measurement that determines crude quality and pricing tier
- BS&W percentage — sediment and water content, typically required below 1% for pipeline-quality crude
- Observed temperature — the temperature at the meter, used for volume correction calculations
- Meter factor — the calibration correction applied to the raw meter pulse count
This data gets exported from the flow computer as a TransLog or Microload file — a text-based record that contains every transaction for a given period. In theory, the data flows from the field to the back office intact. In practice, that's where the problems start.
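The transaction fields above can be sketched as a simple record, with NSV derived from GSV and BS&W. This is a minimal illustration; the field names are assumptions, not the actual TransLog or Microload schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record for one LACT transaction. Field names are
# assumptions for this sketch, not a vendor file format.
@dataclass
class LactTicket:
    ticket_no: str
    start: datetime
    end: datetime
    gsv_bbl: float        # gross standard volume, corrected to 60 deg F
    bsw_pct: float        # basic sediment and water, percent
    api_gravity: float
    observed_temp_f: float
    meter_factor: float   # calibration correction applied to raw pulses

    @property
    def nsv_bbl(self) -> float:
        # NSV removes the sediment-and-water fraction from GSV
        return self.gsv_bbl * (1 - self.bsw_pct / 100)

ticket = LactTicket("BATCH-0412",
                    datetime(2024, 3, 31, 23, 10),
                    datetime(2024, 3, 31, 23, 55),
                    gsv_bbl=1000.0, bsw_pct=0.3,
                    api_gravity=32.1, observed_temp_f=68.0,
                    meter_factor=1.0012)
print(round(ticket.nsv_bbl, 1))  # 997.0
```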
The Five Errors That Cost Real Money
Most LACT data errors don't happen at the unit itself — the flow computer accurately records what it sees. The errors happen in the space between the field and settlement: during download, transcription, and manual processing. Here are the five that show up most often.
1. Transposed Digits
An operator reads 1,247.3 barrels from the flow computer display and writes 1,274.3 on the field ticket. Or types 31.2 API gravity instead of 32.1 into a spreadsheet. A single transposition on a 1,000-barrel batch at $70/barrel can mean a $1,890 error — and when it's one of dozens of tickets processed that day, nobody catches it until settlement.
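The arithmetic behind that figure is straightforward. A short sketch, assuming a flat $70/bbl settlement price:

```python
# Dollar impact of a single transposed digit (example from the text).
recorded = 1274.3   # what was written on the field ticket
actual = 1247.3     # what the flow computer displayed
price = 70.0        # $/bbl, assumed settlement price

error_bbl = recorded - actual   # 27.0 bbl
error_usd = error_bbl * price
print(f"${error_usd:,.2f}")     # $1,890.00
```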
2. Skipped or Duplicated Transactions
When TransLog files are downloaded and processed manually, it's easy to miss a transaction or accidentally enter one twice. A skipped batch creates a volume short at settlement — the gatherer shows fewer barrels received than the producer shipped. A duplicate creates a long. Either way, it triggers a dispute that takes hours to investigate.
3. Wrong Date or Timestamp
A batch that actually transferred at 11:55 PM on March 31st gets recorded as April 1st. Now it's allocated to the wrong settlement period. For a high-volume facility processing thousands of barrels daily, a single misallocated batch can throw off an entire month-end close.
4. Stale Meter Factor
When a LACT meter gets recalibrated, the meter factor changes. If the new factor doesn't get entered into the flow computer promptly — or if an operator records transactions using the old factor — every batch between recalibration and correction carries an error. At high throughput, even a 0.1% meter factor discrepancy adds up to thousands of dollars per month.
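As a rough illustration of how that adds up, here is the math for a hypothetical facility; the throughput and price figures are assumptions, not from the text:

```python
# Rough monthly cost of a stale meter factor (assumed figures).
throughput_bbl_month = 150_000   # assumed facility throughput
factor_error = 0.001             # a 0.1% meter factor discrepancy
price = 70.0                     # $/bbl, assumed settlement price

misstated_bbl = throughput_bbl_month * factor_error   # 150 bbl/month
cost = misstated_bbl * price
print(f"${cost:,.0f}")  # $10,500
```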
5. BS&W Anomalies Ignored
A BS&W reading spikes from the normal 0.3% to 2.1% on a single batch. The flow computer records it faithfully. But if nobody reviews the data before settlement, that batch gets settled at its recorded quality — and the purchaser either rejects the batch after the fact or demands a price adjustment. Catching anomalies like this early gives the operator time to resample, investigate the cause, or flag the batch for separate handling.
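To see why a spike like that matters at settlement, here is the net-volume arithmetic for a hypothetical 1,000-barrel batch at an assumed $70/bbl:

```python
# Net-volume impact of the BS&W spike described above,
# on a hypothetical 1,000 bbl batch.
gsv = 1000.0
nsv_normal = gsv * (1 - 0.003)   # 0.3% BS&W -> 997.0 bbl
nsv_spiked = gsv * (1 - 0.021)   # 2.1% BS&W -> 979.0 bbl

shortfall = nsv_normal - nsv_spiked   # 18.0 bbl less merchantable crude
print(round(shortfall * 70.0, 2))     # 1260.0 (dollars at $70/bbl)
```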
Why These Errors Persist
The common thread in all five errors is the gap between what the flow computer records and what reaches the settlement system. In a manual workflow, that gap is filled by human steps: downloading files, reading displays, writing tickets, typing into spreadsheets, copying between systems.
Each step is an opportunity for error. And the errors compound. A transposed digit in the field becomes a volume discrepancy in reconciliation, which becomes a dispute at settlement, which requires investigation back to the original ticket. The real cost of manual settlement isn't just the labor — it's the error resolution overhead that follows.
Field operators aren't the problem. They're working with tools and processes that create unnecessary risk. The fix isn't more careful humans — it's removing the manual steps that introduce errors in the first place.
How Software Catches What Humans Miss
Modern measurement software eliminates the gap between field data and settlement by ingesting TransLog and Microload files directly. No manual transcription. No spreadsheet intermediary. The flow computer's raw output goes straight into the system.
But automated ingestion is only the starting point. What makes the difference is automated validation — the software checks every transaction against a set of rules the moment it's imported:
- Duplicate detection: Has this batch number already been recorded? Flag it immediately.
- Volume range checks: Is this GSV within the expected range for this LACT unit? A 10,000-barrel batch on a unit that normally processes 500 per transfer gets flagged.
- Quality anomalies: Did BS&W jump from 0.3% to 2.1%? Did API gravity shift by more than 2 degrees between consecutive batches? Flag it.
- Timestamp validation: Does this batch overlap with another? Does it fall outside the expected settlement period? Does the gap between batches exceed normal turnaround time?
- Meter factor tracking: Has the meter factor changed since the last calibration record? Are transactions using the expected factor?
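The checks above can be sketched as a single validation pass over each incoming transaction. This is a minimal illustration, not any vendor's implementation; the thresholds, field names, and unit range are assumptions:

```python
from datetime import datetime

# Sketch of per-transaction validation. Thresholds and field
# names are assumptions for illustration.
def validate(ticket, prev, seen_ids):
    flags = []
    if ticket["id"] in seen_ids:
        flags.append("duplicate batch number")
    if not (100 <= ticket["gsv"] <= 2000):           # assumed range for this unit
        flags.append("GSV outside expected range")
    if prev is not None:
        if abs(ticket["bsw"] - prev["bsw"]) > 1.0:   # assumed BS&W jump threshold
            flags.append("BS&W anomaly")
        if abs(ticket["api"] - prev["api"]) > 2.0:   # API shift > 2 degrees
            flags.append("API gravity shift")
        if ticket["start"] < prev["end"]:
            flags.append("overlapping timestamps")
    if ticket["meter_factor"] != ticket["expected_factor"]:
        flags.append("unexpected meter factor")
    return flags

prev = {"id": "B-101", "gsv": 500.0, "bsw": 0.3, "api": 32.1,
        "end": datetime(2024, 3, 31, 22, 0),
        "meter_factor": 1.0012, "expected_factor": 1.0012}
cur = {"id": "B-101", "gsv": 10_000.0, "bsw": 2.1, "api": 32.0,
       "start": datetime(2024, 3, 31, 23, 0),
       "meter_factor": 1.0012, "expected_factor": 1.0012}

print(validate(cur, prev, {"B-101"}))
# flags: duplicate batch number, GSV outside expected range, BS&W anomaly
```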
Every flag gets surfaced immediately — not at month-end, not during reconciliation, but at the moment the data enters the system. That gives field operators and measurement techs a window to investigate while the information is fresh: go back to the unit, check the flow computer, pull the original TransLog, verify the reading.
The result is data that's been validated against known patterns before it ever reaches volume reconciliation. Fewer disputes. Faster month-end close. Less time investigating errors that could have been caught on day one.
What Field Operators Can Do Today
Even before your company adopts automated measurement software, there are habits that reduce LACT data errors at the source:
- Never manually transcribe what you can download. If the flow computer exports a TransLog file, use the file — don't retype the numbers.
- Verify meter factor after every calibration. Confirm the new factor is programmed into the flow computer and note the change in your field log.
- Check BS&W before every batch settles. If a reading looks abnormal, resample before the ticket is finalized.
- Record timestamps consistently. If your facility spans a time zone boundary or your flow computers use UTC, know the offset and apply it consistently.
- Keep field tickets and TransLog files together. When an investigation happens months later, having both the handwritten ticket and the electronic file makes root-cause analysis possible.
These practices don't eliminate the risks of manual processing, but they reduce the surface area for errors until the right software infrastructure is in place.
The Bottom Line
Field operators are the first link in the crude oil settlement chain. The data they capture — or that their LACT units capture — determines everything downstream. When that data is accurate and validated, settlement is straightforward. When it's not, the entire organization pays for it in disputes, delays, and lost revenue.
The fix isn't asking operators to be more careful. It's giving them tools that eliminate manual transcription, validate data at the point of capture, and surface problems when they're still easy to fix. That's what COYOTE Measurement was built to do.