How TransLog and Microload Data Works in Crude Oil Measurement

March 11, 2026 · 7 min read

If you operate a crude oil gathering system, you've dealt with TransLog and Microload files — even if you don't call them by name. They're the text files your LACT units and terminal automation systems generate every time a custody transfer happens. They contain the raw transaction data that everything downstream depends on: volumes, quality readings, timestamps, meter factors.

And yet, at most midstream operations, these files are handled manually. Someone downloads them from a terminal, opens them in a text editor or Excel, copies the relevant numbers into a spreadsheet, and hopes nothing gets transposed along the way. It's a process that's been "good enough" for decades — until it isn't.

This post explains what TransLog and Microload files actually contain, why they matter for accurate settlement, and how automated parsing eliminates the riskiest part of your measurement workflow.

What Is a TransLog File?

A TransLog file is a text-based export generated by a LACT unit's flow computer or terminal automation system. Every time crude oil passes through a custody transfer point — whether it's being loaded onto a truck, pushed through a pipeline, or batched at a terminal — the flow computer records a transaction.

Each transaction in a TransLog typically includes:

  • Batch or ticket number — the unique identifier for the transfer event
  • Start and end timestamps — when the transfer began and completed
  • Gross standard volume (GSV) — total volume corrected to standard temperature and pressure
  • Net standard volume (NSV) — GSV less the volume of basic sediment and water (BS&W)
  • API gravity — the density/quality measurement of the crude
  • BS&W percentage — basic sediment and water content
  • Temperature — observed temperature at the meter
  • Meter factor — the calibration correction applied to the raw meter reading
  • Meter ID or bay number — which physical meter recorded the transaction

The file format is plain text — typically pipe-delimited, comma-delimited, or fixed-width. Formats vary by flow computer manufacturer and terminal configuration, but the data points are largely the same. TransLog files are the standard way to move custody transfer data from the field into back-office systems.
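As a rough illustration, here is how one pipe-delimited TransLog record might be parsed into a typed transaction. The field order, delimiter, and timestamp format below are invented for the example — real layouts vary by flow computer manufacturer, so a production parser would be driven by the actual spec for each terminal:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical layout (not a real manufacturer spec):
# ticket|start|end|gsv|nsv|api|bsw|temp|meter_factor|meter_id
@dataclass
class Transaction:
    ticket: str
    start: datetime
    end: datetime
    gsv: float            # gross standard volume, bbl
    nsv: float            # net standard volume, bbl
    api_gravity: float
    bsw_pct: float
    temp_f: float
    meter_factor: float
    meter_id: str

def parse_translog_line(line: str) -> Transaction:
    f = line.strip().split("|")
    return Transaction(
        ticket=f[0],
        start=datetime.strptime(f[1], "%Y-%m-%d %H:%M"),
        end=datetime.strptime(f[2], "%Y-%m-%d %H:%M"),
        gsv=float(f[3]), nsv=float(f[4]),
        api_gravity=float(f[5]), bsw_pct=float(f[6]),
        temp_f=float(f[7]), meter_factor=float(f[8]),
        meter_id=f[9],
    )

record = "T-10482|2026-03-01 06:14|2026-03-01 06:52|182.6|181.9|42.1|0.38|68.4|1.0012|LACT-03"
tx = parse_translog_line(record)
```

The point of parsing into a typed structure rather than a spreadsheet row is that every downstream step — validation, correction, reconciliation — can rely on each field having a known name and type.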

What Is a Microload File?

Microload files serve a similar purpose but are typically generated by Micro Motion or Emerson terminal automation systems. Where TransLog files tend to capture broader batch-level transaction records, Microload files are often more granular — recording individual load events at truck or rail terminals.

A Microload file usually contains:

  • Load number and carrier ID — which truck or rail car, which carrier
  • Gross and net volumes — same GSV/NSV breakdown as TransLog
  • Quality data — API gravity, BS&W, sometimes sulfur content
  • Timestamps — load start, load end, sometimes arm connect/disconnect
  • Bay or arm identifier — which physical loading point was used
  • Seal numbers — for chain-of-custody verification on trucks

The key difference isn't what data they capture — it's the format. Microload files have their own field layouts, naming conventions, and delimiters. Software that can parse TransLog files won't necessarily handle Microload files correctly, and vice versa. That's why native support for both formats matters.
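Software that supports both formats typically starts by sniffing which one it has been handed. The markers below are invented placeholders — a real detector would key on the actual headers and delimiters your terminals emit — but the shape of the logic is the same:

```python
def detect_format(first_line: str) -> str:
    """Guess which parser to apply from the first line of an export.

    Purely illustrative: the "LOAD" keyword and delimiter rules here
    are assumptions, not real TransLog/Microload markers.
    """
    if "LOAD" in first_line.upper() and "," in first_line:
        return "microload"
    if "|" in first_line:
        return "translog"
    return "unknown"
```

Returning "unknown" instead of guessing matters: a file routed to the wrong parser produces plausible-looking wrong numbers, which is exactly the failure mode automation is supposed to prevent.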

Why These Files Matter for Settlement

TransLog and Microload files are the foundation of crude oil settlement. Every dollar amount on a settlement statement traces back to a transaction in one of these files. The volume, the quality adjustment, the meter factor — they all originate here.

When the data in these files flows cleanly into your accounting system, settlement is straightforward: volumes match, quality adjustments are consistent, and variances are explainable. When the data doesn't flow cleanly — when it's manually re-keyed, partially imported, or reformatted through a chain of spreadsheets — problems emerge:

  • Transposition errors — a volume of 1,523.4 becomes 1,532.4, and no one catches it until the producer's numbers don't match
  • Missing transactions — a row gets skipped during copy-paste, and an entire load disappears from reconciliation
  • Format misinterpretation — a date field reads as MM/DD but gets parsed as DD/MM, shifting transactions to wrong days
  • Stale data — files sit on a terminal for days before someone downloads and processes them, delaying variance detection
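The date-format pitfall in particular is easy to demonstrate — the same text yields two different transaction dates depending on which convention the reader assumes:

```python
from datetime import datetime

raw = "03/04/2026"
as_mmdd = datetime.strptime(raw, "%m/%d/%Y")  # March 4, 2026
as_ddmm = datetime.strptime(raw, "%d/%m/%Y")  # April 3, 2026
assert as_mmdd != as_ddmm  # same text, different settlement days
```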

Each of these errors becomes a potential measurement dispute at month-end. And because the error was introduced during data handling rather than measurement itself, it's harder to diagnose — the meter was right, but the number that reached the spreadsheet wasn't.

The Manual Workflow (And Where It Breaks)

At most gathering operations, TransLog and Microload data follows a workflow that looks something like this:

  1. A field tech or terminal operator exports the file from the flow computer — usually via USB drive, FTP, or email
  2. Someone in the back office opens the file in a text editor or imports it into Excel
  3. They manually map the relevant columns to their settlement spreadsheet — copying volumes, quality data, and timestamps
  4. They apply corrections (temperature, meter factor, BS&W deductions) via spreadsheet formulas
  5. The corrected data feeds into a monthly reconciliation spreadsheet
  6. Variances are investigated manually by cross-referencing the original files

Every step in this chain is an opportunity for error, delay, or data loss. The most dangerous step is #3 — the manual mapping — because it's where the structured data in the file gets flattened into a spreadsheet that has no awareness of the original format.

If something goes wrong at step #6, you have to trace backward through all five previous steps to figure out whether the issue is a measurement problem or a data-handling problem. That investigation alone can take hours.

How Automated Parsing Changes the Equation

Automated parsing replaces steps 2 through 4 entirely. You upload the raw TransLog or Microload text file, and the software handles the rest:

  • Format detection — the parser identifies whether it's a TransLog or Microload file and applies the correct field mapping
  • Data extraction — every transaction is pulled from the file with its complete field set: volumes, quality, timestamps, meter IDs
  • Validation — each value is checked against expected ranges and formats. An API gravity of 450 instead of 45.0? Flagged immediately. A timestamp from 2019 in a 2026 file? Flagged.
  • Duplicate detection — if the same transaction appears in multiple files (common when exports overlap), the system catches it before it inflates volumes
  • Anomaly alerting — missing sequences, unusual volume spikes, quality readings outside normal bounds — all surfaced before settlement, not during it
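As a rough sketch of what the validation and duplicate-detection steps look like — the bounds and field names here are illustrative assumptions, not COYOTE's actual rules, and real limits would come from your crude specs and terminal history:

```python
def validate(tx: dict) -> list[str]:
    """Range and sanity checks on a parsed transaction (illustrative bounds)."""
    flags = []
    if not 10.0 <= tx["api_gravity"] <= 80.0:
        flags.append(f"API gravity {tx['api_gravity']} out of range")
    if not 0.0 <= tx["bsw_pct"] <= 5.0:
        flags.append(f"BS&W {tx['bsw_pct']}% out of range")
    if tx["end"] < tx["start"]:
        flags.append("end timestamp precedes start")
    return flags

def dedupe(transactions: list[dict]) -> list[dict]:
    """Drop repeats when overlapping exports contain the same ticket."""
    seen, unique = set(), []
    for tx in transactions:
        key = (tx["ticket"], tx["meter_id"])
        if key not in seen:
            seen.add(key)
            unique.append(tx)
    return unique
```

The crucial property is that these checks run on every transaction at upload time, so a 450 API gravity or a repeated ticket surfaces in minutes rather than at month-end reconciliation.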

The result is that the data in your settlement system matches the data in the original file exactly. There's no transcription step, no reformatting, no chance for a human to accidentally introduce an error. And if a value does look wrong, you can trace it directly back to the source file and the specific line it came from.

COYOTE Measurement includes native parsers for both TransLog and Microload formats. Upload the file, review the flagged items, approve, and the data flows directly into volume reconciliation — no spreadsheet intermediary.

What to Look for in Parsing Software

If you're evaluating measurement software and TransLog/Microload ingestion is important to your operation, here's what matters:

  • Native format support — the software should handle your specific file formats without requiring you to pre-process or reformat them. If you have to convert to CSV first, you're reintroducing manual risk.
  • Validation rules — range checks, format checks, sequence checks, and duplicate detection should happen automatically on upload, not as a separate manual review step.
  • Audit trail from file to settlement — every value in your settlement statement should trace back to a specific line in a specific source file. If that chain breaks, you lose the ability to diagnose disputes quickly.
  • Multi-format flexibility — gathering operations often deal with multiple terminal systems, flow computer brands, and export formats. Software that only handles one format creates a bottleneck for the others.

The goal is simple: get the data from the field into your settlement system without anyone having to re-type, reformat, or reinterpret it. The file already has the right numbers. The software's job is to keep them right.

Getting Started

If your team is still manually processing TransLog or Microload files, the fastest way to reduce errors and speed up settlement is to automate that ingestion step. It's typically the single highest-ROI improvement a gathering operator can make — not because it's complex, but because it eliminates the most error-prone part of the entire workflow.

Start by inventorying which file formats your terminals generate, how many files you process per month, and how much time your team spends on manual data entry. That gives you a clear picture of the time and risk you can eliminate.

Ready to stop re-keying TransLog and Microload data?

COYOTE Measurement parses both formats natively — upload the file, review flagged items, and the data flows straight into reconciliation. No spreadsheets, no transcription errors.

Schedule a Demo