The Official Blog of Brooks Instrument

Mass Flow Meter Accuracy | Understanding Factors For Measurement

Written by Admin | Mar 3, 2026 9:34:36 PM
Selecting the right mass flow controller (MFC) or mass flow meter (MFM) starts with understanding accuracy—yet accuracy specifications can vary widely between manufacturers and are often expressed using different terminology. This can make meaningful comparisons difficult.

The Three Building Blocks of Mass Flow Accuracy

To simplify the topic, MFC and MFM accuracy can be broken down into three fundamental building blocks:

1. Calibration and Measurement Capability (CMC)

2. Repeatability

3. Linearity

The first element, CMC, relates to the equipment and process used to test devices, while repeatability and linearity are related to the device itself.

Together, these elements combine to establish the complete accuracy statement, not just for MFCs and MFMs, but for all flow measurement devices.

Calibration and Measurement Capability (CMC)

CMC describes how closely a calibration method represents “truth,” or absolute accuracy. Because no calibration system is perfect, CMC uncertainty is always greater than zero.

CMC accounts for:

  • The inherent inaccuracies of the calibration system components
  • Statistical variation that occurs during calibration

The following links provide more information on CMC:

https://www.isobudgets.com/know-cmc-uncertainty/

https://www.brooksinstrument.com/en/blog/17025-accreditation

Why CMC Matters
Even the most repeatable and linear mass flow measurement device cannot be more accurate than the calibration system used to verify it. CMC sets the lower limit of achievable device accuracy.

Repeatability

Repeatability describes a device’s ability to produce the same measurement output repeatedly when measuring the exact same flow rate under identical conditions.

For example, if an MFC is commanded to deliver the same flow rate multiple times in succession, without changing process conditions, the spread of the resulting measurements reflects the instrument’s repeatability.

Good repeatability indicates stable and consistent performance, which is critical in precision flow control applications.
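The spread described above can be quantified with a simple statistic. Here is a minimal sketch in Python; the sample readings are hypothetical, and using the sample standard deviation as the spread metric is an illustrative choice (manufacturers may use other statistics, such as max-min range):

```python
import statistics

def repeatability_spread(readings):
    """Quantify repeatability as the sample standard deviation of
    repeated readings taken at the same commanded flow rate under
    identical conditions. Smaller is better."""
    return statistics.stdev(readings)

# Hypothetical readings (LPM) from an MFC commanded to 50 LPM five times
readings = [50.02, 49.98, 50.01, 49.99, 50.00]
spread = repeatability_spread(readings)
print(f"Spread: {spread:.4f} LPM")  # a small spread indicates good repeatability
```

A perfectly repeatable device would return a spread of zero, even if every reading were offset from the true flow, which is exactly why good repeatability alone does not guarantee good accuracy.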

Repeatability is the foundation of accuracy and is often the most critical element for reliable process control and trending.

Good accuracy requires good repeatability. However, the reverse does not hold: good repeatability does not necessarily mean good accuracy, while poor repeatability always means poor accuracy.

Good repeatability is especially critical in:

  • Semiconductor processing
  • Thin film deposition
  • Analytical and laboratory applications

Linearity

ISA Standard S51.1 (Process Instrumentation Terminology) defines linearity as the deviation of an instrument's calibration curve from a perfect straight line between its zero and span (i.e. Full Scale Flow), essentially measuring how much its output signal varies from a true linear relationship. In today’s digital signal processing world there are two elements to good linearity:

  • The first is the linearity of the raw analog sensor output.

  • The second is the suitability of the curve fit used for the final flow measurement output versus flow rate.

All mass flow controllers and meters exhibit some degree of nonlinearity across their operating range.
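Nonlinearity can be estimated by comparing measured outputs against the straight line between zero and span. A minimal sketch, assuming a terminal-based line through zero and full scale (the calibration points below are hypothetical):

```python
def max_linearity_error_pct_fs(true_flows, outputs, full_scale):
    """Maximum deviation of the output from the zero-to-span straight
    line, expressed as a percent of full scale (terminal-based
    linearity in the sense of ISA S51.1)."""
    worst = 0.0
    for true, out in zip(true_flows, outputs):
        # For a line through (0, 0) and (full_scale, full_scale),
        # the ideal output equals the true flow itself.
        deviation = abs(out - true)
        worst = max(worst, deviation)
    return 100.0 * worst / full_scale

# Hypothetical calibration points for a 100 LPM device
true_flows = [20, 40, 60, 80, 100]
outputs    = [20.1, 40.3, 60.2, 80.1, 100.0]
print(f"Linearity: ±{max_linearity_error_pct_fs(true_flows, outputs, 100):.2f}% FS")
```

In practice, digital devices apply a curve fit to the raw sensor output, so the linearity that matters to the user is that of the final corrected output, not the raw analog signal.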

 
Linearity vs. Repeatability
A device can be highly repeatable but have poor linearity, or vice versa. True accuracy requires both.

How Accuracy is Calculated

Each of the three elements contributes to the overall specified device accuracy. When combined, they form the total accuracy specification:

Accuracy = CMC + Linearity + Repeatability

Some manufacturers separate the CMC from the device accuracy and only include linearity and repeatability. Understanding which of these components are included in a manufacturer’s stated accuracy is essential when comparing devices.
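As a worked illustration of the additive model above (the component values are hypothetical, and as noted, some manufacturers exclude CMC from their stated figure):

```python
def total_accuracy(cmc, linearity, repeatability):
    """Combine the three building blocks into a total accuracy figure
    using the additive model: Accuracy = CMC + Linearity + Repeatability."""
    return cmc + linearity + repeatability

# Hypothetical component values, each as a percent
print(total_accuracy(cmc=0.25, linearity=0.5, repeatability=0.25))  # 1.0

# The same device specified without the CMC term appears "more accurate"
print(total_accuracy(cmc=0.0, linearity=0.5, repeatability=0.25))   # 0.75
```

The two print statements show why two datasheets can quote different numbers for devices of comparable real-world performance: the second figure simply omits a component.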

Understanding Accuracy Terminology

In addition to the accuracy components themselves, the language used in accuracy specifications is also important. Most manufacturers specify flow measurement accuracy in terms of percent of full scale or percent of rate.

Percent of Full Scale versus Percent of Rate

Accuracy is commonly defined using one or both of the following terms:

% of Full Scale (%FS): The error is a fixed percentage of the meter's maximum (full scale) flow capacity, regardless of the actual flow rate. Relative to the reading, this is less accurate at low flows.

Example: For a meter with a 100 LPM (liters per minute) full scale flow rate and ±1.0% FS accuracy, the error is always ±1.0 LPM, whether flowing at 50 LPM or 100 LPM. At 50 LPM, ±1.0 LPM translates to ±2% of rate.
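The example above generalizes to a simple conversion, sketched below, between a fixed %FS error band and its equivalent % of rate at any flow:

```python
def fs_error_as_pct_of_rate(pct_fs, full_scale, flow):
    """Convert a %-of-full-scale accuracy spec into the equivalent
    % of rate at a given flow. The fixed error band grows relative
    to the reading as flow decreases."""
    fixed_error = full_scale * pct_fs / 100.0   # e.g. ±1.0 LPM for ±1% FS of 100 LPM
    return 100.0 * fixed_error / flow

# ±1.0% FS on a 100 LPM meter:
print(fs_error_as_pct_of_rate(1.0, 100, 100))  # 1.0  -> ±1% of rate at full scale
print(fs_error_as_pct_of_rate(1.0, 100, 50))   # 2.0  -> ±2% of rate at 50 LPM
print(fs_error_as_pct_of_rate(1.0, 100, 10))   # 10.0 -> ±10% of rate at 10 LPM
```

At 10% of full scale, the nominal ±1% FS spec has ballooned to ±10% of the actual reading, which is why %FS specifications are misleading for applications that run at low turndown.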


% of Reading (%Rd, %RD, %Rdg): This is also known as % of rate (% rate) or % of setpoint (% s.p.). The error is a percentage of the actual measured flow rate (or setpoint), so it stays proportional to the reading across the entire range. This form is common in demanding applications.

Example: For a 100 LPM (liters per minute) meter, ±1% Rd means a reading of 100 LPM could be off by ±1 LPM (99-101 LPM), while a reading of 50 LPM could be off by 1% of 50 LPM, which is ±0.5 LPM (49.5-50.5 LPM).

Many manufacturers use a split accuracy statement. This means that one statement applies for part of the device range and a second statement applies for a different part of the device range. For example, the Brooks SLA Series specifies a split statement:

±0.9% of Setpoint (20–100% Full Scale)
±0.18% of Full Scale (<20% Full Scale)

This means tighter accuracy is maintained over most of the operating range, with a different specification applied at lower flow rates.
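A split statement like the SLA Series example can be expressed as a simple piecewise rule. The sketch below uses the ±0.9% / ±0.18% figures quoted above; everything else (function name, units) is illustrative:

```python
def sla_error_band(setpoint, full_scale):
    """Error band (in flow units) for a split accuracy statement:
    ±0.9% of setpoint from 20-100% of full scale,
    ±0.18% of full scale below 20% of full scale."""
    if setpoint / full_scale >= 0.20:
        return 0.009 * setpoint       # % of setpoint term
    return 0.0018 * full_scale        # % of full scale term

# 100 LPM device:
print(f"{sla_error_band(100, 100):.2f} LPM")  # band at full scale
print(f"{sla_error_band(20, 100):.2f} LPM")   # band at the 20% breakpoint
print(f"{sla_error_band(10, 100):.2f} LPM")   # band below the breakpoint
```

Note that the two terms meet smoothly at the breakpoint: 0.9% of a 20 LPM setpoint equals 0.18% of the 100 LPM full scale, i.e. ±0.18 LPM either way.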

When comparing MFCs, it is essential to confirm:

  1. Which reference is used
  2. What flow range the specification applies to
  3. Which accuracy elements are included

Making Accuracy Comparisons

Some manufacturers include all three accuracy elements in their specifications, while others do not. Understanding what is—and is not—included is critical when evaluating accuracy and comparing devices across suppliers.

Beyond published accuracy specifications, real-world process accuracy is also influenced by:

  • Long-term stability
  • Gas correction factors
  • Temperature and pressure coefficients
  • Differences between calibration and process conditions
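Gas correction factors, for instance, scale a thermal device's indicated flow when the process gas differs from the calibration gas. A minimal sketch; the factor value below is a placeholder for illustration only, not a published Brooks value:

```python
def apply_gas_correction(indicated_flow, gas_correction_factor):
    """Scale the flow indicated by a device calibrated on a reference
    gas (commonly nitrogen) to estimate the actual flow of a different
    process gas."""
    return indicated_flow * gas_correction_factor

# Placeholder correction factor; consult the manufacturer's published
# correction factors for real gases.
print(f"{apply_gas_correction(100.0, 0.72):.1f}")
```

Because such factors are themselves approximations, they add uncertainty beyond the datasheet accuracy whenever the process gas differs from the calibration gas.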
 
Common Comparison Mistake
Comparing “±1% of full scale” from one manufacturer to “±1% of rate” from another without understanding what’s included can lead to incorrect device selection. 


Summary

Understanding mass flow controller and mass flow meter accuracy requires more than reading a single number on a datasheet. By evaluating the following, you can make informed, apples-to-apples comparisons and select the right device for your application:

  • Calibration and Measurement Capability
  • Repeatability
  • Linearity
  • Accuracy terminology and operating range

For help selecting the most accurate mass flow controller or mass flow meter for your process, contact your local Brooks Instrument representative.

FAQs

What is mass flow controller accuracy?

Mass flow controller accuracy is a measure of how closely the device’s output matches the true flow value, accounting for calibration uncertainty, repeatability, and linearity.

What does CMC mean in calibration?

CMC (Calibration and Measurement Capability) quantifies the uncertainty of the calibration system itself and defines the best achievable accuracy during calibration.

Do all manufacturers calculate accuracy the same way?

No. Some include CMC, repeatability, and linearity, while others exclude one or more components. Always verify what is included in an accuracy specification.

Why does accuracy change at low flow rates?

At lower flow rates, signal-to-noise ratios decrease and nonlinear effects become more significant, often resulting in wider accuracy bands.