The objective of this article is to explain the trade-off between analog-to-digital (A/D) sampling speed and accuracy, and the difference between accuracy and resolution.
One specification important to almost every data acquisition application is measurement accuracy. Studies of data acquisition equipment users show that they consider the accuracy and integrity of the data to be the most important aspect of such systems.
Unfortunately, accuracy is often misunderstood or misinterpreted by users, and often exploited by instrument manufacturers. This confusion has developed because of the many ways that accuracy can be stated, and the tendency to equate resolution and accuracy in digital instruments.
It is important to consider the effect of an A/D converter’s sampling speed on accuracy: higher speed usually means more random noise in your data. With a faster instrument, you may not be able to recover the accuracy your application requires through either analog or digital filtering.
And don’t confuse accuracy and resolution. The resolution you buy in an A/D may not contribute anything to the usefulness of your data.

The Importance of Accuracy
In power utility applications, the effect of measurement accuracy is clear and dramatic. For example, in a fossil fuel power plant, about 60 parameters are required to calculate boiler efficiency, turbine heat rate, operator controllable losses, and net plant heat rate (a measure of fuel efficiency in a power plant).
In a coal-fired power unit, improving the accuracy of the main steam temperature measurement from 1.1% to 0.1% reduces the uncertainty in the net plant heat rate by 3.44%. A decrease of just a few percentage points in uncertainty can result in annual savings of hundreds of thousands of dollars in coal costs for a typical 500-megawatt plant.
Although your application may not realize such dramatic results, your understanding of the effects of accuracy does affect your bottom line. For your measurements and the conclusions drawn from them to be reliable in the long run, they must be traceable to some standard. Even if their usefulness is confined to a short time span, accuracy is important so you can have confidence in the decisions you make based on those measurements. If there are significant random and systematic errors in your data, then you may well have wasted the time taken to set up and make the measurements.
Accuracy, Speed, and Noise
Accuracy and speed tend to be at odds in electrical measurements. The faster you measure with a particular A/D converter, the more noise becomes a significant part of the final reading. In general, the introduction of noise suppression limits the useful speed of the instrument.
Noise emanates from components of the A/D itself, as well as from external sources such as electromagnetic interference (EMI) radiated from power mains. Sources of noise in a measurement instrument can be broadly classified as conducted noise or radiated noise. These broad classifications are useful because different suppression techniques are used to reduce the effects of each noise type.
Conducted noise travels into the instrument by way of sensor leads, power lines and other physical paths leading into the instrument. Conducted noise is eliminated by filtering. A filter may be additional circuitry or it may be incorporated into the design of the A/D itself, as is the case with a dual-slope integrating A/D.
A basic assumption when designing a filter is that the noise frequency can be distinguished from the frequency of the signal to be measured. A low-pass filter is commonly used to eliminate high-frequency noise. The introduction of a low-pass filter effectively limits the useful speed of the instrument because any signal changing faster than the cut-off frequency of the filter is significantly attenuated.
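To make the idea concrete, a low-pass filter can also be sketched in software. The following minimal Python sketch implements a single-pole (first-order) low-pass filter, the software analog of a simple RC filter; the cutoff frequency and sample rate are arbitrary values chosen for illustration, not parameters of any particular instrument.

```python
import math

def first_order_lowpass(samples, f_cutoff_hz, f_sample_hz):
    """Single-pole IIR low-pass filter (a software analog of a simple RC filter).

    Signal components changing much faster than f_cutoff_hz are strongly
    attenuated, which is why the filter limits the useful measurement speed.
    """
    if not samples:
        return []
    # Smoothing coefficient derived from the equivalent RC time constant.
    rc = 1.0 / (2.0 * math.pi * f_cutoff_hz)
    dt = 1.0 / f_sample_hz
    alpha = dt / (rc + dt)

    filtered = []
    y = samples[0]
    for x in samples:
        y = y + alpha * (x - y)   # output moves only a fraction of the way toward x
        filtered.append(y)
    return filtered
```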
The low-pass filter may be a stand-alone circuit, or it may be inherent in the design of the A/D. For instance, the basic design of the dual-slope A/D converter allows it to easily eliminate “periodic” noise. Often it is assumed that the primary source of conducted noise is at power-line frequencies. In this case, the noise is eliminated by time averaging (integrating) over an integral multiple of 1/50- or 1/60-second periods, depending upon which country you live in.
The average value of the periodic noise over this period is zero. Note that this also limits the A/D speed by requiring one measurement to consume at least one power-line cycle of time. As a practical matter, useful noise reduction often requires the integration period to be several power-line cycles.
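A quick numerical sketch shows why this works. The line frequency, sample rate, and noise amplitude below are assumed values for illustration; the average of a sine wave over a whole number of its periods is (nearly) zero, so only the DC signal survives.

```python
import math

F_LINE = 60.0          # power-line frequency in Hz (50 Hz in many countries)
F_SAMPLE = 100_000.0   # assumed sample rate used to approximate the integral

def mean_over_cycles(n_cycles, noise_amplitude=0.5, dc_value=1.0):
    """Average a DC signal plus mains-frequency noise over n whole line cycles."""
    n_samples = int(n_cycles * F_SAMPLE / F_LINE)
    total = 0.0
    for i in range(n_samples):
        t = i / F_SAMPLE
        total += dc_value + noise_amplitude * math.sin(2 * math.pi * F_LINE * t)
    return total / n_samples

print(mean_over_cycles(1))   # ~1.0: the mains component averages out to ~zero
print(mean_over_cycles(3))   # integrating over more whole cycles reduces the residual further
```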
Accuracy and Resolution
A common belief, often encouraged by instrument manufacturers, is that resolution and accuracy are the same.
There is a strong tendency to believe this because the resolution of an instrument can be expressed in the same units as accuracy and the resolution specification always looks better than the accuracy specification.
Consider the following statement by a major instrument supplier: “. . .A 12-bit system delivers accuracy of one part in 4096, and while 32-bit resolution is more accurate, there are few applications that need to be accurate to one part in 4,294,967,296.” This statement seems to imply a stronger relationship between resolution and accuracy than there really is.
It is true that 2¹² = 4096, but the accuracy of an A/D having this number of bits cannot be one part in 4096. In fact, you’ll find a measurement uncertainty 20 times greater than the resolution to be more typical after all error components are summed. One important reason is that the calibration process always leaves at least one bit of error.
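A back-of-the-envelope calculation makes the gap concrete. The 10 V full-scale range below is an assumed value for illustration, and the 20x factor is the rule of thumb cited above:

```python
FULL_SCALE_V = 10.0   # assumed full-scale input range (for illustration only)
BITS = 12

lsb = FULL_SCALE_V / 2**BITS      # resolution: one part in 4096, ~2.44 mV
typical_uncertainty = 20 * lsb    # rule-of-thumb total error, ~48.8 mV

print(f"Resolution (1 LSB):  {lsb * 1e3:.2f} mV")
print(f"Typical uncertainty: {typical_uncertainty * 1e3:.1f} mV")
```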
Sometimes you may only be interested in the repeatability of the measurements. In process control, for instance, an operator may know from experience that when the displayed value is 3.87, the process is producing an excellent product, but that if this value varies by more than 0.10 from 3.87, the resulting product is not acceptable. In this instance, it is important that the operator be sure that when a change of 0.10 occurs, it is due to the process and not to the instrument’s temperature drift, time drift, or short-term repeatability.
Now suppose that a new instrument is put in place or that the whole process needs to be replicated elsewhere. In this case, the absolute accuracy is important. If the instrument has reasonable accuracy, then you are assured of measurements that are not only repeatable from measurement to measurement but also from instrument to instrument.
The absolute accuracy (∆V, see figure below) is important when the measurements are to be compared with measurements from other processes or with standards.
If the data is to provide useful comparisons, it must be traceable to some standard, e.g. the standards maintained by the National Institute of Standards & Technology (NIST).
The graph below shows the effect of averaging over a large number of samples. This type of filtering reduces the noise component to a small amount but cannot improve the basic accuracy of the instrument.

The effect of averaging readings in order to reduce noise in measurements.
Assumptions:
- N measurements are taken sequentially.
- The measured parameter is a DC voltage source whose value does not change over the N measurements.
- The noise present has a normal distribution.
This graph shows “displayed” values (Vdisplayed) versus time. The X’s in this diagram indicate the displayed value at each measurement. The dashed horizontal lines are chosen such that there is a 99.9% chance that the displayed values during this sequence of measurements will fall between them. This is the effective resolution of the instrument in the presence of noise.
Now suppose that instead of displaying raw readings, we display the average of N consecutive readings. The upper and lower curved solid lines show how the bounds of the displayed values tighten up as N increases; in other words, the effective resolution is better. These lines approach a fixed display value (VA) as N grows larger.
VA will differ from the NIST-defined value of the voltage being measured (VNIST) by a certain amount, ∆V, that depends upon the instrument, the magnitude of the input, and other factors such as time and temperature drift. ∆V is inherent in the A/D and no amount of filtering will remove it. Resolution has improved, but only at the expense of speed, since each displayed value now takes the time required to make N consecutive measurements.
∆V (the difference between VA and VNIST) is the basic inaccuracy of the instrument. Two characteristics of ∆V that are of most interest are its stability (over time and temperature) and absolute accuracy.
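The behavior described above is easy to reproduce in simulation. In this sketch the true value, the instrument offset ∆V, and the noise level are all made-up numbers; the point is that averaging shrinks the random spread by roughly the square root of N, while the fixed error ∆V survives untouched:

```python
import random
import statistics

V_NIST = 5.000      # assumed "true" value of the source
DELTA_V = 0.020     # assumed fixed instrument error (inherent in the A/D)
NOISE_SD = 0.050    # assumed standard deviation of normally distributed noise

def displayed_value(n):
    """Average n consecutive readings, each corrupted by the offset and noise."""
    readings = [V_NIST + DELTA_V + random.gauss(0, NOISE_SD) for _ in range(n)]
    return statistics.mean(readings)

for n in (1, 10, 100, 1000):
    trials = [displayed_value(n) for _ in range(200)]
    print(f"N={n:5d}: mean={statistics.mean(trials):.4f}, "
          f"spread={statistics.stdev(trials):.4f}")
# The spread shrinks ~1/sqrt(N), but the mean converges to V_NIST + DELTA_V,
# not to V_NIST: no amount of averaging removes the basic inaccuracy ∆V.
```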
The Effect of Noise Filtering on Accuracy
A usable reading requires both the elimination of noise and a measurement that has the accuracy required by the application. While noise can be reduced to an acceptable level using either hardware or software filtering, accuracy is achieved by careful analog design utilizing a judicious choice of components.
There is a tendency to expect filtering to achieve the required accuracy of the measurement. You might choose a fast but less accurate A/D converter so that the speed is available when necessary. In this case, when more accurate measurements are required, users tend to simply average several readings. This doesn’t work!
As the figure above shows, averaging certainly reduces errors due to random noise, but the reading still reflects the basic inaccuracy inherent in the measurement hardware. As always, it is necessary to determine the accuracy required for your application, and then ensure that your hardware does the job.
Glossary
Accuracy: Accuracy is an expression relating the difference between an indicated value and an accepted standard (the “true value”). In the case of thermocouples, the accepted standards are the DC voltage standard, and thermocouple reference tables maintained by various standards groups, such as BIPM (France), DIN (Germany), NIST (US), and NRC (Canada).
Calibration accuracy: This is the accuracy of the instrument immediately following calibration, before any conditions change. It is often called 24-hour accuracy.
Total instrument accuracy: Total instrument accuracy is a statement of the maximum operating error that could be expected under worst-case conditions. All known error terms, and effects of drift with time and ambient temperature are incorporated. This is distinguished from a run-of-the-mill accuracy specification because it is complete with all relevant contributions to measurement error, rather than simply stating the calibration accuracy.
Fluke has made a concerted effort to create a clear, concise statement of accuracy. For all Fluke temperature measurement devices, for instance, accuracy is published as a maximum instrument error in degrees Celsius or Fahrenheit for a usable range of operating conditions. All time and temperature-dependent error terms that affect a data acquisition instrument during usage are considered in this specification.
Repeatability: Repeatability is an expression quantifying an instrument’s ability to reproduce a reading of the same signal under the same conditions at different times. This assumes each reading is made within a relatively short time span, say, the 24-hour specification period. Factors that affect repeatability are noise inherent in the design of the analog-to-digital converter, and hysteresis from sources such as dielectric absorption of capacitors.
Resolution: Resolution is the incremental input signal that yields the smallest distinguishable reading or output. In digital instruments, this is the least significant digit.