Category Archives: Data Acquisition Accuracy

Quick Lesson in Non-Linearity

As its name implies, non-linearity is the difference between the graph of the measured reading versus actual input voltage and the straight line of an “ideal” measurement. The non-linearity error is composed of two components: integral non-linearity (INL) and differential non-linearity (DNL). Of the two, integral non-linearity is typically the specification of importance in most data acquisition (DAQ) systems.

INL is the maximum deviation between the ideal output of a DAC and the actual output level (after offset and gain errors have been removed).

INL: The specification is commonly provided in “bits” and describes the maximum error contribution due to the deviation of the voltage versus reading curve from a straight line. Though a somewhat difficult concept to describe textually, INL is easily described graphically and is depicted in Figure 4. Depending on the type of A/D converter used, the INL specification can range from less than 1 LSB to many, or even tens, of LSBs.
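
To put an INL figure in volts rather than LSBs, multiply it by the size of one code step. The short sketch below illustrates this; the 16-bit resolution, ±10 V range, and 2 LSB INL are hypothetical values chosen for the example, not specifications from the text.

```python
# Convert an INL specification quoted in LSBs into a worst-case voltage error.
# All values below are illustrative assumptions, not specs from the article.
bits = 16                    # converter resolution (hypothetical)
full_scale_range = 20.0      # ±10 V input span, in volts (hypothetical)
inl_lsb = 2.0                # INL specification in LSBs (hypothetical)

lsb_volts = full_scale_range / 2**bits   # size of one code step
inl_volts = inl_lsb * lsb_volts          # worst-case deviation from the ideal line

print(f"1 LSB = {lsb_volts * 1e6:.1f} µV")
print(f"INL of {inl_lsb:.0f} LSB = {inl_volts * 1e6:.1f} µV worst-case error")
```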

DNL: Differential non-linearity describes the variation (“jitter”) in the input voltage change required for the A/D converter output to increase (or decrease) by one bit. The output of an ideal A/D converter will increment (or decrement) one LSB each time the input voltage increases (or decreases) by an amount exactly equal to the system resolution.

DNL is the deviation between two analog values corresponding to adjacent input digital values.

For example, in a 24-bit system with a 10-volt input range, the resolution per bit is 0.596 microvolts. Real A/D converters, however, are not ideal, and the voltage change required to increase or decrease the digital output varies. DNL is typically ±1 LSB or less. A DNL specification greater than ±1 LSB indicates that “missing” codes are possible. Though not as problematic as a non-monotonic D/A converter, A/D missing codes do compromise measurement accuracy.
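
The 0.596 microvolt figure follows directly from dividing the input range by the number of output codes. A minimal sketch, assuming a unipolar 10-volt span and an ideal 24-bit converter:

```python
# Reproduce the per-bit resolution quoted above for a 24-bit, 10 V input range.
bits = 24
input_range_volts = 10.0

codes = 2**bits                           # number of distinct output codes
lsb_volts = input_range_volts / codes     # ideal step size per code

print(f"Resolution per bit: {lsb_volts * 1e6:.3f} µV")   # prints ~0.596 µV
```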

Check out UEI’s Master Class Videos on YouTube.

Linearity and Noise Errors in DAQ & Control Systems – Part 3

Non-Linearity:
As its name implies, non-linearity is the difference between the graph of the measured reading versus actual input voltage and the straight line of an ideal measurement. The non-linearity error is composed of two components: integral non-linearity (INL) and differential non-linearity (DNL). Of the two, integral non-linearity is typically the specification of importance in most DAQ systems. The INL specification is commonly provided in “bits” and describes the maximum error contribution due to the deviation of the voltage versus reading curve from a straight line. Though a somewhat difficult concept to describe textually, INL is easily described graphically and is depicted in the Figure. Depending on the type of A/D converter used, the INL specification can range from less than 1 LSB to many, or even tens, of LSBs.

Differential non-linearity describes the variation (“jitter”) in the input voltage change required for the A/D converter output to increase (or decrease) by one bit. The output of an ideal A/D converter will increment (or decrement) one LSB each time the input voltage increases (or decreases) by an amount exactly equal to the system resolution. For example, in a 24-bit system with a 10-volt input range, the resolution per bit is 0.596 microvolts. Real A/D converters, however, are not ideal, and the voltage change required to increase or decrease the digital output varies.
DNL is typically ±1 LSB or less. A DNL specification greater than ±1 LSB indicates that “missing” codes are possible. Though not as problematic as a non-monotonic D/A converter, A/D missing codes can compromise measurement accuracy.
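
One practical way to look for missing codes is to digitize a slow full-scale ramp and check which output codes never appear in the record. The sketch below shows the idea; the 12-bit resolution is a hypothetical value, and the ramp is simulated only so the example runs stand-alone (in a real test the codes would come from the A/D converter itself).

```python
import numpy as np

# Check a digitized ramp for missing output codes (a symptom of DNL beyond ±1 LSB).
bits = 12                                     # hypothetical converter resolution
ramp = np.linspace(0.0, 1.0, 500_000)         # slow full-scale ramp, normalized 0..1
recorded_codes = np.clip((ramp * 2**bits).astype(int), 0, 2**bits - 1)

hist = np.bincount(recorded_codes, minlength=2**bits)
missing_codes = np.flatnonzero(hist == 0)     # codes that never appeared in the record

print(f"{missing_codes.size} of {2**bits} codes never appeared")
```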

Noise:
Noise is an ever-present error in all DAQ systems. Much of the noise in most systems is generated externally to the DAQ system and “picked up” in the cabling and field wiring. However, every DAQ system has inherent noise as well. Noise is commonly measured by shorting the inputs at the board or device connector and acquiring a series of samples. An ideal system would return a constant zero reading. In almost all systems, however, the readings will bounce around from sample to sample. The magnitude of this “bounce” is the inherent noise. The noise specification can be provided in either bits or volts, and as either peak-to-peak or Root Mean Square (RMS). The key consideration with noise is to factor it into the overall error calculations. Note that a 16-bit input system with 3 bits RMS of noise will not provide much better than 13-bit accuracy. The three least significant bits will be dominated by noise and will contain very little useful information unless many samples are taken and the noise is averaged out.
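
The shorted-input test described above is easy to reduce to numbers. The sketch below assumes a record of shorted-input readings is available (a synthetic record stands in here) and computes the RMS and peak-to-peak noise along with a rough effective-resolution estimate; the 16-bit resolution and the roughly 3-bit noise level are illustrative assumptions.

```python
import numpy as np

# Estimate inherent noise from shorted-input samples and the resulting effective resolution.
bits = 16                                                 # hypothetical input resolution
rng = np.random.default_rng(0)
shorted_counts = np.round(rng.normal(0.0, 8.0, 10_000))   # synthetic stand-in for real data

rms_counts = np.std(shorted_counts)
pk_pk_counts = shorted_counts.max() - shorted_counts.min()
noise_bits = np.log2(rms_counts)                          # bits dominated by noise (rule of thumb)

print(f"RMS noise: {rms_counts:.1f} counts (~{noise_bits:.1f} bits)")
print(f"Peak-to-peak noise: {pk_pk_counts:.0f} counts")
print(f"Effective resolution ≈ {bits - noise_bits:.1f} bits before averaging")
```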

Calculate the Total Error:
To determine overall system error, simply add the offset, linearity, gain, and noise errors together. Though it can be argued that the offset, linearity, and gain errors are unlikely to all contribute in the same direction, it is certainly risky to assume they will not.

Max Error = Input Offset + Gain Error + Non-Linearity Error + Noise
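
As a worked illustration of the formula, the sketch below sums the four terms for a hypothetical 16-bit, ±10 V input channel. Every specification value here is an assumption chosen for the example, not a figure from the text.

```python
# Worst-case error budget for a hypothetical 16-bit, ±10 V input channel.
# All specification values below are illustrative assumptions.
bits = 16
full_scale_range = 20.0                      # ±10 V span, in volts
lsb = full_scale_range / 2**bits             # ≈ 305 µV per LSB

input_offset = 2 * lsb                       # offset error of 2 LSB (hypothetical)
gain_error = 0.0001 * full_scale_range       # 0.01% of range (hypothetical)
non_linearity = 1 * lsb                      # INL of 1 LSB (hypothetical)
noise = 3 * lsb                              # peak-to-peak noise of 3 LSB (hypothetical)

max_error = input_offset + gain_error + non_linearity + noise
print(f"Max error ≈ {max_error * 1e6:.0f} µV ({max_error / lsb:.1f} LSB)")
```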

One final note… in most systems, Input Offset, Gain Error, and Non-Linearity all vary over time, and in particular, over temperature. If you require a very accurate measurement and your DAQ system will be subject to extreme temperature fluctuations, be sure to consider the errors caused by temperature change in your calculations.

Click here to view Part 1 or Part 2 and let us know if you have any questions.


Data Acquisition Sample Rate Considerations

[Figure 6]

Always be certain to examine your analog input systems carefully and determine whether the sample rate specification really meets your needs. Many multi-channel DAQ input boards use a multiplexer connected to a single A/D converter. Most data sheets will specify the total sample rate of the board or system and leave you to calculate the “per channel” sample rate. Take, for example, a 100 kilosample per second (kS/s), 8-channel, analog-to-digital (A/D) board. It will most certainly sample one channel at 100 kS/s. But if two or more channels are used, the 100 kS/s is shared: two channels may be sampled at 50 kS/s (max) each, and similarly, five channels may be sampled at 20 kS/s each. If the data sheet does not specify the sample rate as “per channel,” assume that the sample rate must be divided among all of the channels sampled.
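
A quick calculation makes the point concrete. The sketch below simply divides the aggregate rate across the channels actually sampled, using the 100 kS/s board from the example above.

```python
# Per-channel sample rate for a multiplexed board that shares one A/D converter.
aggregate_rate_ks = 100.0   # total board rate from the data sheet, in kS/s

for channels in (1, 2, 5, 8):
    per_channel = aggregate_rate_ks / channels
    print(f"{channels} channel(s): {per_channel:.1f} kS/s per channel (max)")
```
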
This becomes important when two or more input signals contain widely varying frequency content. For example, an automotive test system may need to monitor vibration at 20 kS/s and temperature at 1 S/s. If the analog inputs can only be sampled at a single rate, the system will be forced to sample temperature at 20 kS/s and will waste a great deal of memory/disk space on the 19,999 temperature samples per second that aren’t needed. Some systems, including all of UEI’s “Cube”-based products, allow inputs to be sampled at different rates, while products from many vendors do not.
Another sampling rate concern is the need to sample fast enough, or to provide filtering, to prevent aliasing. If the input signal contains frequency components higher than half the sample rate (the Nyquist frequency), there is a risk of aliasing errors. Without going into the mathematics of aliasing, let’s just say that these higher-frequency components will manifest themselves as low-frequency errors. The accompanying Figure provides a graphical representation of the aliasing phenomenon. A visual example of aliasing can be seen in video where the blades of a helicopter or the spokes of a wheel appear to be moving slowly and/or backwards. In the movies it doesn’t matter, but if the same phenomenon appears in the measured input signal, it’s a critical error!
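
Without working through the mathematics, a one-line calculation shows where an under-sampled tone lands: fold the signal frequency back into the band between 0 and half the sample rate. The frequencies below are illustrative, not from the text.

```python
# Where an under-sampled tone appears after aliasing.
sample_rate_hz = 1_000.0   # hypothetical sample rate
signal_hz = 900.0          # hypothetical input frequency above half the sample rate

# Fold the signal frequency back into the 0 .. sample_rate/2 band.
alias_hz = abs(signal_hz - round(signal_hz / sample_rate_hz) * sample_rate_hz)
print(f"A {signal_hz:.0f} Hz tone sampled at {sample_rate_hz:.0f} S/s appears at {alias_hz:.0f} Hz")
```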