Category Archives: Data Acquisition

Is Your VME-based I/O System Going EOL?

Reflective Memory in VME chassis has long been the standard in real-time DAQ and embedded control systems. However, due to the age of most VME systems, as well as the recent consolidation of vendors in the market, many VME users will soon face EOL (end-of-life) issues.

Learn more about moving on from obsolete VME.

Storage Tank Monitoring

The recent spill of methylcyclohexane methanol into the Elk River in West Virginia serves as a reminder that our older storage tank systems need frequent inspections and infrastructure updates. Better monitoring systems can reduce the impact of such spills on our natural resources and the people who depend on them. Fortunately, the distributed nature of United Electronic Industries’ I/O platforms makes them well suited for monitoring temperature, pressure, strain, and thermal expansion in large storage tank facilities.

Storage tanks require rugged monitoring systems like that of UEI’s GigE Cube.

UEI’s platforms are perfectly suited for a variety of applications, including spent nuclear fuel rod containment tanks, petroleum and gas refinery tanks, wastewater treatment chambers, and fluid separation basins that require industrial-strength monitoring systems.

Our PowerDNA Cube, for example, effectively collects measurements taken by thermocouples, pressure transducers, and various strain gauges and load cells. Stand-alone data recorder/logger functionality allows data to be stored locally, ensuring no data is lost even if the network is temporarily disabled. Teaming/bonding support on our new GigE Cube provides simple connections to redundant networks where required. Equally important, our 10-year availability guarantee ensures that environmentally sensitive tank monitoring systems remain easy to maintain and monitor for many years to come.

Real-World Temperature Measurements for Data Acquisition

Temperature is almost certainly the most commonly measured phenomenon in data acquisition. Whether the application is deep beneath the sea, in a data center, on an automobile, or in deep outer space, temperature plays a key role in many systems. The most common temperature sensors are the Thermocouple, the RTD (Resistance Temperature Detector), the Thermistor, and the Semiconductor temperature sensor. Entire books have been written on temperature measurement, and in-depth coverage is far beyond the scope of this blog, but we offer the following in three parts, which should provide enough information for most users in most applications.

Which Sensor to Use?
In many cases, more than one of these sensor types would provide the required results. However, weighing just the following factors will almost always point to a clear favorite for a given application:
• Accuracy / Sensitivity
• Temperature Measurement Range
• Cost
• System Simplicity

Table 2 provides a quick overview of the four most popular temperature sensors.

Table 2. Comparing Various Temperature Sensor Parameters

The Thermocouple (a.k.a. TC) is the workhorse of the temperature measurement world. It offers an excellent blend of accuracy, wide temperature measuring range, affordability, and can be measured with simple inputs. The RTD offers exceptional accuracy, repeatability, and a wide measurement range, but is fairly expensive and somewhat complex to use. Interestingly, thermistors range from very inexpensive, low-accuracy devices all the way to very expensive, high-accuracy units. The thermistor measures temperature over a fairly limited range and is somewhat complex to use. Finally, the semiconductor sensor offers reasonable accuracy, a limited measurement range, and can be monitored with simple systems. Semiconductor sensors are also very inexpensive.

UEIDAQ.com | Facebook | LinkedIn | Youtube

Strung Out on String Pots?

String pots (i.e., string potentiometers) are designed to measure linear displacement. They are typically lower cost than LVDTs and can offer much longer measurement distances. As their name implies, the basis for a string pot is a string or cable and a potentiometer. The string, tensioned by a spring, is attached to the potentiometer’s wiper, and as the string is pulled, the potentiometer’s resistance changes.

How a String Potentiometer Works

The string pot provides a calibration factor that describes the displacement represented by a given percentage of resistance change. Because a string pot is a simple variable-resistance device with a linear output, it interfaces easily to standard A/D boards.

The most common connection configuration connects a voltage reference to one side of the string pot, with the other side connected to ground. The “wiper” is then connected to an A/D input channel. With the string completely retracted, the measured voltage will equal either the reference voltage or zero. With the string completely extended, the measured voltage will be the opposite (either zero or the reference voltage). At any intermediate extension, the measured voltage will be proportional to the percentage of string “out”.
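Converting the wiper voltage into displacement is then a simple ratio. A minimal sketch, assuming a hypothetical 5 V reference and a pot with 1000 mm of travel at full extension (neither value comes from any particular datasheet):

```python
V_REF = 5.0             # excitation/reference voltage (volts) - assumed value
FULL_SCALE_MM = 1000.0  # string travel at full extension (mm) - assumed value

def stringpot_displacement(v_wiper: float) -> float:
    """Displacement is proportional to the fraction of V_REF seen at the wiper."""
    return (v_wiper / V_REF) * FULL_SCALE_MM

# A wiper reading of 1.25 V corresponds to 25% extension, i.e. 250 mm.
print(stringpot_displacement(1.25))
```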

Be sure your voltage reference has the output current capacity to drive the string pot resistance. Your measurement will be in error by the same percentage as any voltage reference error. In some cases, it may be beneficial to drive the string pot with a higher capacity, lower accuracy voltage source.

Should you require higher accuracy than the voltage source provides, try dedicating an A/D channel to measuring the voltage source directly. This makes the system virtually immune to errors in the voltage source. Note that string pots are single-ended, isolated devices. When connecting a string pot to a differential input, be sure to connect the string pot/reference ground to the A/D channel’s low or “-” input. Failing to make this connection will likely cause unreliable and even “odd” behavior as the “-” terminal floats in and out of the input amplifier’s common-mode range.
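The ratiometric trick above can be sketched as follows. Because the displacement is computed against the measured reference, any drift in the reference cancels out (the sagged 4.9 V reference below is a made-up illustration):

```python
def ratiometric_displacement(v_wiper: float, v_ref_measured: float,
                             full_scale_mm: float = 1000.0) -> float:
    """Displacement computed against the *measured* reference voltage,
    so reference drift cancels out of the result."""
    return (v_wiper / v_ref_measured) * full_scale_mm

# Even if the reference has sagged to 4.9 V, a 1.225 V wiper reading
# still resolves correctly to 25% extension (250 mm).
print(ratiometric_displacement(1.225, 4.9))
```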

Need engineering help configuring your I/O? Contact the experts at UEI.


What’s the difference between a Synchro and Resolver?

A few readers have asked us to explain the difference between a Synchro and Resolver…

Synchros and resolvers aren’t all that different from electric motors. They share the same rotor, stator, and shaft components. The primary difference is that a synchro has three stator windings installed at 120-degree offsets, while a resolver has two stator windings installed 90 degrees apart.

Resolver Rotor Excitation and Response

To monitor rotation with a synchro or resolver, the data acquisition system needs to provide an AC excitation signal and an analog input capable of digitizing the corresponding AC output. Though it is possible to create such a system using standard analog input and output devices, it is a fairly complicated process, so most engineers opt for a dedicated synchro/resolver interface. These DAQ products not only provide appropriate signal conditioning, they also do most of the calculations required to turn the analog input into rotational information. It is always a good idea to check the software support of any synchro/resolver interface to ensure that it provides results in a format you can use.
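The core of that rotational calculation for a resolver is a two-argument arctangent of the stator amplitudes. A minimal sketch, assuming the sine and cosine outputs have already been demodulated from the excitation carrier:

```python
import math

def resolver_angle_deg(v_sin: float, v_cos: float) -> float:
    """Shaft angle (degrees, 0-360) from demodulated resolver stator outputs."""
    return math.degrees(math.atan2(v_sin, v_cos)) % 360.0

# Equal sine and cosine amplitudes correspond to a 45-degree shaft angle.
print(resolver_angle_deg(0.5, 0.5))
```

A dedicated interface performs this (plus the demodulation and scaling) in hardware or firmware, which is why it is the usual choice.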

Most synchros and resolvers require an excitation of roughly 26 Vrms at either 60 or 400 Hz, but it is important to check the requirements of the actual device you are using. Some units require 120 Vrms (and provide correspondingly large outputs, so be careful). Also, some synchro/resolver devices, particularly those used in applications where rotational speed is high, require higher excitation frequencies, though you will seldom see a system requiring anything above a few kilohertz.

Finally, some synchro/resolver interfaces, such as UEI’s DNx-AI-256, provide the ability to use the excitation outputs as simulated synchro/resolver signals. This capability is very helpful in developing aircraft or ground vehicle simulators, as well as for testing and calibrating synchro/resolver interfaces without requiring installation of actual hardware.

Note: In some applications the synchro/resolver excitation is provided by the DUT itself. In such cases, it is important to make sure that your DAQ interface is capable of synchronizing to the external excitation. This is typically accomplished by using an additional analog input channel.

Grappling with Sampling?

Simultaneous sampling is somewhat of a misnomer. Samples can never be truly simultaneous, as there’s always a certain skew between them. However, sampling skews can generally be reduced to levels low enough that they are considered insignificant to the application. The error or skew between samples is commonly referred to as the aperture uncertainty and is typically measured in nanoseconds (ns). As an example, the 4-channel, 250 kHz DNA-AI-205 offers a maximum aperture uncertainty of 30 ns.
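Whether a given aperture uncertainty matters depends on your signal frequency: the skew translates into a channel-to-channel phase error of 360 · f · Δt degrees. A quick sanity check:

```python
def skew_phase_error_deg(signal_freq_hz: float, skew_s: float) -> float:
    """Worst-case phase error (degrees) between channels due to sampling skew."""
    return 360.0 * signal_freq_hz * skew_s

# A 30 ns aperture uncertainty on a 10 kHz signal is only ~0.1 degree
# of channel-to-channel phase error.
print(skew_phase_error_deg(10_000, 30e-9))
```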

There are two common ways to achieve simultaneous sampling. The first is to simply place a separate analog-to-digital (A/D) converter on each channel. All the converters are triggered by the same signal and thus sample their channels simultaneously.

The second is to place a device called a sample & hold (S&H) or track & hold (T&H) on each input. In “sample” mode, the device behaves like a simple unity-gain amplifier: whatever signal is provided at the input is also provided at the output. However, when commanded to “hold”, the S&H effectively freezes its output at that instant and maintains that output voltage until released back into sample mode. Once the inputs have been placed into hold mode, the multiplexed A/D system samples the desired channels. Because the signals were all “held” at the same instant, the A/D readings are simultaneous samples. Either approach should provide good results.

Click here for more information about Sampling Rates: How Fast is Fast Enough?


The Twists of Strain Gauge Measurements – Part 1

The strain gauge is one of the most commonly used sensors in data acquisition systems. It’s used to determine how much an object expands, contracts, or twists. Strain is also frequently measured as an intermediate means of measuring stress, where stress is the force per unit area that induces the strain.

Perhaps the most common examples of this translated measurement are load cells, where the strain of a well-characterized metallic bar is measured, though the actual output scale factor of the cell is in units of force (e.g. pounds or newtons). The stress/strain relationship is already known for most commercial materials, making the conversion from strain to stress a straightforward mathematical calculation. Making matters easier still, for virtually all metals the relationship between stress and strain, when the stress is applied in pure tension or compression, is linear. This linear relationship is referred to as Hooke’s law, while the coefficient that describes it is commonly referred to as either the modulus of elasticity or Young’s modulus.
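In other words, σ = E · ε. A minimal sketch of the conversion (the 200 GPa modulus is a typical textbook value for steel, not a property of any particular load cell):

```python
E_STEEL_PA = 200e9  # Young's modulus of steel (Pa) - typical textbook value

def stress_from_strain(strain: float, modulus_pa: float = E_STEEL_PA) -> float:
    """Hooke's law: stress (Pa) = modulus (Pa) * strain (dimensionless)."""
    return modulus_pa * strain

# 1000 microstrain in steel corresponds to roughly 200 MPa of stress.
print(stress_from_strain(1000e-6))
```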

Strain Gauge Figure

Whether stress or strain is the actual measurement of interest, the mechanics of the strain gauge and the electronics required to make the measurement are virtually identical. To create a simple strain gauge, you need only attach a length of wire to the object being strained. If attached in line with the strain, then as the object lengthens under tension, the wire too is lengthened. As the wire length increases, so does its resistance. On the other hand, if the strained object is compressed, the length of the wire decreases, and there is a corresponding change in the wire’s resistance. Measure the resistance change and you have an indication of the strain of your object. Of course, the scale factor needed to convert the resistance change into strain would have to be determined somehow, and that would not be a trivial process. Also, the resistance change for a small strain change would be minuscule, making the measurement a difficult one.

Today’s strain gauge manufacturers have solved both the scale factor and, to a certain extent, the magnitude of resistance change issues. To increase the output (resistance change) per unit of strain, today’s strain gauges are typically created by placing multiple “wires” in a zig-zag configuration (see Figure). A strain gauge with 10 zigs and 10 zags would effectively increase the output scale factor by a factor of 20 over the single wire example. For a simple application, all you need to do is align the strain gauge so the “long” elements are parallel to the direction of the strain to be measured, and affix the gauge with an appropriate adhesive.
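Commercial gauges specify this scale factor as a “gauge factor” (GF), defined by ΔR/R = GF · ε. A sketch of the resistance-to-strain conversion, assuming a typical metal-foil GF of 2.0 and a common 350 Ω nominal resistance (both illustrative values, not from a specific part):

```python
GAUGE_FACTOR = 2.0     # typical for metal-foil gauges - assumed value
R_NOMINAL_OHM = 350.0  # common nominal gauge resistance - assumed value

def strain_from_resistance(delta_r_ohm: float,
                           r_nominal: float = R_NOMINAL_OHM,
                           gf: float = GAUGE_FACTOR) -> float:
    """Strain (dimensionless) from measured resistance change: eps = (dR/R)/GF."""
    return (delta_r_ohm / r_nominal) / gf

# A 0.7 ohm change on a 350 ohm gauge corresponds to 1000 microstrain.
print(strain_from_resistance(0.7))
```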

As the substrate is stretched in tension, the resistance of the gauge increases. As the gauge is relaxed, the resistance decreases. Strain can thus be deduced by measuring the resistance of the strain gauge. Simple, right? Stay tuned. In Part 2 we’ll discuss some of the complexities involved.


Data Acquisition Sample Rate Considerations

Figure 6. A graphical representation of the aliasing phenomenon.

Always be certain to examine your analog input systems carefully and determine whether the sample rate specification really meets your needs. Many multi-channel DAQ input boards use a multiplexer connected to a single A/D converter. Most data sheets will specify the total sample rate of the board or system and leave you to calculate the “per channel” sample rate. Take, for example, a 100 kilosample per second (kS/s), 8-channel, analog-to-digital (A/D) board. It will most certainly sample one channel at 100 kS/s. But if two or more channels are used, the 100 kS/s may be shared, with each channel sampled at 50 kS/s (max). Similarly, five channels may be sampled at 20 kS/s each. If the data sheet does not specify the sample rate as “per-channel”, assume that the sample rate must be divided among all of the channels sampled.
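The arithmetic is trivial but worth making explicit, since data sheets so often leave it to you:

```python
def per_channel_rate(total_rate_sps: float, n_channels: int) -> float:
    """Max per-channel rate when a multiplexed board divides its aggregate rate."""
    return total_rate_sps / n_channels

# A 100 kS/s board scanning 8 channels delivers only 12.5 kS/s per channel.
print(per_channel_rate(100_000, 8))
```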

This becomes important when two or more input signals contain widely varying frequency content. For example, an automotive test system may need to monitor vibration at 20 kS/s and temperature at 1 S/s. If the analog input samples at only a single rate, the system will be forced to sample temperature at 20 kS/s and will waste a great deal of memory/disk space on the 19,999 temperature samples per second that aren’t needed. Some systems, including all of UEI’s “Cube” based products, allow inputs to be sampled at different rates, while products from many vendors do not.

Another sampling rate concern is the need to sample fast enough, or to provide filtering, to prevent aliasing. If the input signal contains frequencies higher than half the sample rate (the Nyquist frequency), there is a risk of aliasing errors. Without going into the mathematics of aliasing, let’s just say that these higher-frequency components will manifest themselves as a low-frequency error. The accompanying figure provides a graphical representation of the aliasing phenomenon. A visual example of aliasing can be seen in video where the blades of a helicopter or the spokes of a wheel appear to be moving slowly and/or backwards. In the movies it doesn’t matter, but if the same phenomenon appears in the measured input signal, it’s a critical error!
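For an undersampled tone, the apparent (aliased) frequency folds down to the distance from the nearest integer multiple of the sample rate. A quick illustration of that folding, assuming an ideal, unfiltered sampler:

```python
def alias_frequency(f_signal_hz: float, f_sample_hz: float) -> float:
    """Apparent frequency of an undersampled tone: the distance to the
    nearest integer multiple of the sample rate."""
    return abs(f_signal_hz - f_sample_hz * round(f_signal_hz / f_sample_hz))

# A 900 Hz tone sampled at 1 kS/s appears as a 100 Hz signal,
# exactly like the slowly spinning helicopter blades on film.
print(alias_frequency(900.0, 1000.0))
```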