News: Test systems
How do you test the testing process? Automated cross-checking of the test program
Test equipment capability should tell you whether a piece of equipment is capable of performing certain tests or measurements within the framework of its specifications. For simple mechanical components, where a caliper is all that is needed to measure the diameter of a turned part, the parameters for test equipment capability are simple and clear; electronic tests using an in-circuit tester are far more complex. This is because, in addition to the actual testing device (the test system), the adapter and the test program itself have a great effect on test equipment capability, so all three need to be considered together.
These calculations are highly complex, however, because the factors also influence one another. Whether the test system, the adapter and the test program provide the desired testing depth and fault coverage in a stable manner depends on many parameters. Test systems are configurable, i.e. various settings can be changed to achieve the measuring result. Typically, a program generator uses the bill of materials and connection list to perform a circuit analysis and from this creates a test program with all the necessary parameters, such as stimulus, guard points, integration times, delays and Kelvin measurements. Since the result is not always as successful as desired, so-called debugging is used to modify the measurement, i.e. the parameters of the automatically generated test program are adapted and changed until the desired measurement value is delivered in stable form. Such manipulation can certainly be used to force a measurement value without actually having measured anything meaningful.
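To make this concrete, the sketch below models one auto-generated analog test step. The field names (stimulus, guard points, integration time, delay, Kelvin flag) mirror the parameters named above, but the data model and the pass criterion are purely illustrative assumptions, not Digitaltest's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class AnalogTestStep:
    """Hypothetical representation of one auto-generated in-circuit test step."""
    component: str                # reference designator, e.g. "R17"
    nominal: float                # nominal value from the bill of materials
    tolerance: float              # allowed relative deviation, e.g. 0.05 = 5 %
    stimulus_v: float             # stimulus voltage applied to the component
    guard_points: list = field(default_factory=list)  # nodes guarded out
    integration_ms: float = 1.0   # ADC integration time
    delay_ms: float = 0.0         # settling delay before measuring
    kelvin: bool = False          # 4-wire (Kelvin) measurement

    def passes(self, measured: float) -> bool:
        # Debugging typically widens the tolerance or tweaks the other
        # parameters until this returns True -- which is exactly how a
        # measurement can be "forced" without measuring anything meaningful.
        return abs(measured - self.nominal) <= self.tolerance * self.nominal

step = AnalogTestStep("R17", nominal=1000.0, tolerance=0.05,
                      stimulus_v=0.1, guard_points=["N12"], kelvin=True)
print(step.passes(1030.0))  # within 5 % of 1 kOhm -> True
print(step.passes(1200.0))  # 20 % off            -> False
```

The point of the sketch is only that a pass/fail verdict depends entirely on these tunable parameters; nothing in the verdict itself proves that the component was genuinely measured.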
One fatal consequence of this is that a component passes the measurement even though it is incorrectly placed or even missing! This means that the expected fault coverage is correct only in theory. How does one prevent this? By manipulating each individual component, i.e. "unsoldering" it or replacing it with components of different values, and verifying each change with the test program to determine what is actually being detected. Even for a small assembly with 100 components this quickly becomes a time-consuming and error-prone exercise.
Isn't it better to place "incorrect components" on the assembly to provoke a fault and then check whether the complex chain of test system, adapter and test program is capable of detecting it?
Which solution seems to make sense here?
Digitaltest has now developed a procedure that allows other components to be inserted in series or parallel with the measured component during the measurement, so that the nominal value of the test object is changed. If an additional resistor is inserted in parallel with the resistor to be measured, the measuring result should be lower; the reverse applies if we do this with a capacitor. If a whole string of components is inserted, with a measurement taken and evaluated each time, each injection should be reflected in a changed measurement value. If it is not, then we have the famous "ticket" and must assume that a fault in this component would not be detected. We then have the opportunity to change the parameters for this measurement so that faults are detected, or to take the measurement out of the test entirely and cover it by other means.
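The arithmetic behind this check is simple and worth spelling out. A resistor injected in parallel must pull the reading down (two parallel resistors combine as the product over the sum), while parallel capacitances add, so the reading must rise. The helper names below are illustrative, not part of any vendor API:

```python
def parallel_r(r_dut: float, r_injected: float) -> float:
    """Equivalent resistance of the measured resistor with an
    injected parallel resistor: R = (R1 * R2) / (R1 + R2)."""
    return (r_dut * r_injected) / (r_dut + r_injected)

def parallel_c(c_dut: float, c_injected: float) -> float:
    """Parallel capacitances simply add, so the reading increases."""
    return c_dut + c_injected

# 1 kOhm device with 1 kOhm injected in parallel: the reading
# should drop to 500 Ohm.
print(parallel_r(1000.0, 1000.0))   # 500.0
# 100 nF device with 47 nF injected in parallel: the reading
# should rise to about 147 nF.
print(parallel_c(100e-9, 47e-9))
```

If the recorded measurement does not move toward these predicted values when a component is injected, the test step is evidently not measuring the component at all.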
Under the name "FailSim", this procedure is now available in our test systems. A new board carrying a string of resistors and capacitors is plugged into the new AMU05 (Analog Measurement Unit) and can be switched into the measuring bus in series or parallel for each measurement. Processing software then compares the recorded measurement values and evaluates them.
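The evaluation step can be sketched as follows: for each injection, predict the shifted reading and check that the recorded value actually followed the prediction. The function name and the 10 % acceptance band are assumptions for illustration, not the actual FailSim criteria:

```python
def reading_follows(baseline: float, injected: float, expected: float,
                    rel_tol: float = 0.10) -> bool:
    """True if the reading taken with the injected component moved to the
    predicted equivalent value (within rel_tol); False flags a measurement
    that does not react to the injected fault."""
    if abs(expected - baseline) < 1e-12:   # injection predicts no change
        return True
    return abs(injected - expected) <= rel_tol * abs(expected)

# Baseline 1 kOhm; a 1 kOhm parallel injection predicts 500 Ohm.
print(reading_follows(1000.0, 505.0, 500.0))   # reading shifted -> True
# A "forced" measurement that still reports ~1 kOhm is flagged:
print(reading_follows(1000.0, 998.0, 500.0))   # did not shift  -> False
```

A step flagged False is exactly the case described above: its parameters must be re-debugged until injected faults are detected, or the measurement must be covered by other means.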
The result is clear confirmation of stable and reliable measurements that are also capable of finding defects.