In today’s electronics industry, testing and verification play a crucial role in every step of the design cycle, all the way from technology and concept verification to version updates after product release. In general, we can define testing as
- the scientific process of achieving an error-free system/subsystem implementation and
- demonstrating that the system can fulfill the required functionality with the expected performance by accurately modeling its operating conditions.
Testing is also used heavily for quality improvement, preventing early product failures through detailed analysis of manufacturing lines and operating conditions. Beyond improving quality, this provides an effective means of building a good reputation and high customer satisfaction. When used effectively, testing may also give companies a competitive edge through continuous inspection of their design and manufacturing processes. Therefore, many companies invest fortunes in building an efficient infrastructure for testing the performance of their products and consider testing an integral part of the entire design flow.
At the very early stages of a design process, a considerable portion of testing consists of technology and concept verification, which most often includes analytical and/or numerical modeling of systems/subsystems, in addition to experimental demonstrations. In the later stages, the testing process may become very detailed and cumbersome, depending on the complexity of the design and the depth of the required analysis. On the other hand, testing should not introduce excessive engineering costs, as it does not add any functionality to the product. This brings up the idea of “design for test,” which basically means considering the testing and testability of a system/subsystem starting from the concept generation step of a design cycle. An engineer should always keep in mind that no matter how impressive a design is, it is meaningless if its performance cannot be tested and quantified. For this reason, an essential part of every project, and hence of your project, is the test setup used to verify the requirements. The system/subsystem(s) should be designed with efficient ways of testing in mind. In addition, the test setup should be designed so that statistical analysis of performance requirements can be conducted reliably.
Testing can be divided into functional and non-functional testing. In functional testing, a design is examined to determine whether it can perform certain tasks. Non-functional testing, on the other hand, covers a wide range of areas, including performance, mechanical stress and stability, compliance, compatibility, and reliability testing. Throughout this course, you will mostly do performance testing and provide detailed analyses of your design’s performance limits, robustness, and sensitivity to inputs and environmental conditions.
This document provides you with an outline to help you prepare a well-organized and systematic test procedure and presents an example test procedure for a sample project. Please note that the procedures explained here are only general guidelines. The details of your own test procedures will depend on many factors such as the project definition, design architecture, requirements, company objectives, and component selection.
Project Example
Consider that you are asked to construct a circuit that measures the distance between a light sensor and a light source of your choice. Suppose you choose a Light Dependent Resistor (LDR) as your light sensor. An LDR is basically a photoresistor whose resistance depends on the light intensity incident on the component. Note that the choice of light source is also crucial, since the light intensity incident on the LDR changes with distance.
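To make the example concrete, the short sketch below models the chain from distance to LDR resistance and back. It assumes an inverse-square falloff of illuminance with distance and a power-law LDR characteristic; all constants are hypothetical placeholders rather than datasheet values, so treat it only as a starting point for your own model.

```python
import math

# Hypothetical model constants -- placeholders, not datasheet values.
R10 = 20e3      # assumed LDR resistance at 10 lux, in ohms
GAMMA = 0.7     # assumed LDR slope on a log(R) vs. log(E) plot
E1 = 200.0      # assumed illuminance 1 m from the chosen light source, in lux

def illuminance_at(distance_m):
    """Inverse-square falloff of illuminance with distance (point-source assumption)."""
    return E1 / distance_m ** 2

def ldr_resistance(lux):
    """Power-law LDR characteristic: R = R10 * (E / 10)^(-gamma)."""
    return R10 * (lux / 10.0) ** (-GAMMA)

def estimate_distance(r_measured):
    """Invert the two relations above to recover distance from a resistance reading."""
    lux = 10.0 * (r_measured / R10) ** (-1.0 / GAMMA)
    return math.sqrt(E1 / lux)

if __name__ == "__main__":
    for d in (0.25, 0.5, 1.0):
        r = ldr_resistance(illuminance_at(d))
        print(f"{d:4.2f} m -> {r / 1e3:6.1f} kohm -> recovered {estimate_distance(r):.2f} m")
```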
Stages in Testing
During your design, you will conduct many different types of tests. Below you can find a summary of the types of tests you may perform and important recommendations to follow for each of them. Although each type has specific items to consider, some common points apply to all.
Measurement devices and calibration
You should consider what to measure and how to measure it in every test and its intermediate steps. Choose appropriate measurement equipment and make sure that each piece of equipment is calibrated. If you are not sure, determine a calibration approach, calibrate your equipment, and justify how reliable it is. For the LDR example, you need to determine how to measure the actual distance between the light source and your LDR. Consider ways to justify that your particular measurement equipment gives you a correct distance measurement.
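As one possible way of checking your measurement setup against a trusted instrument, the sketch below fits a straight line between reference and measured distances; the readings are hypothetical placeholders, and the gain/offset interpretation assumes an approximately linear instrument.

```python
import numpy as np

# Reference distances from a trusted instrument (e.g., a laser meter) and the
# readings from the equipment under calibration -- hypothetical example values.
reference_cm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
measured_cm = np.array([10.4, 20.9, 31.1, 41.6, 52.0])

# Fit measured = gain * reference + offset; a well-calibrated instrument has
# gain close to 1 and offset close to 0.
gain, offset = np.polyfit(reference_cm, measured_cm, deg=1)
residuals = measured_cm - (gain * reference_cm + offset)

print(f"gain = {gain:.3f}, offset = {offset:.2f} cm")
print(f"worst-case residual after correction = {np.max(np.abs(residuals)):.2f} cm")

# The fit can then be inverted to correct future readings:
def corrected(reading_cm):
    return (reading_cm - offset) / gain
```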
Ground truth
Obtaining a trusted reference, also known as the ground truth, for success/failure quantification is the starting point of all tests. In our example, the ground truth is the actual distance between your light source and the LDR, which you can measure with a ruler, a laser distance meter, etc. Note that the ground truth is measurable!
Technology and concept verification
The design process starts with concept generation. After you develop an idea for the solution, do some theoretical analysis and/or modeling to verify that it works in a simulation environment. This is extremely useful when you have a relatively complex system to design. For the LDR example, a good modeling approach is a circuit simulation in SPICE. Once the idea is verified theoretically, test it experimentally by physically constructing your system. At this stage, do not pay too much attention to performance; just try to get a working system showing that your idea has the potential to work. Note that this corresponds to a functional test. Later, you can tune or modify your design to meet the performance requirements under actual operating conditions.
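If a full SPICE simulation is not yet set up, even a plain numerical sweep can serve as a first concept check. The sketch below models the LDR in a voltage divider and sweeps the distance, reusing the hypothetical constants of the earlier sketch; the supply voltage and fixed resistor value are also assumptions.

```python
import numpy as np

# Hypothetical model constants, matching the earlier sketch (not datasheet values).
R10, GAMMA, E1 = 20e3, 0.7, 200.0   # ohms at 10 lux, LDR slope, lux at 1 m
VCC, R_FIXED = 5.0, 10e3            # assumed supply voltage and fixed divider resistor

distances = np.linspace(0.1, 1.5, 15)          # metres
lux = E1 / distances ** 2                      # inverse-square falloff
r_ldr = R10 * (lux / 10.0) ** (-GAMMA)         # LDR resistance
v_out = VCC * R_FIXED / (R_FIXED + r_ldr)      # divider output taken across R_FIXED

for d, v in zip(distances, v_out):
    print(f"{d:4.2f} m -> {v:.2f} V")

# A quick functional check: does the output span enough of the measurable range
# to distinguish the distances we care about?
print(f"output swing over the sweep: {v_out.max() - v_out.min():.2f} V")
```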
Component testing
You will use lots of different components from different vendors to implement your ideas. When selecting your components, you should refer to the datasheets and/or application notes as your primary source of information. However, you should not rely blindly on vendor data or external sources until you have developed a certain level of trust in them, because in case of any failure it is your product and your company that are at stake, not your vendor! Also, remember that components do not always deliver nominal performance; they can even fall outside the documented minimum/maximum values because of manufacturing tolerances. Therefore, it is very good practice to measure, characterize, and calibrate your components before you use them in your circuits. You may even have to repeat the testing of some components before every test. A battery voltage test is a good example of this, since battery voltage drops over time.
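A minimal way to document such an incoming check, assuming hypothetical datasheet limits and bench readings for a few LDR units:

```python
# Hypothetical bench measurements of several LDR units at a fixed test
# illuminance, compared against assumed datasheet limits at that illuminance.
DATASHEET_MIN_KOHM = 8.0    # assumed lower limit
DATASHEET_MAX_KOHM = 24.0   # assumed upper limit

measured_kohm = {"LDR-1": 11.8, "LDR-2": 15.2, "LDR-3": 25.1}  # placeholder readings

for name, r in measured_kohm.items():
    ok = DATASHEET_MIN_KOHM <= r <= DATASHEET_MAX_KOHM
    print(f"{name}: {r:5.1f} kohm -> {'within' if ok else 'OUTSIDE'} documented limits")
```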
When selecting your components, it is also important to determine their input and output sensitivities. Try to quantify the input and output resolutions of your components. Your components should be able to distinguish between different input levels that are meaningful for your operating conditions. Always consider the sources of noise and noise levels in the environment. Furthermore, the difference between the output levels for different input levels should be adequate, as your output will most probably be the input of another component or subsystem. Note that this is also closely related to the resolution of your measurement equipment. This will later have an impact on your subsystem and system sensitivity analysis.
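One rough way to judge whether adjacent input levels are distinguishable is to compare the output step between them with the measured output noise, as in the sketch below; the readings are placeholders and the factor of three is only a rule of thumb.

```python
import numpy as np

# Hypothetical repeated output readings (volts) at two adjacent input levels,
# e.g., the divider output at 50 cm and 55 cm -- placeholder values.
out_at_50cm = np.array([2.41, 2.43, 2.40, 2.42, 2.44])
out_at_55cm = np.array([2.31, 2.35, 2.33, 2.30, 2.34])

step = abs(out_at_50cm.mean() - out_at_55cm.mean())
noise = max(out_at_50cm.std(ddof=1), out_at_55cm.std(ddof=1))

print(f"mean output step between adjacent levels: {step * 1e3:.0f} mV")
print(f"worst-case output noise (1 sigma):        {noise * 1e3:.0f} mV")
# Rule of thumb: the step should be several times the noise level (and larger
# than the resolution of the next stage that reads this output).
print("distinguishable" if step > 3 * noise else "not clearly distinguishable")
```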
Software testing
Throughout this course, you will do a lot of coding. When testing your code, make sure that you take various aspects into account. The compatibility of different subblocks is crucial for the overall performance of your program. Try to quantify the execution time and allocated resources for each subblock and minimize the bottlenecks. Raise informative errors wherever you can, since this will not only allow you to isolate problems in your system later but also enhance the overall quality of your product. If you have a hardware/software interface, make sure that the timing and voltage levels of the signals are compatible. Check the operation of your hardware/software interface and its error handling with invalid inputs/data, since your system will surely encounter such cases in real life.
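The sketch below illustrates these points on a hypothetical reading chain: validating data arriving over a hardware/software interface, raising a clear error on invalid input, and timing individual subblocks to spot bottlenecks. The function names and the ADC range are assumptions, not part of any specific platform.

```python
import time

def read_adc_code(raw):
    """Validate data coming over the hardware/software interface before using it.
    Raising a clear error here makes faults easy to isolate later."""
    if not isinstance(raw, int) or not 0 <= raw <= 1023:
        raise ValueError(f"ADC code out of range or malformed: {raw!r}")
    return raw

def code_to_distance_cm(code):
    """Placeholder conversion from ADC code to distance; the real mapping
    should come from your own calibration data."""
    return 100.0 * (1023 - code) / 1023

def timed(label, func, *args):
    """Measure the execution time of a subblock to find bottlenecks."""
    start = time.perf_counter()
    result = func(*args)
    print(f"{label}: {(time.perf_counter() - start) * 1e6:.1f} us")
    return result

if __name__ == "__main__":
    code = timed("read_adc_code", read_adc_code, 512)
    print(f"distance ~ {timed('code_to_distance_cm', code_to_distance_cm, code):.1f} cm")
    try:
        read_adc_code(-5)           # deliberately invalid input
    except ValueError as err:
        print(f"caught as expected: {err}")
```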
Requirement testing
The first step in checking whether your design fulfills the required tasks is to clearly define the functional and performance requirements of the system/subsystem to be tested. Make sure that your performance requirements are quantifiable and measurable. In our example, the functional requirement is accurately measuring the distance between the sensor and the light source. This translates into a performance requirement of, for example, being able to measure distances up to 1 m with a resolution of 5 cm. In other words, as long as the system can make a distance measurement, it satisfies the functional requirement. However, the system above does not satisfy the performance requirements if it must measure distances up to 2 m or with a resolution of 2 cm.
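One way to turn such a requirement into a concrete pass/fail check is sketched below. It interprets “5 cm resolution” as the reported distance staying within half a resolution step (±2.5 cm) of the reference; both this interpretation and the trial data are assumptions you should replace with your own.

```python
# Hypothetical trial data: (reference distance, measured distance) in cm.
trials_cm = [(20, 21.2), (40, 42.6), (60, 58.9), (80, 82.1), (100, 103.8)]

MAX_RANGE_CM = 100.0
ALLOWED_ERROR_CM = 2.5   # one possible reading of "5 cm resolution": +/- half a step

failures = [(ref, meas) for ref, meas in trials_cm
            if ref <= MAX_RANGE_CM and abs(meas - ref) > ALLOWED_ERROR_CM]

print("requirement met" if not failures else f"requirement NOT met, failing trials: {failures}")
```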
Describe the physical environment during the test. Make sure that the physical environment is as close to the actual operating conditions of your system as possible. Some questions to consider in our example are: What is a possible application of your distance measurement device based on light detection? Where will you operate this device? What is the natural light intensity in that environment? Is it always bright or dim? Does the light intensity in the environment change during operation hours? Your answers to these questions will help you determine what kind of light source to use in your setup, pick the location where you run your tests, and model the environmental error sources for your sensitivity analysis to disturbances.
Performance testing
Meeting the bare minimum requirements is a must, but achieving more can put your product a step ahead of its competitors. It is always good to know the capabilities of your design and identify the aspects that set you apart from others. For example, your LDR-based measurement system may be capable of finding the distance with a 5 cm resolution up to 1.5 m instead of 1 m. Such extra performance can bring you a competitive edge in the market, so why not quantify and document it? When conducting performance testing, try to determine whether your design shows the same performance for all input combinations. If not, determine the regions of the input range in which you want to measure the performance. For instance, your system may have a resolution of 5 cm between 1 m and 1.5 m but a resolution of 3 cm between 0.5 m and 1 m.
Sensitivity analysis
Sensitivity analysis can be divided into two parts: the sensitivity of your system/subsystem to input parameters and its sensitivity to environmental conditions.
For input sensitivity analysis, first perform experiments to find out the input-output relationship (linear, logarithmic, exponential, etc.) between each input parameter and the output of the system. Note that, for an accurate analysis, choosing appropriate input steps is very important, as they are directly related to the input-output relationship. Thus, it is also important to have an initial guess about the system/subsystem behavior. For instance, while fewer data points may be adequate to quantify a linear relationship, you will need more data points, and hence smaller steps, for an exponential one. Always keep possible nonlinear system behavior in mind and determine your parameter steps accordingly.
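The sketch below shows one way to compare candidate input-output models on swept data, here a linear fit against an exponential fit done in the log domain; the data points are placeholders. The better-fitting curve also tells you where finer input steps are needed.

```python
import numpy as np

# Hypothetical sweep: input parameter x and measured output y (placeholder data).
x = np.linspace(0.1, 1.0, 10)
y = np.array([4.1, 2.9, 2.2, 1.7, 1.35, 1.1, 0.95, 0.82, 0.74, 0.68])

# Candidate 1: linear model y = a*x + b
a_lin, b_lin = np.polyfit(x, y, 1)
rss_lin = np.sum((y - (a_lin * x + b_lin)) ** 2)

# Candidate 2: exponential model y = A*exp(k*x), fitted as a line in log(y)
k, log_a = np.polyfit(x, np.log(y), 1)
rss_exp = np.sum((y - np.exp(log_a) * np.exp(k * x)) ** 2)

print(f"linear      residual sum of squares: {rss_lin:.3f}")
print(f"exponential residual sum of squares: {rss_exp:.3f}")
# Steep regions of the better-fitting curve call for smaller input steps
# than flat regions.
```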
Most of your experiments will be performed in a noisy environment. Therefore, you will always observe statistical rather than deterministic behavior. You should therefore collect a statistically meaningful amount of data and apply appropriate analysis methods to assess your results. The “Engineering Statistics” presentation by Prof. Emre Özkan is an extremely useful source for learning the basics of statistical evaluation of test results (please see the course schedule for the lecture date).
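As a minimal example of summarizing repeated trials, the sketch below computes the sample mean, standard deviation, and a rough 95% confidence interval under a normal assumption; for small samples a t-based interval (covered in the statistics lecture) is more appropriate, and the error values are placeholders.

```python
import numpy as np

# Hypothetical repeated measurements of the error (cm) at a single test point.
errors_cm = np.array([1.8, 2.4, 1.1, 2.9, 2.2, 1.6, 2.7, 2.0, 1.4, 2.5])

n = errors_cm.size
mean = errors_cm.mean()
std = errors_cm.std(ddof=1)                    # sample standard deviation
ci_half = 1.96 * std / np.sqrt(n)              # rough 95% interval (normal assumption)

print(f"n = {n}, mean error = {mean:.2f} cm, std = {std:.2f} cm")
print(f"~95% confidence interval for the mean: {mean - ci_half:.2f} .. {mean + ci_half:.2f} cm")
```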
Another dimension of the sensitivity analysis is quantifying the robustness of your design under different environmental conditions. To determine the parameters that your system performance is sensitive to, first identify the candidate parameters and the ranges over which you will test them. You should be able to justify these choices based on your design. In our example, the light intensity in the environment (e.g., day or night conditions) will affect how well your LDR senses the light from your light source. Another source of disturbance that may impact your LDR is the ambient temperature. For the environmental light intensity, your design specification may be to operate the device indoors under artificial lighting of around 50 lux and at room temperatures between 15 °C and 30 °C. After you determine the parameters and their feasible ranges, run experiments to see how sensitive your system is to these parameters. Again, pay attention to your test steps.
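A simple way to quantify such an environmental sensitivity is to hold the true distance fixed, vary one environmental parameter at a time, and fit a slope, as in the hypothetical sketch below.

```python
import numpy as np

# Hypothetical readings of the reported distance (cm) for the same true 50 cm
# setup under different ambient light levels (lux) -- placeholder data.
ambient_lux = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
reported_cm = np.array([49.6, 50.1, 50.8, 51.9, 53.2])

# Sensitivity to ambient light: slope of reported distance vs. ambient illuminance.
slope, _ = np.polyfit(ambient_lux, reported_cm, 1)
print(f"~{slope * 10:.2f} cm of drift per 10 lux of ambient light change")

# The same procedure can be repeated for temperature (e.g., over 15-30 degC)
# or any other environmental parameter you identified.
```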
Tasks for Demonstrations
During the design process, you will give several demonstrations to the design coordinators to show how your system/subsystem works. Considering the recommendations above, for each demo you are expected to follow the steps outlined below.
Before the Demo
- Prepare a “test sheet” in which you provide a detailed description of your test scenario with the required justifications. Based on the analyses above, determine the test environment and equipment and how you will perform all the required measurements. This will set your test scenario.
- Run your tests, collect measurement data, and generate a test table with the number of trials, resolution steps for each parameter, expected performance, actual performance, and the error in performance (a sketch of such a table follows this list). Do not forget to include your calibration results.
- Analyze your data and make deductions regarding your tests.
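A possible skeleton for generating the test table, with column names mirroring the list above and empty columns reserved for the demo trials; the rows are placeholders to be replaced by your real measurements.

```python
import csv

# Columns mirror the items listed above; the last two columns stay empty until
# the trials run together with the design coordinator during the demo.
header = ["trial", "reference_cm", "step_cm", "expected_cm", "measured_cm",
          "error_cm", "demo_measured_cm", "demo_error_cm"]
rows = [
    [1, 20, 5, 20, 21.2, 1.2, "", ""],   # placeholder row
    [2, 40, 5, 40, 42.6, 2.6, "", ""],   # placeholder row
]

with open("test_table.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)
print("wrote test_table.csv")
```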
During the Demo
- You are expected to have empty columns in your test sheet for trials to be conducted with your design coordinator during the demo. The values you record during the demo will provide additional error data that augment your own tests.
After the Demo
- Analyze system/subsystem performance and sensitivity to parameters based on the measured errors. Consider different means of visualization (plots, diagrams, etc.) to document and present your results.