Calibration
In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy, a device generating the quantity to be measured (such as a voltage or a sound tone), or a physical artifact, such as a metre rule.
The outcome of the comparison can result in one of the following:
- no significant error being noted on the device under test
- a significant error being noted but no adjustment made
- an adjustment made to correct the error to an acceptable level
The calibration standard is normally traceable to a national or international standard held by a metrology body.
BIPM Definition
The formal definition of calibration by the International Bureau of Weights and Measures (BIPM) is the following: "Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication." This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard.
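As a loose illustration of the two-step idea (not part of the BIPM text), the following sketch first establishes a relation from hypothetical standard/indication pairs and then uses that relation to turn a new indication into a measurement result; the values and the linear model are assumptions made purely for illustration.

```python
# Illustrative sketch of the two-step notion of calibration (all values hypothetical).
# Step 1: establish a relation between standard values and instrument indications.
# Step 2: use that relation to obtain a measurement result from a new indication.

standard_values = [0.0, 5.0, 10.0]      # quantity values provided by the standard
indications = [0.12, 5.08, 10.15]       # corresponding readings of the device under test

# Step 1: fit a simple linear relation indication = gain * value + offset
# (least squares over the calibration points; a linear model is an assumption here).
n = len(standard_values)
mean_x = sum(standard_values) / n
mean_y = sum(indications) / n
gain = sum((x - mean_x) * (y - mean_y) for x, y in zip(standard_values, indications)) / \
       sum((x - mean_x) ** 2 for x in standard_values)
offset = mean_y - gain * mean_x

# Step 2: invert the relation to obtain a measurement result from an indication.
def measurement_result(indication):
    return (indication - offset) / gain

print(measurement_result(7.3))  # corrected result for a reading of 7.3
```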
Modern calibration processes
The increasing need for known accuracy and uncertainty, and the need for internationally consistent and comparable standards, has led to the establishment of national laboratories. In many countries a National Metrology Institute (NMI) will exist, which maintains primary standards of measurement that are used to provide traceability to customers' instruments by calibration. The NMI supports the metrological infrastructure in that country by establishing an unbroken chain from the top level of standards to an instrument used for measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany and many others. Since the Mutual Recognition Agreement was signed, it is now straightforward to take traceability from any participating NMI; it is no longer necessary for a company to obtain traceability from the NMI of the country in which it is situated, such as the National Physical Laboratory in the UK.
Quality of calibration
To improve the quality of the calibration and have the results accepted by outside organizations, it is desirable for the calibration and subsequent measurements to be "traceable" to the internationally defined measurement units. Establishing traceability is accomplished by a formal comparison to a standard which is directly or indirectly related to national standards, international standards, or certified reference materials. This may be done by national standards laboratories operated by the government or by private firms offering metrology services. Quality management systems call for an effective metrology system which includes formal, periodic, and documented calibration of all measuring instruments. The ISO 9000 and ISO 17025 standards require that these traceable actions are to a high level and set out how they can be quantified.
To communicate the quality of a calibration the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis.
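A minimal sketch of how such a statement is commonly assembled is shown below, using the usual root-sum-square combination of uncertainty contributions and a coverage factor; the individual contribution values are assumed for illustration only.

```python
import math

# Hypothetical standard uncertainties (in the same unit as the measurand).
u_standard = 0.010       # uncertainty of the reference standard
u_repeatability = 0.015  # repeatability of the device under test
u_resolution = 0.005     # contribution from finite resolution

# Combined standard uncertainty: root sum of squares of the (uncorrelated) contributions.
u_combined = math.sqrt(u_standard**2 + u_repeatability**2 + u_resolution**2)

# Expanded uncertainty: a coverage factor k = 2 corresponds to roughly 95 % confidence
# for an approximately normal distribution.
k = 2
U = k * u_combined

print(f"Result: x +/- {U:.3f} (k = {k}, approx. 95 % confidence)")
```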
Sometimes a DFS (departure from specification) is required to operate machinery in a degraded state. Whenever this happens, it must be in writing and authorized by a manager with the technical assistance of a calibration technician.
Measuring devices and instruments are categorized according to the physical quantities they are designed to measure. These categories vary internationally, e.g., NIST 150-2G in the U.S. and NABL-141 in India. Together, these standards cover instruments that measure various physical quantities such as electromagnetic radiation, sound, time and frequency, ionizing radiation, light, mechanical quantities, and thermodynamic or thermal properties. The standard instrument for each test device varies accordingly, e.g., a dead weight tester for pressure gauge calibration and a dry block temperature tester for temperature gauge calibration.
Instrument calibration prompts
Calibration may be required for the following reasons:
- a new instrument
- after an instrument has been repaired or modified
- moving from one location to another location
- when a specified time period has elapsed
- when a specified usage has elapsed
- before and/or after a critical measurement
- after an event, for example:
  - after an instrument has been exposed to a shock, vibration, or physical damage, which might potentially have compromised the integrity of its calibration
  - sudden changes in weather
- whenever observations appear questionable or instrument indications do not match the output of surrogate instruments
- as specified by a requirement, e.g., customer specification, instrument manufacturer recommendation.
Basic calibration process
Purpose and scope
The calibration process begins with the design of the measuring instrument that needs to be calibrated. The design has to be able to "hold a calibration" through its calibration interval. In other words, the design has to be capable of measurements that are "within engineering tolerance" when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the actual measuring instruments performing as expected. Fundamentally, the purpose of calibration is to maintain the quality of measurement and to ensure the proper working of a particular instrument.
Intervals
The exact mechanism for assigning tolerance values varies by country and by industry type. The manufacturer of the measuring equipment generally assigns the measurement tolerance, suggests a calibration interval (CI) and specifies the environmental range of use and storage. The using organization generally assigns the actual calibration interval, which depends on this specific measuring equipment's likely usage level. The assignment of calibration intervals can be a formal process based on the results of previous calibrations. The standards themselves are not clear on recommended CI values.
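One simple, illustrative way such a formal process might work is to react to the as-found condition recorded at previous calibrations, shortening the interval after an out-of-tolerance result and cautiously extending it after a run of in-tolerance results. The rule below, including its factors and limits, is a hypothetical sketch rather than a prescription from any standard.

```python
# Hypothetical "reaction" rule for adjusting a calibration interval (CI).
# The halving/extension factors and the 3-36 month limits are assumptions.

def next_interval_months(current_months, in_tolerance_history):
    """Suggest the next CI based on recent as-found results (newest last)."""
    if not in_tolerance_history[-1]:                 # last calibration out of tolerance
        return max(3, int(current_months * 0.5))     # shorten, but not below 3 months
    if len(in_tolerance_history) >= 3 and all(in_tolerance_history[-3:]):
        return min(36, int(current_months * 1.25))   # extend cautiously, cap at 36 months
    return current_months                            # otherwise keep the interval

print(next_interval_months(12, [True, True, True]))  # -> 15
print(next_interval_months(12, [True, False]))       # -> 6
```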
Standards required and accuracy
The next step is defining the calibration process. The selection of a standard or standards is the most visible part of the calibration process. Ideally, the standard has less than 1/4 of the measurement uncertainty of the device being calibrated. When this goal is met, the accumulated measurement uncertainty of all of the standards involved is considered to be insignificant when the final measurement is also made with the 4:1 ratio. This ratio was probably first formalized in Handbook 52, which accompanied MIL-STD-45662A, an early US Department of Defense metrology program specification. It was 10:1 from its inception in the 1950s until the 1970s, when advancing technology made 10:1 impossible for most electronic measurements. Maintaining a 4:1 accuracy ratio with modern equipment is difficult. The test equipment being calibrated can be just as accurate as the working standard. If the accuracy ratio is less than 4:1, then the calibration tolerance can be reduced to compensate. When 1:1 is reached, only an exact match between the standard and the device being calibrated produces a completely correct calibration. Another common method for dealing with this capability mismatch is to reduce the accuracy of the device being calibrated.
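As a small numeric illustration of the ratio check described above (the tolerance figures below are assumed values, not from the text):

```python
# Accuracy ratio between the device under test (DUT) and the working standard.
# Both figures are expressed as a percentage of reading and are assumed for illustration.

dut_tolerance_pct = 4.0       # tolerance assigned to the device being calibrated
standard_accuracy_pct = 1.0   # accuracy of the working standard

ratio = dut_tolerance_pct / standard_accuracy_pct
print(f"Accuracy ratio {ratio:.1f}:1 -> {'meets' if ratio >= 4 else 'below'} the 4:1 goal")
```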
For example, a gauge with 3% manufacturer-stated accuracy can be changed to 4% so that a 1% accuracy standard can be used at 4:1. If the gauge is used in an application requiring 16% accuracy, having the gauge accuracy reduced to 4% will not affect the accuracy of the final measurements. This is called a limited calibration. But if the final measurement requires 10% accuracy, then the 3% gauge can never be better than 3.3:1. Then perhaps adjusting the calibration tolerance for the gauge would be a better solution. If the calibration is performed at 100 units, the 1% standard would actually be anywhere between 99 and 101 units. The acceptable values of calibrations where the test equipment is at the 4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103 units would remove the potential contribution of all of the standards and preserve a 3.3:1 ratio. Continuing, a further change to the acceptable range to 98 to 102 restores more than a 4:1 final ratio.
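The arithmetic of this example can be sketched as follows, with the calibration point, tolerances and standard accuracy taken from the text; pulling the acceptance limits in by the standard's possible error is one way of removing its contribution (often called guard banding).

```python
# Numbers from the example: calibration point 100 units, gauge tolerance widened to 4 %,
# working standard accurate to 1 %, final measurement requirement 10 %.
cal_point = 100.0
gauge_tolerance = 0.04 * cal_point      # +/-4 units -> acceptance range 96 to 104
standard_error = 0.01 * cal_point       # the 1 % standard may read 99 to 101
final_requirement = 0.10 * cal_point    # the application needs 10 % accuracy

# Guard band: pull the acceptance limits in by the standard's possible error,
# so the standard's contribution cannot push a bad gauge into acceptance.
effective_tolerance = gauge_tolerance - standard_error
low = cal_point - effective_tolerance    # 97
high = cal_point + effective_tolerance   # 103

print(f"Acceptance range: {low:.0f} to {high:.0f} units")
print(f"Final ratio: {final_requirement / effective_tolerance:.1f}:1")  # 10 / 3 = 3.3:1
```

Tightening the range further to 98 to 102 units leaves an effective tolerance of 2 units, giving a 5:1 final ratio, which is the "more than 4:1" restoration mentioned above.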
This is a simplified example, and the mathematics of the example can be challenged. It is important that whatever thinking guided this process in an actual calibration be recorded and accessible. Informality contributes to tolerance stacks and other difficult-to-diagnose post-calibration problems.
Also in the example above, ideally the calibration value of 100 units would be the best point in the gauge's range to perform a single-point calibration. It may be the manufacturer's recommendation or it may be the way similar devices are already being calibrated. Multiple-point calibrations are also used. Depending on the device, a zero unit state, the absence of the phenomenon being measured, may also be a calibration point. Or zero may be resettable by the user; there are several variations possible. Again, the points to use during calibration should be recorded.
Specific connection techniques between the standard and the device being calibrated may also influence the calibration. For example, in electronic calibrations involving analog phenomena, the impedance of the cable connections can directly influence the result.