ERRORS IN MEASUREMENTS
While performing physical measurements, it is important to note that the measurements obtained are not completely accurate, as they are associated with uncertainty. Thus, in order to analyse the measurement data, we need to understand the nature of errors associated with the measurements.
Therefore, it is imperative to investigate the causes or sources of these errors in measurement systems and find out ways for their subsequent elimination. Two broad categories of errors in measurement have been identified: systematic and random errors.
Systematic or Controllable Errors
A systematic error deviates from the true value of the measurement by a fixed amount and in a consistent direction. Such errors are controllable in both magnitude and direction, and can be assessed and minimized if efforts are made to analyse them. To assess them, it is important to know all their sources; if their algebraic sum is significant with respect to the manufacturing tolerance, a suitable allowance should be applied to the measured size of the workpiece. Examples of such errors include the measurement of length using an imperfectly graduated metre scale and the measurement of current with an inaccurately calibrated ammeter. The smaller the systematic errors, the more accurate the measurement. Systematic errors are difficult to identify, and statistical analysis cannot reveal them; nor can they be eliminated by taking a large number of readings and averaging them out, because they are reproducible inaccuracies that are consistently in the same direction.
Minimization of systematic errors increases the accuracy of measurement. The following are the reasons for their occurrence:
1. Calibration errors
2. Ambient conditions
3. Deformation of workpiece
4. Avoidable errors
Calibration Errors
A small amount of variation from the nominal value will be present in actual length standards, such as slip gauges and engraved scales. Inertia of the instrument and its hysteresis effects prevent the instrument from responding with true fidelity. Hysteresis is defined as the difference between the indications of a measuring instrument when the same value of the quantity is measured in the ascending and descending orders. These variations become significant when higher-order accuracy is required, and calibration curves are used to minimize them. Inadequate amplification of the instrument also affects the accuracy.
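As a minimal sketch of how a calibration curve can be applied in practice, the following Python snippet fits a first-order correction to instrument indications against slip-gauge reference values; the data and the linear model are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical calibration data: reference values realized by slip
# gauges and the corresponding indications of the instrument (mm).
reference = np.array([10.000, 20.000, 30.000, 40.000, 50.000])
indicated = np.array([10.004, 20.006, 30.010, 40.013, 50.016])

# Fit a first-order calibration curve: reference ~ a * indicated + b.
a, b = np.polyfit(indicated, reference, 1)

def calibrate(reading):
    """Correct a raw instrument reading using the fitted curve."""
    return a * reading + b

print(f"corrected value for a 25.008 mm reading: {calibrate(25.008):.4f} mm")
```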
Ambient Conditions
It is essential to maintain ambient conditions at the internationally accepted standard temperature (20 °C) and pressure (760 mm of Hg). A difference of as little as 10 mm of Hg can cause errors in the measured size of the component. The most significant ambient condition affecting the accuracy of measurement is temperature. An increase in temperature of 1 °C increases the length of C25 steel by 0.3 µm, which is substantial when precision measurement is required. To obtain error-free results, a correction factor for temperature has to be applied; for example, in measurements using strain gauges, temperature compensation is provided to obtain accurate results. Relative humidity, thermal gradients, vibrations, and the CO2 content of the air affect the refractive index of the atmosphere. Thermal expansion occurs due to heat radiation from different sources such as lights, sunlight, and the body temperature of operators.
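A minimal sketch of such a temperature correction is given below. The coefficient of linear expansion used here is an assumed, typical value for C25 steel; the exact figure should be taken from material data.

```python
# A minimal sketch of temperature correction for length measurement.
# The expansion coefficient below is an assumed, typical value for
# C25 steel; take the exact figure from material data when available.
ALPHA_C25 = 11.5e-6      # per deg C, assumed coefficient of linear expansion
STANDARD_TEMP_C = 20.0   # internationally accepted standard temperature

def length_at_standard_temp(measured_mm, temp_c):
    """Refer a length measured at temp_c back to 20 deg C."""
    delta_t = temp_c - STANDARD_TEMP_C
    return measured_mm / (1.0 + ALPHA_C25 * delta_t)

# With this coefficient, a 25 mm workpiece grows by roughly 0.3 um
# per deg C, so a reading taken at 21 deg C is corrected downwards.
print(length_at_standard_temp(25.0, 21.0))  # ~24.999712 mm
```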
Deformation of Workpiece
Any elastic body, when subjected to a load, undergoes elastic deformation. The stylus pressure applied during measurement therefore affects the accuracy of measurement. Under a definite stylus pressure, elastic deformation and deflection of the workpiece may occur, as shown in Fig. 1.4. The magnitude of deformation depends on the applied load, the area of contact, and the mechanical properties of the workpiece material. Therefore, during comparative measurement, one has to ensure that the applied measuring loads are the same.
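The following simplified sketch estimates this deformation using the uniaxial relation δ = FL/(AE). Real stylus contact is localized and follows Hertzian contact theory, so the model and the numbers here are illustrative assumptions only.

```python
# Rough estimate of workpiece compression under a measuring load,
# using the uniaxial relation delta = F*L/(A*E). Real stylus contact
# is localized (Hertzian), so this is an illustrative model only.
def axial_compression_um(force_n, length_mm, area_mm2, e_gpa):
    """Return the axial compression, in micrometres, of a prismatic part."""
    e_mpa = e_gpa * 1000.0                     # GPa -> N/mm^2 (MPa)
    delta_mm = force_n * length_mm / (area_mm2 * e_mpa)
    return delta_mm * 1000.0                   # mm -> um

# Assumed values: 1 N load on a 25 mm steel pin of 10 mm^2 cross
# section, E ~ 200 GPa; compression is roughly 0.0125 um.
print(axial_compression_um(1.0, 25.0, 10.0, 200.0))
```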
Avoidable Errors
These include the following:
Datum errors
Datum error is the difference between the true value of the quantity being measured and the indicated value, with due regard to the sign of each. When the instrument is used under specified conditions and a physical quantity is presented to it for the purpose of verifying the setting, the indication error is referred to as the datum error.
Reading errors
These errors occur due to mistakes committed by the observer while noting down the values of the quantity being measured. Digital readout devices, which are increasingly being used for display purposes, eliminate or minimize most of the reading errors usually made by the observer.
Errors due to parallax effect
Parallax errors occur when the line of sight is not perpendicular to the instrument scale or the observer reads the instrument from an angle. Instruments having a scale and a pointer are normally associated with this type of error. The presence of a mirror behind the pointer or indicator virtually eliminates this type of error.
Effect of misalignment
These errors occur due to the inherent inaccuracies present in the measuring instruments. They may also be due to improper use, handling, or selection of the instrument. Wear on the micrometer anvils, or anvil faces that are not perpendicular to the axis, results in misalignment, leading to inaccurate measurements. If the alignment is not proper, sine and cosine errors may also contribute to the inaccuracies of the measurement.
Zero errors
When no measurement is being carried out, the reading on the scale of the instrument should be zero. A zero error exists when the instrument indicates a non-zero value even though the quantity presented to it is actually zero. For example, a voltmeter might read 1 V even when it is not under any electromagnetic influence; it then indicates 1 V more than the true value for all subsequent measurements. This error is constant for all values measured using the same instrument. A constant error affects all measurements in a measuring process by the same amount, or by an amount proportional to the magnitude of the quantity being measured. For example, in a planimeter, which is used to measure irregular areas, a constant error might occur because of an error in the scale used in the construction of the standard or, sometimes, when an incorrect conversion factor is used between the units embodied by the scale and those in which the results of the measurements are expressed.
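A minimal sketch of how such a constant zero error can be compensated is shown below, using the voltmeter example above; the readings are hypothetical.

```python
# Compensating a constant zero error: a minimal sketch based on the
# voltmeter example above (readings are hypothetical, in volts).
ZERO_ERROR = 1.0  # indication observed when no quantity is applied

def correct_reading(indicated):
    """Subtract the constant zero error to recover the true value."""
    return indicated - ZERO_ERROR

raw_readings = [5.2, 7.8, 12.1]
true_values = [correct_reading(r) for r in raw_readings]
print(true_values)  # approximately [4.2, 6.8, 11.1]
```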
Therefore, in order to find out and eliminate any systematic error, it is required to calibrate the measuring instrument before conducting an experiment. Calibration reveals the presence of any systematic error in the measuring instrument.
Random Errors
Random errors provide a measure of random deviations when measurements of a physical quantity are carried out repeatedly. When a series of repeated measurements is made on a component under similar conditions, the results of the measurements vary. Specific causes for these variations cannot be determined, since they are unpredictable and uncontrollable by the experimenter and are random in nature. They are of variable magnitude and may be either positive or negative. When such repeated measurements are plotted, they follow a normal or Gaussian distribution. Random errors can therefore be statistically evaluated, and their mean value and standard deviation can be determined. These errors scatter around a mean value. If $n$ measurements $v_1, v_2, v_3, \ldots, v_n$ are made using an instrument, the arithmetic mean is given as
$$\bar{v} = \frac{v_1 + v_2 + v_3 + \cdots + v_n}{n}$$
and the standard deviation $\sigma$ is given by the following equation:
$$\sigma = \pm\sqrt{\frac{\sum (v - \bar{v})^2}{n}}$$
Standard deviation is a measure of the dispersion of a set of readings. It can be determined by taking the root mean square of the deviations of the individual readings from their arithmetic mean, as given by the following equation:
$$\sigma = \pm\sqrt{\frac{(v_1 - \bar{v})^2 + (v_2 - \bar{v})^2 + \cdots + (v_n - \bar{v})^2}{n}}$$
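As an illustration, the following Python snippet computes the arithmetic mean and standard deviation of a set of hypothetical repeated readings, both with the standard-library statistics module and directly from the equation above.

```python
import math
import statistics

# Hypothetical repeated readings of the same dimension (in mm).
readings = [25.002, 24.998, 25.001, 25.000, 24.999, 25.003, 24.997]

v_bar = statistics.fmean(readings)    # arithmetic mean, v-bar
sigma = statistics.pstdev(readings)   # population standard deviation

# The same sigma computed directly from the equation above.
n = len(readings)
sigma_direct = math.sqrt(sum((v - v_bar) ** 2 for v in readings) / n)

print(f"mean = {v_bar:.4f} mm, sigma = {sigma:.5f} mm")
assert abs(sigma - sigma_direct) < 1e-12
```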
Random errors can be minimized by calculating the average of a large number of observations. Since precision is closely associated with the repeatability of the measuring process, a precise instrument will have very few random errors and better repeatability. Hence, random errors limit the precision of the instrument. The following are the likely sources of random errors:
1. Presence of transient fluctuations in friction in the measuring instrument.
2. Play in the linkages of the measuring instruments.
3. Error in operator’s judgement in reading the fractional part of engraved scale divisions.
4. Operator’s inability to note the readings because of fluctuations during measurement.
5. Positional errors associated with the measured object and standard, arising due to small variations in setting.
Fig. 1.5 clearly depicts the relationship between systematic and random errors with respect to the measured value. The accuracy of a measuring system is affected by both systematic and random errors. Table 1.1 gives the differences between systematic and random errors.
Fig. 1.5 Relationship between systematic and random errors with measured value