Random errors cause measured values to differ randomly from the true value; that is, the magnitude of the difference varies randomly above or below the true value. (One cause of random errors is an inconsistent measurement technique.) If we can assume that the true value itself is constant*, then repeated measurements can reveal the presence of random errors through the random variation in the values obtained.

(* it is of course possible that the property being measured is itself fluctuating)

Many texts state that simply carrying out repeated measurements and calculating their average will reduce or eliminate random errors and so improve accuracy. The reasoning is that if only random errors are present, then statistically the repeated measurement values will be distributed evenly around the true value, so the average of these measurements should lie very close to the true value.
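This averaging argument can be illustrated with a short simulation. The true value, the size of the random errors, and the sample sizes below are all invented purely for illustration; this is a sketch, not a model of any particular experiment:

```python
import random
import statistics

random.seed(42)  # reproducible demonstration

TRUE_VALUE = 9.81   # hypothetical true value (e.g. g in m/s^2)
NOISE_SD = 0.05     # assumed spread of the random errors

def measure():
    """One simulated measurement: the true value plus a random error."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

for n in (5, 50, 5000):
    mean = statistics.mean(measure() for _ in range(n))
    print(f"n = {n:5d}: mean = {mean:.4f}, offset from true = {mean - TRUE_VALUE:+.4f}")
```

Running this typically shows the average drifting closer to the true value as more measurements are included, which is exactly what the argument above relies on.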

**There are two problems with this argument.**

The first problem is that the reliability of statistical statements like this depends very much on the sample size involved (i.e. the number of measurements taken in this example). To appreciate this, consider the statistical prediction that when a coin is tossed randomly you should obtain equal numbers of heads and tails. In practice, if you toss a coin, say, six times, the actual outcome may be very different from this prediction (in fact outcomes ranging from 100% heads and 0% tails to the reverse are entirely possible).

If, however, you were to toss the coin a million times, you would find that the outcome is always extremely close to the predicted 50% heads and 50% tails.
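The coin-toss comparison can be checked directly by simulation. The particular sample sizes and trial counts below are arbitrary choices for illustration:

```python
import random

random.seed(1)  # reproducible demonstration

def heads_fraction(n_tosses):
    """Fraction of heads obtained in n random coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# A handful of 6-toss trials: the fraction of heads swings wildly.
print([heads_fraction(6) for _ in range(5)])

# One million tosses: the fraction settles very close to 0.5.
print(round(heads_fraction(1_000_000), 4))
```

With six tosses the fraction of heads routinely lands far from 0.5; with a million tosses it reliably lands within a fraction of a percent of it.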

So, by comparison, if we take only six measurements, say, it is entirely possible that all six could lie above the true value (and therefore that their average would still lie significantly above it). The problem is knowing how many measurements must be taken. On introductory courses it is not uncommon for students to be "allowed" to assume that as few as 10 measurements will be adequate, but this is merely to introduce them to the principle of averaging to eliminate random errors. In practical scientific work there are statistical techniques that can be used to analyse the measurements and determine, with more confidence, when an adequate number of measurements has been taken.
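How likely is the "all six above the true value" scenario? Assuming the random errors are symmetric, each measurement lands above the true value with probability 1/2, so all six do with probability (1/2)^6. A quick exact calculation and a simulation (with invented parameters) agree:

```python
import random

random.seed(7)  # reproducible demonstration

# Exact probability that all 6 measurements land above the true value,
# assuming each is above with probability 1/2 (symmetric random errors).
exact = 0.5 ** 6
print(f"exact: {exact:.4f}")   # 1/64 ≈ 0.0156

# Simulation: draw many 6-measurement samples and count how often all six
# random errors happen to be positive.
trials = 100_000
count = sum(all(random.gauss(0, 1) > 0 for _ in range(6)) for _ in range(trials))
print(f"simulated: {count / trials:.4f}")
```

About 1.6% of six-measurement samples would sit entirely above the true value, so with small samples a misleading average is a genuine (if modest) risk, not a theoretical curiosity.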

The second problem is that finding the mean value still gives us only the nominal value for the measurement; we also need an uncertainty interval to complete it. Again, on some introductory courses students are "allowed" to simply use the same uncertainty interval as the individual measurements. This is not actually correct and is merely tolerated in order to make the introduction to the subject a little less complex.
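One common way to attach an uncertainty interval to the mean (not necessarily the technique described later in this document) is the standard error of the mean: the sample standard deviation divided by the square root of the number of measurements. The six measurement values below are invented for illustration:

```python
import math
import statistics

# Six hypothetical repeated measurements (units arbitrary).
measurements = [12.3, 12.7, 12.1, 12.6, 12.4, 12.5]

n = len(measurements)
mean = statistics.mean(measurements)   # nominal value
sd = statistics.stdev(measurements)    # sample standard deviation (n-1 denominator)
sem = sd / math.sqrt(n)                # standard error of the mean

print(f"result: {mean:.2f} ± {sem:.2f}")  # → result: 12.43 ± 0.09
```

Note that the resulting interval (±0.09 here) is narrower than the spread of the individual measurements (standard deviation ≈ 0.22), which is precisely why reusing the single-measurement uncertainty for the mean is not correct.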

In a subsequent section a statistical technique is briefly described that makes it possible to determine when an adequate number of measurements has been made and how to determine the uncertainty interval to accompany the nominal value.