In many texts the error in a measurement is stated as the difference between the true value and the value at the centre of the uncertainty interval (i.e. the nominal value). Precision is defined in the same way as in this text (i.e. dependent on the width of the uncertainty interval). A common analogy used to explain precision and accuracy is the spread of shots produced by someone shooting at a target. The precision of the shots depends on how closely grouped they are, and the accuracy is said to be how far the centre of the spread lies from the bullseye. (This is comparable to the precision of a measurement being determined by the relative width of the uncertainty interval, and the accuracy by the difference between the nominal value and the true value.)
There are several problems with this approach, which the following examples illustrate.
Marksman A produces a grouping 5 cm in radius whose centre lies 1 cm from the bullseye. None of the individual bullet holes is closer than 3.5 cm to the bullseye.
Marksman B produces a grouping of radius 1 cm whose centre is 1.2 cm from the bullseye.
Using the criteria above we would be forced to conclude that Marksman A has produced the more accurate grouping, despite the fact that every bullet he has fired has landed further from the bullseye than all of his competitor's shots!
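The paradox above can be checked numerically. The following sketch (in Python, with helper names of my own invention) compares the centre-of-spread "accuracy" criterion against the worst-case distance any shot can land from the bullseye:

```python
def centre_offset(offset, radius):
    # "accuracy" under the centre-of-spread criterion: distance from the
    # centre of the grouping to the bullseye (the radius is ignored)
    return offset

def worst_case_distance(offset, radius):
    # farthest any shot in the grouping can possibly land from the bullseye
    return offset + radius

# (offset of grouping centre from bullseye, grouping radius), both in cm
marksman_a = (1.0, 5.0)   # wide grouping, centre slightly closer
marksman_b = (1.2, 1.0)   # tight grouping, centre slightly farther

# The centre criterion ranks A as more accurate ...
assert centre_offset(*marksman_a) < centre_offset(*marksman_b)
# ... even though A's shots can land far beyond any of B's.
assert worst_case_distance(*marksman_a) > worst_case_distance(*marksman_b)
```

The ranking flips because the centre criterion discards the spread entirely, while the worst-case distance combines both offset and spread.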
This is equivalent to regarding a measurement such as 10 +/- 0.5 cm as more accurate than 9.9 +/- 0.2 cm for a true value of 10.1 cm!
This is equivalent to regarding a measurement such as 10 +/- 0.5 cm as just as accurate as 10 +/- 0.2 cm for a true value of 10.1 cm!
This is equivalent to regarding a measurement such as 10 +/- 0.5 cm, for a true value of 10.0 cm, as being completely accurate and having no error!
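The first of these comparisons can be made concrete with a short numerical check. This sketch (Python, with hypothetical helper names) contrasts the centre-based "error" with the worst-case error an uncertainty interval actually admits:

```python
def contains(nominal, half_width, true_value):
    # does the stated uncertainty interval include the true value?
    return abs(nominal - true_value) <= half_width

def worst_case_error(nominal, half_width, true_value):
    # largest discrepancy between the true value and any value
    # the measurement declares possible
    return abs(nominal - true_value) + half_width

true_value = 10.1  # cm

# The centre-based "error" prefers 10 +/- 0.5 cm (offset 0.1 cm)
# over 9.9 +/- 0.2 cm (offset 0.2 cm) ...
assert abs(10.0 - true_value) < abs(9.9 - true_value)

# ... yet both intervals contain the true value, and the wider one
# admits a much larger worst-case error (0.6 cm versus 0.4 cm).
assert contains(10.0, 0.5, true_value) and contains(9.9, 0.2, true_value)
assert worst_case_error(10.0, 0.5, true_value) > worst_case_error(9.9, 0.2, true_value)
```

Judged by the worst case, the narrower measurement is plainly the better one, even though its nominal value lies further from the truth.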