To illustrate the effect of precision on accuracy, consider measurements made of a known true value, and imagine being able to adjust the width of the uncertainty interval while its midpoint remains the same distance from the true value.

If the width of the uncertainty interval is increased, the furthest limit of the interval is always pushed further away from the true value. This increases the error, and so accuracy is reduced. Therefore reducing precision (by increasing the uncertainty) reduces accuracy.

If the width of the uncertainty interval is decreased, the furthest limit of the interval is always pulled closer to the true value. This reduces the error, and so accuracy is increased. Therefore improving precision (by reducing the uncertainty) improves accuracy.
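This behaviour can be checked numerically. The sketch below is illustrative (the function name and the numbers are mine, not from the text): it takes the error to be the distance from the true value to the furthest limit of the uncertainty interval.

```python
def worst_case_error(true_value, midpoint, half_width):
    """Distance from the true value to the furthest limit of the
    uncertainty interval [midpoint - half_width, midpoint + half_width]."""
    lower = midpoint - half_width
    upper = midpoint + half_width
    return max(abs(upper - true_value), abs(lower - true_value))

# Midpoint held fixed at 10.2 while the true value is 10.0.
# Widening the interval increases the error; narrowing it reduces the error.
print(worst_case_error(10.0, 10.2, 0.5))
print(worst_case_error(10.0, 10.2, 0.8))
```

With the midpoint held fixed, the wider interval (half-width 0.8) gives a larger worst-case error than the narrower one (half-width 0.5), matching the argument above.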

**It is important to point out, though, that although increasing precision improves accuracy, this improvement is not always of any practical benefit. This is explained in the following paragraphs.**

The minimum possible error in a measurement occurs when the true value lies exactly at the centre of the uncertainty interval. The error is then equal to half the uncertainty interval. In this situation any improvement in precision produces a practical improvement in accuracy.
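A quick check of this best case, with illustrative numbers of my own: when the centre of the interval coincides with the true value, both limits are equally far from it, so the worst-case error equals half the interval width.

```python
true_value = 10.0
half_width = 0.4
midpoint = true_value  # perfect calibration: centre sits on the true value

# Both limits are equally far from the true value,
# so the worst-case error equals the half-width of the interval.
error = max(abs((midpoint + half_width) - true_value),
            abs((midpoint - half_width) - true_value))
print(error)
```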

However, in practice the measuring instrument will never be perfectly calibrated, so the true value will be offset from the centre of the uncertainty interval.

Now if you reduce the uncertainty interval, the limits are pulled closer to the centre and, as before, the limit furthest from the true value is pulled closer to it. So, as before, the error is reduced and accuracy is improved.

However, from the diagram you can see that the centre of the uncertainty interval is always closer to the true value than the furthest limit is. This means that the error in the measurement is always greater than the offset between the centre of the uncertainty interval and the true value. So if this offset alone would be considered too large an error, there is little point in improving the precision.
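The floor on the error can be seen numerically. In this sketch (the offset value and half-widths are illustrative, not from the text), the worst-case error is the offset plus the half-width, so shrinking the interval reduces the error but can never push it below the calibration offset itself.

```python
# Calibration offset between the interval centre and the true value
# (illustrative value).
offset = 0.3

# Worst-case error = distance to the furthest limit = offset + half_width.
# As the half-width shrinks toward zero, the error falls toward the offset
# but never below it.
errors = [offset + half_width for half_width in (0.5, 0.2, 0.05, 0.0)]
print(errors)
```

This is why extreme precision buys nothing once the error is dominated by the calibration offset.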

To illustrate this point, consider the futility of taking extremely precise measurements with an instrument that has very large calibration errors.

To further illustrate the relevance of precision and accuracy, the following diagrams show examples of different measurements taken of the same true value.

**Example 1**

**Example 2**

**Example 3**