Unlike a continuous gauge, which can be accurate on average yet imprecise, an attribute measurement system that lacks precision necessarily has accuracy problems as well. If the person coding errors is unclear or undecided about how to code a given error, multiple errors of the same type get assigned to different codes, making the database inaccurate. For an attribute measurement system, then, imprecision is an important contributor to inaccuracy.

The precision of a measurement system is analyzed by dividing it into two essential components: repeatability (the ability of a single appraiser to assign the same value or attribute repeatedly under identical conditions) and reproducibility (the ability of multiple appraisers to agree with one another across a range of cases). If repeatability is the main problem, appraisers are confused or undecided about certain criteria. If reproducibility is the problem, appraisers hold firm opinions about certain conditions, but those opinions differ. If the problems appear across many appraisers, they are systemic or procedural; if they involve only a few appraisers, they may simply call for some individual attention. In either case, training or job aids can be tailored either to specific individuals or to all appraisers, depending on how many of them are assigning attributes imprecisely. In addition, if the overall accuracy, repeatability, and reproducibility are known, bias can be detected even in situations where decisions are systematically wrong. Because implementing an attribute agreement analysis can be time-consuming, expensive, and generally uncomfortable for everyone involved (the analysis is simple compared with its execution), it is best to take a moment to really understand what needs to be done and why.
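As a rough illustration of the two precision components described above, the sketch below computes simple percent-agreement statistics from a small, entirely hypothetical attribute study: three appraisers each code the same five items twice. (This is a minimal sketch, not the article's method; real studies typically also report kappa statistics and confidence intervals.)

```python
# Hypothetical study data: each appraiser codes the same 5 items on 2 trials.
trials = {
    # appraiser -> [codes from trial 1, codes from trial 2]
    "A": [["scratch", "dent", "scratch", "chip", "dent"],
          ["scratch", "dent", "scratch", "chip", "dent"]],
    "B": [["scratch", "dent", "chip",    "chip", "dent"],
          ["scratch", "chip", "chip",    "chip", "dent"]],
    "C": [["scratch", "dent", "scratch", "dent", "dent"],
          ["scratch", "dent", "scratch", "chip", "dent"]],
}

def repeatability(trial1, trial2):
    """Within-appraiser agreement: fraction of items the appraiser
    coded identically on both trials."""
    matches = sum(a == b for a, b in zip(trial1, trial2))
    return matches / len(trial1)

def reproducibility(all_trials):
    """Between-appraiser agreement: fraction of items on which every
    appraiser assigned the same code on every trial."""
    n_items = len(next(iter(all_trials.values()))[0])
    agree = 0
    for i in range(n_items):
        codes = {t[i] for appraiser in all_trials.values() for t in appraiser}
        agree += len(codes) == 1
    return agree / n_items

for name, (t1, t2) in trials.items():
    print(f"{name} repeatability: {repeatability(t1, t2):.0%}")
print(f"reproducibility: {reproducibility(trials):.0%}")
```

Low repeatability for one appraiser points to individual confusion about the criteria; low reproducibility despite high repeatability points to appraisers who are internally consistent but disagree with each other, which is the systemic, procedural case described above.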
An attribute agreement analysis allows the impact of repeatability and reproducibility on accuracy to be assessed simultaneously.
It lets the analyst study the responses of multiple appraisers across multiple scenarios, and it compiles statistics that assess their ability to agree with themselves (repeatability), with each other (reproducibility), and with a known standard or correct value (overall accuracy) for each characteristic. If the audit is well planned and designed, it may reveal enough about the sources of accuracy problems to justify a decision not to perform a full attribute agreement analysis at all. Where the audit does not provide sufficient information, the attribute agreement analysis supports a more detailed investigation that points to the training and measurement-system modifications needed. The audit should help determine which specific people and codes are the main sources of problems, and the attribute agreement assessment should help determine the relative contributions of repeatability and reproducibility problems for those specific codes (and individuals).
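The third statistic, agreement with a known standard, can be sketched in the same spirit. The data and code names below are hypothetical; the point is only to show how per-appraiser accuracy and the problem codes driving it fall out of the comparison against the standard.

```python
# Hypothetical data: the known-correct code for each of 5 items,
# and each appraiser's single coding of those items.
standard = ["scratch", "dent", "scratch", "chip", "dent"]

appraisals = {
    "A": ["scratch", "dent", "scratch", "chip", "dent"],
    "B": ["scratch", "chip", "chip",    "chip", "dent"],
    "C": ["scratch", "dent", "scratch", "dent", "dent"],
}

def accuracy(codes, standard):
    """Fraction of items on which the appraiser matches the known standard."""
    return sum(c == s for c, s in zip(codes, standard)) / len(standard)

def problem_codes(codes, standard):
    """Standard codes the appraiser misclassified, to localize which
    categories are the main sources of accuracy problems."""
    return sorted({s for c, s in zip(codes, standard) if c != s})

for name, codes in appraisals.items():
    print(f"{name}: accuracy {accuracy(codes, standard):.0%}, "
          f"trouble with {problem_codes(codes, standard)}")
```

Tabulating accuracy and problem codes per appraiser is what lets the analysis separate individual issues (one appraiser misses one code) from procedural ones (every appraiser misses the same code).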