Should We Teach AI to Make Mistakes?
Humans are prone to errors, and our behaviour can be unpredictable. This is especially evident in the field of visual quality control, where human involvement means that assessments can be subjective.
Consider the evaluation of ripples, scratches, or other imperfections: these judgments often lack precise boundaries, leaving the inspector to decide whether an issue constitutes a defect or is acceptable. Such decisions can be swayed by numerous factors: the factory's conditions, the inspector's mood, their personality, and whether they are fatigued. Consequently, the quality of output varies, and humans err simply because they cannot maintain consistent detection over time.
Artificial Intelligence (AI), in contrast, operates within the parameters we set. It adheres to the boundaries we teach it: if we define what constitutes a defect and what is acceptable, it makes the same decision every time. However, this raises a pertinent question: should we want AI to exhibit more sensitivity and adaptability to circumstances? Do we need AI to "make mistakes" in a manner akin to humans? If so, rigid adherence to fixed specifications is not enough. In situations where production pressures warrant a compromise on quality, AI could be taught to accept minor defects or to show greater tolerance towards new materials.
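One way to picture this is a defect check whose strictness is a tunable policy rather than a fixed pass/fail rule. The sketch below is purely illustrative; the function name, severity scale, and threshold values are hypothetical, not taken from any real inspection system.

```python
def classify_defect(severity: float, tolerance: float = 0.3) -> str:
    """Return a verdict for a measured defect severity in [0, 1].

    tolerance is how much imperfection is acceptable. Raising it makes
    the system more forgiving, e.g. under production pressure or when
    inspecting a new, less familiar material.
    """
    if severity <= tolerance:
        return "accept"
    return "reject"


# The same scratch (severity 0.4) passes or fails depending on policy:
print(classify_defect(0.4, tolerance=0.3))  # strict policy: reject
print(classify_defect(0.4, tolerance=0.5))  # relaxed policy: accept
```

The point is that "human-like discretion" need not mean randomness: it can be an explicit, adjustable tolerance that operators vary with context, while the decision itself remains repeatable for any given setting.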
In deploying AI solutions, we are already encountering requests to imbue AI with a semblance of humanity. It seems likely that this will become a standard requirement, moving AI closer to human-like discretion and decision-making.