Quality Inspection AI – Thanks, but we have expert humans doing our QA

by DarwinAI

How Quantitative Explainable AI can make human visual inspections more productive     

“Thanks, but we have expert humans doing our QA” is a common response when quality inspection AI is proposed to manufacturers. For a long time this was a fair response: there were few suitable AI offerings that delivered enough benefit to justify the cost of installation and operation.

What most manufacturers don’t realize is that AI can now coexist with human quality experts to improve the overall efficiency of the QA process. More importantly, deep learning technology now exists that can emulate the cognitive capabilities humans bring to spotting abnormalities, performing at a comparable level. This is a great opportunity to make manufacturing processes more efficient.

Depending on the complexity of the inspection task, AI can either conduct the visual inspection with enough accuracy to free the human for a more fulfilling task, or it can be integrated into the process to present the human with an assessment of what it has found and let the human make the final decision. In the latter case, it saves your experts’ time and energy and directs them toward work that produces better results.

Even the best and most experienced human inspector suffers fatigue, or is occasionally thinking about getting to their kids’ activities on time, or can’t remember whether they left the stove on. Human error is always in play; it’s an inescapable part of being human. Several studies suggest inspectors are only 77-87% accurate, depending on the complexity of the task. AI does not get tired, and it has no family members or household appliances to worry about. It can perform with higher precision and reliability, particularly when specialized tools (like microscopes or x-ray machines) are required for inspection.

Your experts can also collaborate when the AI flags a particular image as containing a fault. Say 4 of your experts agree that the AI is correct and the image shows a defect, but the remaining 2 disagree. They can hash it out amongst themselves and reach a consensus that the AI will learn from. You can then tell exactly who approved that decision, and when and where it occurred, for traceability and accountability.
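To make the consensus-and-audit idea concrete, here is a minimal sketch of how such a review record might look. All names, classes, and fields here are hypothetical illustrations, not DarwinAI’s actual schema or API:

```python
# Hypothetical sketch of an expert-review record: each vote is kept with
# the inspector's identity and a timestamp, so the final decision is
# traceable to who approved it and when.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Vote:
    inspector: str
    agrees_with_ai: bool
    timestamp: datetime

@dataclass
class DefectReview:
    image_id: str
    votes: list = field(default_factory=list)

    def record_vote(self, inspector: str, agrees: bool) -> None:
        self.votes.append(Vote(inspector, agrees,
                               datetime.now(timezone.utc)))

    def consensus(self) -> bool:
        """Majority decision; every vote stays on record for audit."""
        agree = sum(v.agrees_with_ai for v in self.votes)
        return agree > len(self.votes) / 2

# The 4-vs-2 scenario from the text: majority agrees the AI found a defect.
review = DefectReview("part_0042.png")
for name, agrees in [("A", True), ("B", True), ("C", True),
                     ("D", True), ("E", False), ("F", False)]:
    review.record_vote(name, agrees)

print(review.consensus())  # True: 4 of 6 inspectors agree with the AI
```

A record like this is what makes the "who approved it, and when" question answerable later, and the agreed-upon outcome can be fed back into the model as a validated label.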

You will also need skilled subject matter experts to label the data used to train the AI. If your training data averages only 8 out of 10 images labelled correctly, even a model that perfectly reproduces those labels will only ever be 80% accurate.
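The arithmetic behind that ceiling can be made explicit. This is an illustrative calculation for a binary (pass/fail) inspection task, under the simplifying assumption that label errors and model errors are independent; the function name is ours, not DarwinAI’s:

```python
def expected_true_accuracy(label_acc: float, model_label_agreement: float) -> float:
    """Expected accuracy against ground truth for a binary pass/fail task.

    When the model matches a wrong label it is wrong; when it disagrees
    with a wrong label it is (accidentally) right. Assumes label errors
    and model errors are independent.
    """
    return (model_label_agreement * label_acc
            + (1 - model_label_agreement) * (1 - label_acc))

# A "perfect" model (100% agreement with its training labels) tops out
# at the label accuracy itself: 8/10 correct labels -> 80% true accuracy.
print(expected_true_accuracy(0.8, 1.0))  # 0.8
```

The takeaway is that labelling quality caps model quality: no amount of training recovers accuracy the labels never contained.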

While some basic tasks can be performed independently by AI, the models behind them must be trained, and their output inspected, by human operators. One benefit of an AI system is its ability to continuously learn, whereas traditional technology is fixed by how it’s programmed. One of the most effective ways to achieve this is to have human experts review and validate the AI’s decisions. Monitoring those decisions not only gives the human a level of comfort; the continued interaction builds a feedback loop that ensures each iteration of the AI model delivers consistently better results. The longer it’s in use, the more data it collects, the more human feedback it validates, and the more accurate it becomes.

Using DarwinAI’s Quantitative Explainable AI, the following is achieved: 

  • A human’s job is made easier and less time consuming  
  • The quality inspector can see exactly why the AI has made the decision it has

What Does Quantitative Explainable AI Look Like?

Our researchers took a public parts inspection algorithm and applied our Quantitative Explainable AI to it. Using a mask that shades out the irrelevant areas of each image, these images show which regions the AI used to identify damaged parts. Quality inspectors can immediately tell which part of the image to scrutinize for damage: if manually inspecting a part takes 50 seconds to be confident they’ve found all possible damage, they will only need 5-10 seconds to validate, since the AI has completed the initial scan for them. They can then use their expert judgement to agree or disagree with the model’s assessment, and the AI will use that decision to improve itself in future iterations.
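The masking step itself is simple to sketch. This is an illustrative toy example, not DarwinAI’s implementation: given a greyscale image and a boolean relevance mask from the explainability output, pixels the model did not rely on are dimmed so the inspector’s eye goes straight to the highlighted region:

```python
# Illustrative sketch: dim pixels outside the relevance mask so only the
# region the model's decision relied on keeps its full brightness.
# `image` is rows of greyscale values 0-255; `mask` is True where relevant.
def apply_relevance_mask(image, mask, dim_factor=0.25):
    """Return a copy of `image` with non-relevant pixels dimmed."""
    return [
        [px if keep else int(px * dim_factor)
         for px, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[200, 200, 200],
         [200,  90, 200],
         [200, 200, 200]]
mask  = [[False, False, False],
         [False, True,  False],
         [False, False, False]]

highlighted = apply_relevance_mask(image, mask)
# Only the centre pixel (the suspected defect) keeps its original value
# of 90; the surrounding pixels drop from 200 to 50 (25% brightness).
```

In practice the mask would come from the explainability method itself and be applied per channel to a full-resolution photograph, but the principle is the same: suppress what the model ignored, surface what it used.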

Here’s an example: in the above left image, you see a defective screw. The highlighted area shows where the AI was looking when it decided the screw was defective. Catching the issue before the part goes into an assembly prevents problems down the line. Additionally, DarwinAI’s software retains those images, so every defective part can be used as evidence when holding the supplier accountable.

How Do I Know If It’s For Me?

The reasons to use Quantitative Explainable AI are many, but most come down to dollars and cents. On average it currently takes about 2 days to determine whether a part or assembly has faults. Over those 2 days, that’s a lot of parts and products that must be repurposed or scrapped, increasing material costs and lengthening delivery times to your customers. Nobody wants to start paying penalties because you could not deliver on time. Depending on your pipeline, some defective products could already be on their way to consumers before the faults are known. Recalls are costly, and if the defect is widespread or serious enough, the damage to your corporation’s reputation could be incalculable. If you can find all defects in a highly productive manner, that completely changes the game.

Click the link to learn more about DarwinAI or Manufacturing Use Cases & White Paper Download.