How digital inspectors decide
Instant reality check
Decision making relies on the interpretation of what we perceive.
Accurately assessing visual information requires prior knowledge of what objects are supposed to look like, where they should be positioned, and what issues can arise.
In machine-learning terms, performing a visual inspection is called inference. Inference is the process of recognizing pre-learned patterns in data; in other words, applying existing knowledge to an incoming image. Is everything as it should be? If not, what's wrong?
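Concretely, inference can be pictured as matching an incoming image's features against patterns learned during training. The sketch below is purely illustrative: the feature vectors, labels, and prototype-matching approach are invented for this example and are not BrainMatter's actual models.

```python
import math

# Hypothetical "learned knowledge": one prototype feature vector
# per state the inspector has been trained to recognize.
PROTOTYPES = {
    "ok": [0.9, 0.1, 0.8],
    "missing_part": [0.2, 0.1, 0.1],
    "wrong_colour": [0.9, 0.9, 0.8],
}

def infer(features):
    """Return the best-matching pre-learned pattern for an incoming image.

    `features` stands in for the feature vector extracted from an image.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(PROTOTYPES, key=lambda k: distance(features, PROTOTYPES[k]))

# An image whose features sit close to the "ok" prototype passes;
# one close to a known error pattern is flagged with that error.
print(infer([0.88, 0.12, 0.79]))  # -> ok
print(infer([0.85, 0.88, 0.75]))  # -> wrong_colour
```

Real systems use deep neural networks rather than fixed prototypes, but the principle is the same: the answer to "is everything as it should be?" comes from comparing new input to previously learned patterns.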
In a manufacturing context, for example, more than half of quality checks involve visual confirmation: ensuring parts are in the correct locations, have the right shape, colour, or texture, and are defect-free. BrainMatter learns to recognize when parts are misplaced, out of shape, the wrong colour or texture, or defective.
Inference can be performed on a central machine or on many edge devices.
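Whether inference runs on one central machine or on many edge devices, the classification logic is identical; only where it executes differs. A minimal, hypothetical sketch (the `classify` stub and the camera streams are invented):

```python
# Hypothetical stand-in for a trained model's prediction step.
def classify(image):
    return "ok" if sum(image) > 1.0 else "defect"

def central_inference(images_from_all_cameras):
    """One central machine processes every camera's images in turn."""
    return [classify(img) for img in images_from_all_cameras]

def edge_inference(image):
    """An edge device classifies only its own camera's image, locally,
    so the verdict is available without a round trip to a server."""
    return classify(image)

streams = [[0.6, 0.7], [0.1, 0.2], [0.9, 0.4]]
print(central_inference(streams))             # -> ['ok', 'defect', 'ok']
print([edge_inference(s) for s in streams])   # same verdicts, computed locally
```

The trade-off is operational rather than algorithmic: central inference concentrates compute and simplifies model updates, while edge inference reduces latency and network dependence.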
Learning a skill
The knowledge digital inspectors use to interpret the state of an asset is learned from a human expert, who curates the examples the system learns from.
The actual learning happens in processing cycles called training. In a training session, examples are fed to the AI models in our BrainMatter platform. After each session, your domain experts analyze the performance to decide whether additional examples are needed to strengthen the system's knowledge. Once a skill is learned, digital inspectors can perform the inspection task automatically.
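The train-review-retrain cycle described above can be sketched as a loop. Everything here is invented for illustration: the `train` and `evaluate` stubs, the target threshold, and the expert's extra batches stand in for real training sessions and human review.

```python
def train(examples):
    """Stub: one training session over the curated examples."""
    return {"knowledge": {label for _, label in examples}}

def evaluate(model, validation):
    """Stub: fraction of validation cases whose label the model has learned."""
    hits = sum(1 for _, label in validation if label in model["knowledge"])
    return hits / len(validation)

def training_sessions(examples, validation, target=1.0, new_batches=None):
    """Retrain until the domain expert's target is met, folding in the
    additional examples the expert curates after each review."""
    new_batches = list(new_batches or [])
    model = train(examples)
    while evaluate(model, validation) < target and new_batches:
        examples = examples + new_batches.pop(0)  # expert supplies more examples
        model = train(examples)
    return model

examples = [("img1", "ok"), ("img2", "ok")]
validation = [("v1", "ok"), ("v2", "scratch")]
extra = [[("img3", "scratch")]]  # curated after the first review
model = training_sessions(examples, validation, new_batches=extra)
print(evaluate(model, validation))  # -> 1.0
```

The point of the sketch is the control flow: training alone does not finish the job; the expert's review after each session decides whether another round of curated examples is required.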
Each time the digital inspector interprets an image, the result becomes a new example. It can be used to strengthen the inspector's knowledge of the object, or collected into a set of examples for learning about a new object, a property of that object, or a detectable error.
Human experts serve as coaches who guide this continuous learning process by curating new examples for digital inspectors out of processed inputs.
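The feedback loop in the last two paragraphs, where processed inputs become candidate examples that a human coach curates back into the training set, could be sketched as follows. The `infer` and curation rules are hypothetical toy stand-ins.

```python
def continuous_learning(incoming_images, infer, expert_approves):
    """Turn each inference result into a candidate example; a human
    coach decides which ones join the next training set."""
    new_examples = []
    for image in incoming_images:
        label = infer(image)               # digital inspector's verdict
        if expert_approves(image, label):  # human coach curates
            new_examples.append((image, label))
    return new_examples  # fed into the next training session

# Toy run: the coach keeps only the "defect" findings for retraining.
infer = lambda img: "defect" if img < 0.5 else "ok"
coach = lambda img, label: label == "defect"
print(continuous_learning([0.2, 0.9, 0.4], infer, coach))
# -> [(0.2, 'defect'), (0.4, 'defect')]
```

In practice the coach also corrects wrong verdicts before they re-enter training, which is what keeps the loop improving rather than reinforcing mistakes.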