Aug 11, 2021
Artificial Vision and the Limits of the Human Eye
Artificial intelligence
Our eyes are so important, and the information they process so voluminous, that they monopolize an entire lobe of the brain: the occipital lobe. A large share of our brain activity is devoted to processing what we see, from interpreting the signal captured by our eyes (color, shape, movement, beauty, and so on) to placing it in the context of the environment around us.
Needless to say, in an industrial environment, inspecting something carefully to find a defect or an anomaly, especially when you do it over and over, is downright exhausting!
For nearly 60 years, artificial vision has been a dream of researchers and an inspiration for many science fiction movies. Computer vision, also called machine vision or artificial vision, is now at an extraordinary point in its history: the pace of innovation in vision systems has been accelerating for about a decade.
Two major changes in the last decade have enabled this spectacular growth. The first is the arrival of deep neural networks as a learning model in artificial intelligence. Neural networks have radically changed the game in artificial intelligence, and in computer vision in particular: models built on them have opened up a whole new world of possibilities in image processing.
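To give a sense of how accessible this has become, here is a minimal sketch, not taken from any particular product, of running a pretrained deep network on a single image with PyTorch and torchvision; the image file name is illustrative.

```python
# Minimal sketch: classify one image with a pretrained deep network.
# The file name "part_photo.jpg" is a placeholder; any RGB photo works.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing this model expects

image = Image.open("part_photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], f"{top_prob.item():.1%}")
```

A dozen lines of code now do what required a research team not long ago, which is precisely the shift the last decade brought.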
The second is a dramatic drop in the price of industrial-grade cameras with embedded microprocessors. Together, these two changes are the perfect combination to make this technology both powerful and affordable!
The familiar saying “I’ll believe it when I see it!” is the exact opposite of the reflex businesses need today, given what vision technologies make possible. We need to stop using our eyes to validate things and start using them to design and innovate.
Production lines and machines keep getting faster, and demands for perfection keep rising. Computer vision technologies have far surpassed what humans can do with naked-eye inspection, and that is before even considering the incredible speeds they can reach.
Consider that today’s vision systems can perform very complex image analysis in a fraction of a second, at very low cost. Applying artificial intelligence to computer vision is a “quick win” that is too often underestimated in manufacturing, and in business in general.
Whether for simple dimensional validation or very advanced quality control, training an algorithm takes anywhere from a few weeks to a few months at most. After this initial training, the system already outperforms a human, and it keeps getting better as it is retrained on new data.
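As an illustration of what that training step can look like, here is a minimal transfer-learning sketch, assuming a folder of images labeled “good” and “defect”; the directory layout, class names, and hyperparameters are all hypothetical, not from the article.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ResNet to
# classify production-line images as "good" or "defect".
# Paths, class names, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the pretrained weights apply.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects one folder per class, e.g. data/train/good, data/train/defect.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights; swap the head for our two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few epochs is often enough for a first pass
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

Retraining on new data as it accumulates is what lets the system keep improving after that first deployment.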
In 2012, billionaire and Sun Microsystems cofounder Vinod Khosla went so far as to say that radiologists still practicing in 10 years “will be killing patients.” Okay, it’s almost 10 years later, and we still need radiologists . . . but for how long?
However, if you are asking a human to visually check the conformity of parts coming off an assembly line, or to find defects in the vegetables you produce, it is high time to stop and entrust this task to a machine.
The rule is simple: anything a human can see, a vision system can see too. And when that is the case, please don’t assign the task to the human!