Basic principles of machine vision you should know

As a universal sensor, the camera provides information that makes it possible to automate processes. With the correct application design, machine vision and optical inspection systems perform their work tirelessly and reliably. And it is the principles of correct application design that we will mainly focus on in this series.

For the successful implementation of a machine vision system, we need not only quality cameras and powerful software; most important of all are the knowledge and experience of the application's author, who must choose the appropriate technical means and the structure of the application program.

On the technical side, we must choose the right solution for, among other things:

  • the number and arrangement of cameras
  • the camera type, resolution, image quality, brightness dynamics, noise, color capability and frame rate
  • the lens type, resolution, aperture and focal length
  • the method of lighting, the lighting elements and their control
  • the equipment and required computer performance
  • GPU usage and required GPU performance

In the field of software, we must decide, for example, on:

  • whether to process images on the GPU or the CPU
  • calibration of the image field geometry
  • adjustment of brightness, contrast and color
  • noise reduction, sharpness correction and image filtering
  • image processing operations such as thresholding, edge detection, morphological filtering, etc.
  • a sequence of appropriate steps to achieve the desired functionality
  • storing images and video to disk and transferring images over a computer network
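To give a feel for what such a sequence of steps looks like in practice, here is a minimal sketch of a two-stage inspection pipeline in Python with NumPy: global thresholding followed by a 3×3 morphological erosion. The function name `inspect` and the parameter values are illustrative only, not part of any particular library.

```python
import numpy as np

def inspect(image, threshold=128):
    """Minimal pipeline sketch: threshold, then a 3x3 morphological
    erosion to suppress single-pixel noise. Illustrative only."""
    # Global thresholding: pixels at or above `threshold` become foreground.
    binary = image >= threshold

    # 3x3 erosion: a pixel stays foreground only if its entire
    # 3x3 neighbourhood is foreground; border pixels are dropped.
    eroded = np.zeros_like(binary)
    eroded[1:-1, 1:-1] = (
        binary[:-2, :-2] & binary[:-2, 1:-1] & binary[:-2, 2:]
        & binary[1:-1, :-2] & binary[1:-1, 1:-1] & binary[1:-1, 2:]
        & binary[2:, :-2] & binary[2:, 1:-1] & binary[2:, 2:]
    )
    return eroded

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200   # a bright 3x3 square
img[0, 4] = 255       # an isolated noise pixel
result = inspect(img)
# Only the centre of the square survives the erosion;
# the isolated noise pixel is removed.
```

In a real application each stage would be chosen and parameterized to match the part, the lighting and the defects being inspected; the point here is only the structure: a chain of simple operations, each feeding the next.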

Application being developed

All these decisions have a major impact on the accuracy and robustness of the developed application. The programmer must be well aware of the possibilities and limitations of the individual image processing methods and algorithms: where each method reaches its limits, what it is suitable for and what it is not. This series of articles will therefore briefly summarize the basic methods and procedures commonly used in machine vision systems. The series cannot aim to explain the current state of knowledge in computer image processing consistently and in detail; those interested in deeper theory will find plenty of literature and further information on the Internet.

When processing images, many algorithms lend themselves very well to parallel execution, which can significantly increase the overall performance of the system. Today's computers, which combine multi-core CPUs with massively parallel GPU architectures, make it possible to run machine vision algorithms of a quality that was until recently practically unattainable.
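A simple illustration of why image processing parallelizes so well: a pixel-wise operation has no data dependencies between pixels, so an image can be split into strips that are processed independently. The sketch below, using only NumPy and the standard library, is a toy illustration of the idea rather than a production technique; the function names are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def adjust_strip(strip, gain=1.5):
    """Pixel-wise brightness gain on one strip, clipped to the 8-bit range."""
    return np.clip(strip.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def adjust_parallel(image, workers=4):
    """Split the image into horizontal strips and process them concurrently.
    Because the operation is pixel-wise, the strips are fully independent."""
    strips = np.array_split(image, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(adjust_strip, strips)))

img = np.full((8, 8), 100, dtype=np.uint8)
out = adjust_parallel(img)   # every pixel: 100 * 1.5 = 150
```

A GPU takes the same idea to its extreme, assigning thousands of pixels to thousands of lightweight threads at once.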

Therefore, we will try to limit the mathematical form of the description and will prefer an accessible style of explanation, even at the cost of some loss of exactness.

Digital images

When processing images on a computer, we work with digital image data stored in two-dimensional matrices, so-called images. The elements of these matrices are picture elements, so-called pixels (from the English "picture element"). Pixel coordinates and pixel values are usually integers.
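In NumPy, the usual Python tool for such data, this maps directly onto a two-dimensional array of unsigned 8-bit integers, indexed by integer (row, column) coordinates. The image dimensions below are arbitrary examples:

```python
import numpy as np

# An 8-bit monochrome image as a 2-D matrix of pixels:
# 480 rows x 640 columns, initially all black (value 0).
image = np.zeros((480, 640), dtype=np.uint8)

# Pixel coordinates are integers (row, column); values are integers 0..255.
image[100, 200] = 255   # set the pixel at row 100, column 200 to white
```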

As indicated by DZOptics, the quality of a digital image is associated with its spatial, brightness (more precisely radiometric, but for the purposes of this article we will use the usual term brightness) and temporal resolution. The spatial resolution, determined by the pixel spacing, increases with the number of rows and columns of the image matrix. The brightness resolution is given by the number of quantization levels; for a monochrome image, a value of zero is black and the highest displayable value is white. The temporal resolution is given by the time intervals between the individual frames.
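The effect of brightness resolution can be demonstrated by requantizing an 8-bit image (256 levels) down to fewer levels. The helper below is a hypothetical sketch written for this article, not a library function; it snaps each pixel to the nearest of `2**bits` bins and stretches the result back to the full 0-255 range so that black stays black and white stays white.

```python
import numpy as np

def requantize(image, bits):
    """Reduce an 8-bit image to 2**bits brightness levels,
    keeping the full 0..255 output range. Illustrative sketch."""
    levels = 2 ** bits
    step = 256 // levels                 # width of one quantization bin
    q = (image // step) * step           # snap each pixel to its bin's lower edge
    # Stretch so the top bin maps back to pure white (255).
    return (q.astype(np.int32) * 255 // (256 - step)).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8).reshape(1, 256)   # smooth 8-bit gradient
coarse = requantize(ramp, 2)   # only 4 brightness levels: visible banding
```

With only a few levels the smooth gradient turns into visible bands, which is exactly the artifact a designer trades off against sensor cost and data rate when choosing bit depth.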
