Five key components of a machine vision system


Machine vision refers to the use of computer vision in industrial and non-industrial applications. While computer vision is primarily concerned with processing the image itself, machine vision additionally requires hardware I/O (input/output) and computer networks to transmit data to and from other process components, such as a robot arm.

Inspection of products such as microprocessors, cars, food, and pharmaceuticals is one of the most common uses of machine vision. Machine vision systems are increasingly used to solve industrial inspection issues, allowing for complete automation of the inspection process while improving accuracy and efficiency.

By combining hardware and software, machine vision guides devices in executing their functions based on the capture and processing of images. Industrial computer vision uses many of the same algorithms and approaches as academic, educational, and governmental/military computer vision applications, but there are some differences.

The major components of a machine vision system include the lighting, lens, image sensor, vision processing, and communications. Lighting illuminates the part to be inspected, allowing its features to stand out so the camera can see them. The lens captures the image and presents it to the sensor in the form of light. The sensor in a machine vision camera converts this light into a digital image sent to the processor for analysis.

Most machine vision hardware components, such as lighting modules, sensors, and processors, are commercial off-the-shelf (COTS). Machine vision systems can be assembled from COTS or purchased as an integrated system with all components in a single device.

This post will explore the key components of a machine vision system: lighting, lenses, the image sensor, vision processing, and communications.

1. Lighting

The most important factor in achieving successful machine vision results is lighting. Machine vision systems generate images by analyzing the light reflected from an object, not the object itself. A lighting technique entails the placement of a light source relative to the part and the camera. A specific lighting technique can enhance an image by suppressing some features while emphasizing others; silhouetting a part, for example, obscures surface details so that its edges can be measured.

  • Backlighting: Backlighting enhances an object’s outline for applications that only require external or edge measurements. Backlighting aids in detecting shapes and improves the accuracy of dimensional measurements.
  • Axial diffuse lighting: Axial diffuse lighting couples light into the optical path from the side (coaxially). A semitransparent mirror, illuminated from the side, casts light downwards onto the part. The part reflects light back to the camera through the semitransparent mirror, resulting in an image that is very evenly illuminated and uniform in appearance.
  • Structured light: Structured light happens when a light pattern (plane, grid, or more complex shape) is projected onto an object at a specific angle. It can be useful for contrast-independent surface inspections, dimensional data acquisition, and volume calculations.
  • Dark-field illumination: Surface defects are more easily revealed with directional lighting, including dark-field and brightfield illumination. For low-contrast applications, dark-field illumination is usually preferred. Specular light is reflected away from the camera in dark-field illumination, while diffused light from surface texture and elevation changes is reflected into the camera.
  • Brightfield illumination: High-contrast applications benefit from brightfield illumination. However, highly directional light sources such as quartz halogen and high-pressure sodium may produce sharp shadows and do not provide uniform illumination across the entire field of view. As a result, hot spots and specular reflections on shiny or reflective surfaces may necessitate a more diffused light source to provide even brightfield illumination.
  • Diffused dome lighting: Diffused dome lighting provides the most uniform illumination of important features while masking irregularities that may distract from the features of interest.
  • Strobe lighting: In high-speed applications, strobe lighting is used to freeze moving objects for examination. Blurring can also be avoided by using a strobe light.
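
As a concrete illustration of the backlighting technique above, a silhouetted part can be measured by thresholding a scan line of pixels: the backlit background saturates bright while the part appears dark, so the part's width is simply the run of dark pixels. A minimal sketch in Python (the threshold and pixel values are hypothetical, not from any particular camera):

```python
def silhouette_width(row, threshold=128):
    """Measure the width of a backlit part along one scan line.

    In a backlit image the background is bright and the part is a
    dark silhouette, so the part's edges can be located with a
    simple intensity threshold.
    """
    dark = [i for i, v in enumerate(row) if v < threshold]
    if not dark:
        return 0
    # Width in pixels between the first and last dark pixel, inclusive.
    return dark[-1] - dark[0] + 1

# Hypothetical 12-pixel scan line: bright background, dark part in the middle.
scan_line = [250, 248, 40, 35, 30, 32, 38, 45, 251, 249, 252, 250]
print(silhouette_width(scan_line))  # prints 6
```

Multiplying the pixel width by a calibrated millimeters-per-pixel factor converts this to a physical dimension.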

2. Lenses

The lens captures the image and delivers it to the camera’s image sensor. Lenses vary in optical quality and price, and the lens used determines the quality and resolution of the captured image. Most vision system cameras offer two types of lenses: interchangeable and fixed. The most common interchangeable lens mounts are C-mounts and CS-mounts; using the right lens and extension combination yields the best image. A standalone vision system with a fixed lens typically uses autofocus: either a mechanically adjusted lens or a liquid lens that can focus on the part automatically. Autofocus lenses usually have a fixed field of view at a given distance.
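
The relationship between lens, sensor, and field of view noted above can be estimated with the thin-lens magnification approximation: field of view is roughly sensor size times working distance divided by focal length, valid when the working distance is much larger than the focal length. The sensor and lens values below are illustrative, not recommendations:

```python
def field_of_view(sensor_mm, focal_length_mm, working_distance_mm):
    """Approximate one-axis field of view (mm) for a fixed lens.

    Thin-lens approximation: FOV ~ sensor size * working distance
    / focal length, assuming working distance >> focal length.
    """
    return sensor_mm * working_distance_mm / focal_length_mm

# Hypothetical setup: 8.8 mm wide sensor, 25 mm C-mount lens, part 500 mm away.
print(round(field_of_view(8.8, 25, 500), 1))  # prints 176.0
```

Running the same calculation for the sensor's height gives the vertical field of view.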

3. Image sensor

The camera’s ability to capture a properly illuminated image of the inspected object depends not only on the lens but also on the image sensor. Image sensors typically use charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology to convert light (photons) into electrical signals (electrons). The image sensor’s primary function is to capture light and convert it into a digital image while balancing noise, sensitivity, and dynamic range. The image is made up of pixels.

Low light creates dark pixels, while bright light creates bright pixels. It is critical to ensure the camera has the correct sensor resolution for the job: the higher the resolution, the more detail an image will have and the more accurate the measurements. Part size, inspection tolerances, and other parameters dictate the required resolution.
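
The rule of thumb above, that part size and tolerance dictate resolution, can be sketched numerically: if the smallest tolerance must be covered by several pixels to be resolved reliably, the required sensor resolution follows directly. The pixels-per-tolerance factor of 4 and the dimensions below are illustrative assumptions:

```python
import math

def required_pixels(field_of_view_mm, tolerance_mm, pixels_per_tolerance=4):
    """Minimum sensor pixels along one axis.

    A common rule of thumb is to cover the smallest measurement
    tolerance with several pixels (here 4) so the feature can be
    resolved reliably.
    """
    return math.ceil(field_of_view_mm / tolerance_mm * pixels_per_tolerance)

# Hypothetical inspection: 100 mm field of view, 0.1 mm tolerance.
print(required_pixels(100, 0.1))  # prints 4000
```

A 4000-pixel requirement along one axis would point toward, say, a sensor with at least that many columns, or a tighter field of view.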

4. Vision processing

Vision processing extracts information from a digital image, and it can happen either externally in a PC-based system or internally in a standalone vision system. The software carries out several processing steps. First, the sensor acquires an image. Pre-processing may be required in some cases to optimize the image and ensure that all of the necessary features are visible. The software then locates the specific features, performs measurements, and compares them to the specification.

Finally, a decision is reached, and the outcome is communicated. While many physical components of a machine vision system (such as lighting) have similar specifications, it is the algorithms that set systems apart, and they should be at the top of the priority list when comparing solutions. Depending on the system or application, vision software configures camera parameters, makes the pass-fail decision, communicates with the factory floor, and supports HMI development.
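
The processing steps described above (acquire, pre-process, locate and measure features, compare to specification, decide) can be sketched as a pipeline. Every function body here is a deliberately simple stand-in for a real algorithm, operating on a hypothetical one-dimensional scan line:

```python
def preprocess(image):
    # Stand-in pre-processing: stretch contrast to the full 0-255 range.
    lo, hi = min(image), max(image)
    scale = 255 / (hi - lo) if hi > lo else 1
    return [round((v - lo) * scale) for v in image]

def measure_feature(image, threshold=128):
    # Stand-in measurement: count pixels darker than the threshold.
    return sum(1 for v in image if v < threshold)

def inspect(image, spec_min, spec_max):
    """Run the pipeline and return a pass (True) / fail (False) decision."""
    image = preprocess(image)
    measurement = measure_feature(image)
    return spec_min <= measurement <= spec_max

# Hypothetical scan line; the part must occupy 3 to 6 dark pixels to pass.
row = [200, 190, 60, 55, 50, 58, 195, 205]
print(inspect(row, 3, 6))  # prints True
```

In a real system each stage would be a configurable tool (filtering, blob analysis, edge detection, gauging), but the pass-fail structure is the same.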

5. Communications

Because vision systems frequently use a variety of off-the-shelf components, these components must coordinate and connect to other machine elements quickly and easily. Typically, this is accomplished by sending discrete I/O signals or data over a serial connection to a device that logs or uses the information. Discrete I/O points can be connected to a programmable logic controller (PLC), which can then use the data to control a work cell or an indicator such as a stack light, or wired directly to a solenoid that triggers a reject mechanism.
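
As a sketch of the serial-data path just described, an inspection result can be serialized into a simple line-oriented ASCII message before being written to an RS-232 port or socket. The field layout here is invented for illustration; it is not any standard industrial protocol:

```python
def format_result(part_id, passed, measurement_mm):
    """Pack an inspection result into a line-oriented ASCII message.

    A PLC or data logger on the other end of the serial link would
    split the fields on ';' and act on the PASS/FAIL flag, e.g. by
    energizing a reject solenoid.
    """
    status = "PASS" if passed else "FAIL"
    return f"ID={part_id};RESULT={status};DIM={measurement_mm:.2f}\r\n"

# Hypothetical part 1042 failed with a 24.37 mm measurement.
msg = format_result(1042, False, 24.3651)
print(msg.strip())  # prints ID=1042;RESULT=FAIL;DIM=24.37
```

The same string could be written unchanged to a serial port handle or a TCP socket, which is part of the appeal of simple ASCII protocols.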

Data can be communicated over a serial connection using a traditional RS-232 serial output or Ethernet. Some systems use a higher-level industrial protocol, such as EtherNet/IP, and can be connected to a device such as a monitor or other operator interface to provide process-specific monitoring and control.