The History of Machine Vision – Timeline

Machine Vision

Machine vision is a technology that encompasses many engineering disciplines, including computer science, optics, mechanical engineering, and industrial automation. It uses image capture and analysis to perform tasks with speed and accuracy human eyes can’t match.

The common use cases of machine vision include inspecting manufactured goods, such as semiconductor chips, automobile parts, food, and pharmaceuticals.

It can visually identify issues such as product defects and process inefficiencies, which is critical for manufacturers trying to control costs and maintain high customer satisfaction. Machine vision systems also perform presence/absence detection, dimensional measurement, positioning, and counting.

The following timeline is a brief history of machine vision, from the early stages to the present. It also includes the major milestones in computer vision.

400 BC: Aristotle observed a partial solar eclipse by watching the sun’s image projected through the small spaces between the leaves of a tree. Over the centuries, this optical phenomenon led to the development of the camera.

500 BC-1700s: Development of the pinhole camera obscura, initially used to reproduce images as an aid to drawing and painting.

1839: The daguerreotype camera, named after its French inventor Louis Daguerre, was the first photographic camera developed for commercial manufacture.

1907: Russian scientist and inventor Boris Rosing developed electronic scanning methods of reproducing images.

1929: The kinescope, invented by Russian scientist Vladimir Zworykin, who had worked with Rosing, was the first practical electronic system for the transmission and reception of images.

1939-43: RCA and Albert Rose introduced the image orthicon tube, winning a production contract from the US Navy in 1944; it became a common video camera tube in American broadcasting from 1946 to 1968.

1950: RCA’s P. K. Weimer, S. V. Forgue, and R. R. Goodrich introduced the vidicon tube, a video camera tube design in which the target material is a photoconductor. Until the late 1970s, NASA used vidicon cameras on most of its unmanned deep-space probes equipped for remote sensing.

1955: Oliver Selfridge published a paper in which he envisioned “…eyes and ears for the computer.”

1959: The first digital image scanner was invented, transforming images into grids of numbers.

1960: To offer better image stability and to compete with RCA, Philips introduced the Plumbicon, Hitachi the Saticon (developed jointly with Sony and Thomson), and Sony the Trinicon, which Sony also used in some moderate-cost professional cameras in the 1980s, such as the DXC-1800 and BVP-1 models.

1963: Larry Roberts, widely regarded as the ‘father of computer vision,’ discussed the possibilities of extracting 3D geometrical information from 2D perspective views of blocks (polyhedra) in his MIT Ph.D. thesis. The same year, Frank Wanlass, an American electrical engineer, patented CMOS (complementary metal-oxide-semiconductor) logic, the technology behind both digital logic circuits and analog devices such as CMOS image sensors.

1965: Roberts published “Machine Perception of Three-Dimensional Solids,” demonstrating how a computer could produce a 3D model of a scene from a single 2D photograph.

1966: Marvin Minsky instructed a graduate student to connect a camera to a computer and have it describe what it saw.

1966-1972: Shakey the Robot was the first general-purpose mobile robot. The first prototype was a mobile cart with a TV camera and an optical range finder, controlled by an SDS-940 computer over radio and video links.

1969: The CCD, or charge-coupled device, was invented at Bell Laboratories by Willard Boyle and George E. Smith. It allows a charge to be transferred along the surface of a semiconductor from one storage capacitor to the next – a major contribution to digital imaging. Michael Tompsett, a British-born physicist who also worked at Bell Laboratories, designed and built the first video camera with a solid-state (CCD) sensor; he received a patent in 1972.
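That bucket-brigade transfer is easy to picture with a toy model. The Python sketch below is illustrative only – the charge values are invented and no device physics is modeled – but it shows a row of charge packets being shifted one well at a time toward a readout node, which is essentially how a CCD reads out a line of pixels:

```python
import numpy as np

# Toy "bucket brigade": on each clock cycle, every charge packet moves one
# well toward the readout node at the end of the row.
charges = np.array([5, 0, 8, 2, 7])   # photo-generated charge per pixel (invented)
readout = []
for _ in range(len(charges)):
    readout.append(int(charges[-1]))  # the last well feeds the output node
    charges = np.roll(charges, 1)     # shift every packet one well along
    charges[0] = 0                    # no new charge enters during readout
print(readout)  # [7, 2, 8, 0, 5] -- pixels read out in reverse spatial order
```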

1971: William K. Pratt and Harry C. Andrews founded the USC Signal and Image Processing Institute (SIPI), one of the first research institutes in the world dedicated to image processing.

1974: Bryce Bayer, an American scientist working for Kodak, brought vivid color image capture to digital photography with the invention of his Bayer filter.
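As a concrete illustration of the idea – a minimal sketch, not Kodak’s implementation – the snippet below simulates a Bayer mosaic, assuming the common RGGB layout and a random test image:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer filter: keep one color channel per photosite (RGGB)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic

# Green is sampled twice as densely as red or blue; demosaicing software
# later interpolates the two missing channels at every pixel.
rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(bayer_mosaic(rgb))
```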

1975-1979: SRI Vision Module, a device for industrial part recognition using a trainable decision tree procedure, was introduced.

1976: Image Understanding (IU), a machine vision technique, was applied to photo interpretation as part of the ARPA IU Program.

1978: Machine Intelligence Corporation (MIC) was founded.

1979: Nagel published “Digitization and analysis of traffic scenes,” extending image analysis to natural scenes with motion.

1980: Kunihiko Fukushima built the ‘neocognitron,’ the precursor of modern Convolutional Neural Networks.

1981: RANSAC (Random Sample Consensus) was published, introducing a new and widely adopted paradigm for robust model fitting in the presence of outliers.
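The core idea fits in a few lines. Below is a minimal, illustrative RANSAC sketch for fitting a 2D line to points contaminated by outliers; the function name, threshold, and test data are invented for this example, and Fischler and Bolles’ formulation covers general model fitting:

```python
import numpy as np

def ransac_line(points, iters=200, threshold=0.1, rng=None):
    """Fit a line y = a*x + b to points, robust to outliers (RANSAC sketch)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        # 1. Randomly sample the minimum number of points (2 for a line).
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Count points whose residual falls within the threshold.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < threshold).sum())
        # 3. Keep the model supported by the largest consensus set.
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# 50 points near y = 2x + 1 plus 20 gross outliers.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 50)
good = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.03, 50)])
bad = rng.uniform(0, 10, (20, 2))
model, support = ransac_line(np.vstack([good, bad]))
print(model, support)  # slope/intercept near (2, 1)
```

Repeatedly sampling minimal subsets and keeping the model with the largest consensus set is what makes the estimate robust to gross outliers.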

1982-1984: ImagCalc, an image analysis system with displays at multiple resolutions, perspective projections, and a wide range of image operators, was introduced. It was the first interactive single-user image processing and manipulation system coupled with a high-resolution bit-mapped display. It provided flexible access to 2D image processing tools.

1983: SRI established the DARPA/DMA Image Understanding Testbed system to provide a framework for evaluating and demonstrating the applicability of IU research results to automated cartography.

1984-1986: TerrainCalc, an interactive system for creating realistic sequences of perspective views of real-world terrain, introduced the concept of the fly-through by texture-mapping aerial imagery onto digital terrain models.

1984: The Automated Imaging Association (AIA), now the world’s largest machine vision trade association, was founded.

1985: StereoSys, a hierarchical area-based matching system for the automatic construction of 3D models from stereo pairs of images, was introduced.

1986: The Epipolar-Plane Image Analysis (EPI) system was introduced for effectively constructing a 3D description of a scene from a sequence of images.

Mid-1980s: Smart cameras for industrial applications were introduced, based on the optical mouse developed by Richard Lyon at Xerox in 1981 – the first imaging device and embedded processing unit combined in a compact system.

1986-1990: Apple developed the IEEE 1394 serial bus interface standard for high-speed communications, later branded FireWire.

1990s: The Automated Imaging Association (AIA) formed the Camera Link committee to develop standards.

1991-93: Multiplex recording devices were introduced, together with covert video surveillance for ATMs.

1996: Ernst Dickmanns developed autonomous navigation on highways.

1999: Gigabit Ethernet standard cables and equipment came into use, challenging the distance limitations of the Camera Link protocol and offering more speed.

2000: The Camera Link standard was introduced, led by companies such as JAI and Basler.

2001: Paul Viola and Michael Jones introduced the Viola-Jones framework, the first face detection method to work in real time.
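The detector is still distributed with OpenCV as a pretrained Haar cascade, so a short usage sketch is easy to give (assuming the opencv-python package; the input path photo.jpg is a placeholder):

```python
import cv2

# Load the pretrained frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# detectMultiScale slides the cascade of simple classifiers over an image
# pyramid, echoing the real-time design of the 2001 paper.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```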

2002: Bülthoff developed face modeling and recognition software.

2003: Hongeng developed automated motion tracking and recognition of criminal acts. DALSA, JAI A/S, JAI PULNiX, Adimec, Atmel, Basler AG, CyberOptics, Matrox, National Instruments, Photonfocus, Pleora Technologies, and Stemmer Imaging co-founded the GigE Vision standards committee to standardize the delivery of video and image data over Gigabit Ethernet networks. The first version of the standard was released in 2006.

2005: Sony introduced its first smart camera.

2008: The USB 3.0 standard was introduced, offering roughly ten times the throughput of USB 2.0. The Automated Imaging Association later announced the USB3 Vision standard in January 2014.

2009: FLIR introduced the GF320, a second-generation Optical Gas Imaging (OGI) camera. The technology, first introduced by FLIR Commercial Systems in the EU in 2006, is now supplied to end-user operators, service providers, and environmental protection agencies worldwide. Google started testing self-driving cars on roads.

2010: Google released Goggles, an image recognition app for searches based on pictures taken by mobile devices. Facebook began using facial recognition to help tag photos. Baumer presented the SXG series, the first industrial cameras with a dual GigE interface, at VISION 2010 in Stuttgart, Germany.

2011: Facial recognition was used to help confirm the identity of Osama bin Laden after he was killed in a US raid.

2012: Google Brain’s neural network recognized pictures of cats using a deep learning algorithm. To address the cost of line-scan electroluminescence and photoluminescence inspection of solar panels, Teledyne DALSA introduced the Piranha HS NIR at The Vision Show 2012 in Boston – a TDI line-scan camera capable of detecting wavelengths up to approximately 1150 nm, with a 34.3-kHz line rate supported over a Camera Link Full interface.

2013-2014: Vision-guided robots began learning to work side by side with humans, adapting to human preferences and cooperating.

2015: Google released TensorFlow, its open-source machine learning framework.

2016: Google DeepMind’s AlphaGo algorithm beat the world Go champion.

2017: Waymo sued Uber for allegedly stealing trade secrets. Apple released the iPhone X, advertising face recognition as one of its primary new features.

2018: Alibaba’s AI model scored higher than humans on a Stanford University reading comprehension test. Amazon sold its real-time facial recognition system, Rekognition, to police departments.

2019: The Indian government announced a facial recognition plan allowing police officers to search images through a mobile app. The UK High Court ruled that the use of automatic facial recognition technology to search for people in crowds is lawful.