The Importance of Industry 4.0 and AI Adoption in a Changing Industry
Posted by ODSC Community, March 12, 2021
I don’t need to tell you how much the world has changed over the last year – a (hopefully) once-in-a-lifetime pandemic took over our lives and caused massive disruption around the world. The way we live, work, and interact with each other was completely flipped on its head. Thankfully, as humans we have the luxury of being able to adapt on the fly – the same cannot be said for entire industries, or even technologies.
Take the manufacturing industry as an example. The world of manufacturing is historically very traditional, but in the last decade, the onset of Industry 4.0 brought about a desire to change. Now, manufacturers are looking to implement Industry 4.0 initiatives that improve quality and production yield. Add in the pressures of the COVID-19 pandemic, and the need to pivot is clear. Between factories being shut down worldwide and new social distancing requirements, manufacturers have struggled to meet consumer expectations. As a result, the demand for new technologies – and artificial intelligence (AI) in particular – has surged.
Manufacturers have realized the benefit of smart, autonomous systems fueled by data and deep learning – a powerful breed of AI that can improve quality inspection on the factory floor.
But as manufacturers continue to embrace and adopt AI, it is important to remember that not all technologies are created equal. The traditional machine vision approach to quality control relies on a simple, two-step process. First, an expert decides which features (i.e. edges, curves, corners) in the images collected by each camera are important for each problem. Then, the expert creates a hand-tuned rule-based system, with several branching points—for example, how much “red” and “curvature” classify an object as a “ripe apple.” That system then automatically decides if the product is what it’s supposed to be.
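The two-step, rule-based approach described above can be sketched in a few lines. The feature names and thresholds below are purely illustrative (the article does not specify any real values); the point is that a human expert hard-codes every branching point:

```python
def classify_ripe_apple(mean_redness: float, curvature: float) -> str:
    """Hand-tuned rule-based classifier, in the spirit of traditional
    machine vision. Both thresholds are hypothetical values an expert
    would have tuned for a specific camera and lighting setup."""
    if mean_redness < 0.6:
        # Not enough "red" -> unripe fruit or wrong object entirely
        return "reject"
    if not (0.8 <= curvature <= 1.2):
        # Outline too far from an apple-like curve
        return "reject"
    return "ripe apple"
```

Every new product, camera angle, or lighting change forces the expert to revisit these hand-tuned branches, which is exactly why this approach struggles as needs evolve.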
This method was effective in its day, but manufacturers’ needs have evolved. While the traditional machine vision approach works well in some cases, it is often ineffective in situations where the difference between good and bad products is hard to detect.
So, what is the next frontier? Deep learning-enabled quality control software. In fact, a new kind of deep neural network (DNN) called the lifelong deep neural network (L-DNN), inspired by neurophysiology, has emerged.
Rather than needing thousands of varied images, L-DNNs only require a handful of images to train and build a prototypical understanding of the object. The system can be deployed in seconds, and the handful of images can even be collected after the L-DNN has been deployed and the “RUN” button has been pressed, as long as an operator ensures none of these images actually shows a product with defects. Changes to the rules that define a prototypical object can also be made in real-time, to keep up with any changes in the production line.
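The L-DNN architecture itself is not spelled out here, but the workflow described – build a prototypical understanding from a handful of defect-free images, then flag anything that deviates, updating in real time – resembles prototype-based few-shot learning. Below is a minimal toy sketch of that idea; the `embed` function stands in for a frozen deep feature extractor, and the class name, threshold, and feature choices are all assumptions for illustration:

```python
import math

def embed(pixels: list[float]) -> list[float]:
    """Stand-in for a fixed deep feature extractor. A real system would
    run the image through pretrained network layers; here we compute a
    toy 2-D feature vector (mean intensity, intensity range)."""
    return [sum(pixels) / len(pixels), max(pixels) - min(pixels)]

class PrototypeInspector:
    """Toy prototype-based anomaly check in the spirit of the L-DNN
    workflow: learn from a handful of good examples, update instantly."""

    def __init__(self, threshold: float = 2.0):
        self.prototype = None  # running mean of good-example features
        self.count = 0
        self.threshold = threshold  # max allowed distance from prototype

    def learn(self, pixels: list[float]) -> None:
        """Fold one defect-free example into the prototype; this can be
        called at any time, even after deployment, to track line changes."""
        feats = embed(pixels)
        if self.prototype is None:
            self.prototype = feats
        else:
            self.prototype = [(p * self.count + f) / (self.count + 1)
                              for p, f in zip(self.prototype, feats)]
        self.count += 1

    def is_anomalous(self, pixels: list[float]) -> bool:
        """Flag a product whose features sit too far from the prototype."""
        return math.dist(self.prototype, embed(pixels)) > self.threshold
```

For example, after calling `learn` on three or four images of good product, `is_anomalous` will pass similar images and flag ones with a visibly different intensity profile – all without the thousands of labeled defect images a conventional DNN would need.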
Quality control software powered by L-DNN technology has been hugely beneficial in combating the supply chain issues that manufacturers still face today. By leveraging L-DNN, the AI model continuously learns from new data, correcting itself as conditions change.
Equipping manufacturers with technology that can identify novel objects as different from the objects it knows has countless benefits: less production line downtime; reduced material waste, product returns, and rework; and the ability to scale based on demand.
If you’re interested in learning more about how manufacturers can leverage technology such as AI and deep learning to improve quality inspections, I’ll be diving even deeper into the science and data behind the technology with real world use-case examples at my upcoming talk for ODSC East 2021, “What Kind of AI Can Help Manufacturing Adapt to a Pandemic.” Hope to see you there!
About the author/ODSC East 2021 speaker on Industry 4.0:
Anatoli Gorchet has over 20 years of experience developing massively parallel software for neural computation. He is a pioneer in applying general-purpose computing on graphics processing units to neural modeling. Anatoli has spoken at every major neural network conference, as well as at GTC, DARPA, and The National Institute for Aerospace, and has delivered a keynote at the Embedded Systems Conference. He holds several patents, has authored over 30 publications on neural networks, and advises Fortune 500 companies on how to use AI to improve operational efficiencies. He holds a PhD in Cognitive and Neural Systems from Boston University and an MS in Computer Science.