NSF Project

Interpretable and Generalizable AI for Smart Manufacturing


Learning Common Bases Underlying Data

A soft sensor design framework that generalizes across a wide fleet of process equipment.

Human-Like Reasoning for Decision Making

Integrating reasoning into neural networks to infer decisions in a human-interpretable manner.

Overview

According to the Semiconductor Industry Association, the global semiconductor market grew by 26% in 2021 and is expected to exceed $600 billion in 2022. This rising demand is putting immense pressure on manufacturers to increase production efficiency and output. The U.S. Secretary of Commerce has recently emphasized the shortage of semiconductor chips, stating that the solution is to produce more chips, particularly in America.


Project Goals

This project is a collaboration between Seagate Technology and the University of Minnesota. It aims to overcome three main obstacles that hinder the adoption of machine learning (ML) in the manufacturing sector:

  1. Inadequate access to large amounts of data needed for researching and developing ML architectures that are suitable for manufacturing data.
  2. A lack of ML methods specifically designed for the manufacturing sector to aggregate, classify, and produce datasets tailored to training ML systems for specific processes, machines, or operations.
  3. Skepticism among manufacturing engineers towards “black box” methods, which makes it difficult to gain their trust.

By addressing these obstacles, the project seeks to advance the use of machine learning in the manufacturing industry and unlock its potential to improve productivity and yield.

Dr. Catherine Qi Zhao

Associate Professor
Department of Computer Science and Engineering
University of Minnesota

Dr. Sthitie Bom

Vice President
Seagate Technology

Support

The IGAIM project is supported by National Science Foundation award #2227450.

Publications

  1. Chen, S., & Zhao, Q. (2023). Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [link]
  2. Qian, X., Zhang, C., Yella, J., Huang, Y., Huang, M. C., & Bom, S. (2021). Soft Sensing Model Visualization: Fine-Tuning Neural Network from What Model Learned. In 2021 IEEE International Conference on Big Data (pp. 1900-1908). [link]
  3. Huang, Y., Zhang, C., Yella, J., Petrov, S., Qian, X., Tang, Y., Zhu, X., & Bom, S. (2021). GraSSNet: Graph Soft Sensing Neural Networks. In 2021 IEEE International Conference on Big Data (pp. 746-756). [link]
  4. Zhang, C., Yella, J., Huang, Y., Qian, X., Petrov, S., Rzhetsky, A., & Bom, S. (2021). Soft Sensing Transformer: Hundreds of Sensors Are Worth a Single Word. In 2021 IEEE International Conference on Big Data (pp. 1999-2008). [link]
  5. Yella, J., Zhang, C., Petrov, S., Huang, Y., Qian, X., Minai, A. A., & Bom, S. (2021). Soft-Sensing Conformer: A Curriculum Learning-Based Convolutional Transformer. In 2021 IEEE International Conference on Big Data (pp. 1990-1998). [link]
  6. Chen, S., Jiang, M., Yang, J., & Zhao, Q. (2021). Attention in Reasoning: Dataset, Analysis, and Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 7310-7326. [link]
  7. Zhang, Y., Jiang, M., & Zhao, Q. (2021). Explicit Knowledge Incorporation for Visual Reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1356-1365). [link]

Code & Data

The Seagate wafer factories are among the leading 200mm wafer fabrication plants in the world. We share publicly the soft sensing datasets drawn from the Seagate wafer manufacturing data lake.

  • Seagate Soft Sensing Datasets [link]
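
For readers who want to experiment with the datasets, the sketch below shows one way to load a table of sensor readings and fit a simple baseline classifier in Python. The file name ("train.csv"), the "sensor_" column prefix, and the "label" column are hypothetical placeholders, not the actual dataset schema; see the dataset documentation at the link above for the real layout.

    # Minimal sketch (placeholder file and column names, not the actual schema).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("train.csv")  # hypothetical file name
    feature_cols = [c for c in df.columns if c.startswith("sensor_")]  # hypothetical prefix
    X, y = df[feature_cols].to_numpy(), df["label"].to_numpy()  # hypothetical label column

    # Hold out 20% of the rows for evaluation and fit a simple baseline.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("Macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))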

Two related visual reasoning methods are also available.

  • AiR: Attention with Reasoning Capability [link]
  • Explicit Knowledge Incorporation for Visual Reasoning [link]