2024 DAC

SmartATPG: A Learning-based Automatic Test Pattern Generation with Graph Convolutional Network and Reinforcement Learning

Authors: Wenxing Li, Hongqin Lyu, Shengwen Liang, Tiancheng Wang, Huawei Li

Affiliation: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing; University of Chinese Academy of Sciences

Abstract:

Automatic test pattern generation (ATPG) is a critical technology in integrated circuit testing. It searches for effective test patterns to detect all possible faults in the circuit as completely as possible, thereby ensuring chip yield and improving chip quality. However, the process of searching for test patterns is NP-complete, and the large amount of backtracking generated during the search directly affects ATPG performance. In this paper, a learning-based ATPG framework, SmartATPG, is proposed to search for high-quality test patterns, reduce the number of backtracks during the search process, and thereby improve the performance of ATPG. SmartATPG utilizes a graph convolutional network (GCN) to fully extract circuit feature information and efficiently explores the ATPG search space through reinforcement learning (RL). Experimental results show that the proposed SmartATPG outperforms traditional heuristic strategies and deep-learning-based heuristic strategies on most benchmark circuits.
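As a sketch of the GCN feature-extraction step such a framework relies on, the following propagates net features one hop along circuit connectivity; the toy 4-net graph, weights, and normalization are illustrative assumptions, not SmartATPG's actual model.

```python
import numpy as np

# One graph-convolution step over a toy circuit graph: each net aggregates
# features from its neighbors (plus itself), normalized by degree.
def gcn_layer(adj, features, weight):
    """H' = ReLU(D^-1 (A + I) H W), a common GCN propagation rule."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                   # add self-loops
    d_inv = np.diag(1.0 / a_hat.sum(axis=1))  # degree normalization
    return np.maximum(0.0, d_inv @ a_hat @ features @ weight)

# Toy 4-net circuit: edges follow gate connectivity (net 0 -> 1 -> {2, 3}).
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
feats = np.eye(4)                # one-hot initial net features
w = np.full((4, 2), 0.5)         # toy trainable weights
h = gcn_layer(adj, feats, w)
print(h.shape)  # (4, 2)
```

Stacking several such layers lets each net's embedding reflect its multi-hop circuit neighborhood, which is the feature information the RL agent then consumes.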


ASP-DAC 2024

HybMT: Hybrid Meta-Predictor based ML Algorithm for Fast Test Vector Generation

Authors: Shruti Pandey, Jayadeva, and Smruti R. Sarangi

Affiliation: Indian Institute of Technology, Delhi

Abstract:

ML models are increasingly being used to increase test coverage and decrease overall testing time. This field is still in its nascent stage, and until now there were no algorithms that could match or outperform commercial tools in terms of speed and accuracy for large circuits. We propose an ATPG algorithm, HybMT, that finally breaks this barrier. Like related methods, we augment the classical PODEM algorithm, which uses recursive backtracking. We design a custom 2-level predictor that predicts the input net of a logic gate whose value needs to be set to ensure that the output takes a given value (0 or 1). Our predictor chooses the output from among two first-level predictors, where the more effective one is a bespoke neural network and the other is an SVM regressor. Compared to a popular, state-of-the-art commercial ATPG tool, HybMT shows an overall reduction of 56.6% in CPU time without compromising fault coverage on the EPFL benchmark circuits. HybMT also shows a speedup of 126.4% over the best ML-based algorithm while obtaining equal or better fault coverage on the EPFL benchmark circuits.
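A minimal sketch of the two-level idea, with toy stand-ins for the first-level neural network and SVM regressor (the features, weights, and meta-rule below are assumptions for illustration, not HybMT's trained models):

```python
def nn_score(features):   # stand-in for the bespoke neural network
    return 0.8 * features["controllability"] + 0.2 * features["depth"]

def svm_score(features):  # stand-in for the SVM regressor
    return 0.5 * features["controllability"] + 0.5 * features["fanout"]

def meta_predict(features, meta_rule):
    """Second level: choose which first-level predictor's score to trust."""
    return nn_score(features) if meta_rule(features) else svm_score(features)

def pick_input_net(candidate_nets, meta_rule):
    """Return the candidate input net with the best predicted score."""
    return max(candidate_nets, key=lambda net: meta_predict(net[1], meta_rule))

# Toy candidate fanin nets of a gate, each with illustrative features.
nets = [("a", {"controllability": 0.9, "depth": 0.1, "fanout": 0.3}),
        ("b", {"controllability": 0.2, "depth": 0.8, "fanout": 0.9})]
rule = lambda f: f["depth"] < 0.5   # toy meta-rule
print(pick_input_net(nets, rule)[0])  # a
```

The chosen net is the one PODEM backtraces through next; a better choice here means fewer backtracks later.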


2024 CSTIC

Deep Reinforcement Learning-Based Automatic Test Pattern Generation

Authors: Wenxing Li, Hongqin Lyu, Shengwen Liang, Zizhen Liu, Ning Lin, Zhongrui Wang, Pengyu Tian, Tiancheng Wang, Huawei Li

Affiliation: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing; University of Chinese Academy of Sciences

Abstract:

Automatic test pattern generation (ATPG) is a key technology in digital circuit testing. In this paper, we propose an ATPG method based on deep reinforcement learning (DRL), aiming to reduce the backtracking of ATPG and thereby improve its performance. Specifically, we apply a deep Q-network (DQN) from reinforcement learning to the PODEM (path-oriented decision making) ATPG algorithm, and design a reward function to maximize cumulative rewards through continuous interaction with the circuit. This design enables the DRL agent to learn an optimal policy to guide backtracing decisions within PODEM. Experimental results show that the proposed method outperforms traditional and artificial neural network (ANN)-based heuristic strategies on most benchmark circuits.
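As an illustration of the decision such a DQN guides, the sketch below scores a gate's two fanins with a toy linear Q-function and picks one epsilon-greedily; the state encoding, the linear "network", and the two-action space are assumptions for illustration, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # toy linear "Q-network": state -> Q per action

def q_values(state):
    """Q(s, a) for the two actions: backtrace through fanin 0 or fanin 1."""
    return state @ W

def select_fanin(state, epsilon=0.1):
    """Epsilon-greedy backtracing decision over a gate's two fanins."""
    if rng.random() < epsilon:
        return int(rng.integers(2))             # explore a random fanin
    return int(np.argmax(q_values(state)))      # exploit the learned policy

state = np.array([1.0, 0.0, 0.5, 0.2])  # toy encoding of the PODEM objective
action = select_fanin(state, epsilon=0.0)
print(action in (0, 1))  # True
```

During training, the reward signal (e.g. penalizing steps that end in a backtrack) shapes `W` so that exploitation increasingly picks the fanin that reaches the objective without backtracking.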


2023 ATS

Intelligent Automatic Test Pattern Generation for Digital Circuits Based on Reinforcement Learning

Authors: Wenxing Li, Hongqin Lyu, Shengwen Liang, Tiancheng Wang, Pengyu Tian, Huawei Li

Affiliation: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing; University of Chinese Academy of Sciences

Abstract:

Automatic Test Pattern Generation (ATPG) is a crucial technology in the testing of digital circuits. Excessive backtracks during the ATPG process can consume considerable computational resources and deleteriously affect performance. In this study, we introduce an intelligent ATPG method based on reinforcement learning to reduce the number of backtracks and enhance performance. Specifically, the Q-learning algorithm is employed to learn an optimal backtracing strategy from the ATPG data produced through path-oriented decision making (PODEM). The learned model is then used to guide backtracing decisions within PODEM, thereby improving the performance of the ATPG process. Experimental results demonstrate that, compared with traditional heuristic strategies and an artificial neural network (ANN)-based backtrace path selection strategy, the proposed method reduces backtrack occurrences and enhances performance more effectively.
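For reference, one tabular Q-learning step of the kind such a backtracing policy could be trained with looks as follows; the states, actions, reward values, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q

# Toy episode: at state s0, backtracing through fanin 0 led to a backtrack
# (reward -1), while fanin 1 reached the objective (reward +1).
q = {"s0": {"fanin0": 0.0, "fanin1": 0.0},
     "s1": {"fanin0": 0.0, "fanin1": 0.0}}
q = q_update(q, "s0", "fanin0", -1.0, "s1")
q = q_update(q, "s0", "fanin1", +1.0, "s1")
print(q["s0"]["fanin1"] > q["s0"]["fanin0"])  # True: learned to prefer fanin 1
```

Once trained on logged PODEM runs, the greedy policy over such a Q-table replaces the handcrafted backtrace heuristic.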


2022 ICCAD

Observation Point Insertion Using Deep Learning

Authors: Bonita Bhaskaran, Sanmitra Banerjee, Kaushik Narayanun, Shao-Chun Hung, Seyed Nima Mozaffari Mojaveri, Mengyun Liu, Gang Chen, Tung-Che Liang

Affiliation: NVIDIA

Abstract:

Silent Data Corruption (SDC) is one of the critical problems in the field of testing, where errors or corruption do not manifest externally. As a result, there is increased focus on improving the outgoing quality of dies by striving for better correlation between structural and functional patterns to achieve a low DPPM. This is very important for NVIDIA's chips due to the various markets we target; for example, the automotive and data center markets have stringent in-field testing requirements. One aspect of these efforts is to target better testability while incurring lower test cost. Since structural testing is faster than functional testing, it is important to make structural test patterns as effective as possible and free of test escapes. However, with the rising cell count in today's digital circuits, it is becoming increasingly difficult to sensitize faults and propagate the fault effects to scan-flops or primary outputs. Hence, methods to insert observation points to facilitate the detection of hard-to-detect (HtD) faults are being increasingly explored. In this work, we propose an Observation Point Insertion (OPI) scheme using deep learning, with the motivation of achieving: 1) better-quality test points than commercial EDA tools, leading to a potentially lower pattern count; and 2) faster turnaround time to generate the test points. In order to achieve better pattern compaction than commercial EDA tools, we employ Graph Convolutional Networks (GCNs) to learn the topology of logic circuits along with the features that influence their testability. The graph structures are subsequently used to train two GCN-type deep learning models: the first model predicts signal probabilities at different nets, and the second model uses these signal probabilities along with other features to predict the reduction in test-pattern count when OPs are inserted at different locations in the design. The features we consider include structural features such as gate type, gate logic, and reconvergent fanouts, and testability features such as SCOAP. Our simulation results indicate that the proposed machine learning models can predict the probabilistic testability metrics with reasonable accuracy and can identify observation points that reduce the pattern count.
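The signal probabilities the first model predicts have a classic closed-form propagation under the independence assumption; the sketch below computes them for a small illustrative gate chain, not a circuit from the paper.

```python
# P(net = 1) propagated through basic gates, assuming independent inputs.
def and_p(p_a, p_b):  # P(out=1) for AND
    return p_a * p_b

def or_p(p_a, p_b):   # P(out=1) for OR
    return p_a + p_b - p_a * p_b

def not_p(p_a):       # P(out=1) for NOT
    return 1.0 - p_a

# Primary inputs assumed uniform: P(1) = 0.5 each.
p_and = and_p(0.5, 0.5)   # AND of two inputs
p_not = not_p(p_and)      # inverter on the AND output
p_out = or_p(p_not, 0.5)  # OR with a third input
print(p_and, p_not, p_out)  # 0.25 0.75 0.875
```

Exact propagation breaks down on reconvergent fanout (the inputs are no longer independent), which is one reason a learned GCN predictor over the netlist graph is attractive.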


ITC 2022

DeepTPI: Test Point Insertion with Deep Reinforcement Learning

Authors: Zhengyuan Shi, Min Li, Sadaf Khan, Liuzheng Wang, Naixing Wang, Yu Huang, Qiang Xu

Affiliation: The Chinese University of Hong Kong, HiSilicon Technologies Co., Ltd., China

Abstract:

Test point insertion (TPI) is a widely used technique for testability enhancement, especially for logic built-in self-test (LBIST) due to its relatively low fault coverage. In this paper, we propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI. Unlike previous learning-based solutions that formulate the TPI task as a supervised-learning problem, we train a novel DRL agent, instantiated as the combination of a graph neural network (GNN) and a deep Q-learning network (DQN), to maximize the test coverage improvement. Specifically, we model circuits as directed graphs and design a graph-based value network to estimate the action values of inserting different test points. The policy of the DRL agent is defined as selecting the action with the maximum value. Moreover, we apply general node embeddings from a pre-trained model to enhance node features, and propose a dedicated testability-aware attention mechanism for the value network. Experimental results on circuits of various scales show that DeepTPI significantly improves test coverage compared to a commercial DFT tool. The code for this work is available at https://github.com/cure-lab/DeepTPI.


ITC 2022

Neural Fault Analysis for SAT-based ATPG

Authors: Junhua Huang, Hui-Ling Zhen, Naixing Wang, Hui Mao, Mingxuan Yuan, Yu Huang

Affiliation: Noah's Ark Lab, Huawei

Abstract:

Continued advances in process technology have led to a relentless increase in the design complexity of integrated circuits (ICs). In order to meet the increasing demand for low defective-parts-per-million (DPPM) and high product quality in complex circuit designs, Boolean satisfiability (SAT) has served as a robust alternative to conventional ATPG techniques. In SAT-based ATPG, logic cones related to the target faults are transformed into Boolean formulas, and standard SAT solving procedures are then used to solve these formulas. Recently, artificial intelligence (AI) techniques have shown great potential in speeding up SAT solvers. However, the high diversity of the structural characteristics within the logic cones of target faults limits the use of AI techniques in SAT-based ATPG. To meet this challenge, this paper proposes a neural fault analysis technology, made up of a multi-stage learning model and a testability classifier, to substantially increase SAT-based ATPG solving efficiency. The multi-stage learning model is composed of a generative model with a topology structure discriminator and a conflict structure discriminator, and is trained for high-quality data synthesis. The testability classifier is then trained for adaptive heuristic selection and effective initialization in SAT-based ATPG. Experimental results on both open-source and industrial circuits demonstrate that the neural fault analysis can reduce SAT solving time by 34.79% and the runtime of SAT-based ATPG by 7.43% on average. It is also shown that the proposed neural fault analysis can cover 9.14% of the faults missed by the conventional SAT-based ATPG framework with a comparable runtime.


ITC 2021

Testability-Aware Low Power Controller Design with Evolutionary Learning

Authors: Min Li, Zhengyuan Shi, Zezhong Wang, Weiwei Zhang, Yu Huang, Qiang Xu

Affiliation: The Chinese University of Hong Kong, HiSilicon Technologies Co., Ltd., China

Abstract:

The XORNet-based low-power controller is a popular technique to reduce circuit transitions in scan-based testing. However, existing solutions construct the XORNet evenly for scan chain control, which may result in sub-optimal solutions without any design guidance. In this paper, we propose a novel testability-aware low-power controller with evolutionary learning. The XORNet generated by the proposed genetic algorithm (GA) enables adaptive control of scan chains according to their usage, thereby significantly improving XORNet encoding capacity, reducing the number of ATPG failure cases, and decreasing test data volume. Experimental results indicate that, under the same number of control bits, our GA-guided XORNet design can improve fault coverage by up to 2.11%. The proposed GA-guided XORNet also allows the number of control bits to be reduced, and the total testing time decreases by 20.78% on average and by up to 47.09% compared to the existing design, without sacrificing test coverage.
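The evolutionary loop behind such a GA-guided design can be sketched as follows; the fitness function (maximize ones in a bit string) and all parameters are toy assumptions standing in for the paper's testability-aware objective over XORNet connection choices.

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # toy objective: maximize ones in the bit string

def evolve(pop, generations=30, mut_rate=0.1):
    """Selection, one-point crossover, and per-bit mutation, with elitism."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]        # truncation selection (elites kept)
        children = []
        for i in range(len(parents)):
            a, b = parents[i], parents[(i + 1) % len(parents)]
            cut = len(a) // 2
            child = a[:cut] + b[cut:]         # one-point crossover
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Candidate "XORNet configurations" as 8-bit strings.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
best = evolve(pop)
print(fitness(best))
```

Because the elite parents survive each generation unchanged, the best fitness is monotone non-decreasing; the real design replaces `fitness` with an ATPG-driven testability measure.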


2019 DAC

High Performance Graph Convolutional Networks with Applications in Testability Analysis

Authors: Yuzhe Ma, Haoxing Ren, Brucek Khailany, Harbinder Sikka, Lijuan Luo, Karthikeyan Natarajan, Bei Yu

Affiliation: CUHK, NVIDIA

Abstract:

Applications of deep learning to electronic design automation (EDA) have recently begun to emerge, although they have mainly been limited to processing regular structured data such as images. However, many EDA problems require processing irregular structures, and it can be non-trivial to manually extract important features in such cases. In this paper, a high-performance graph convolutional network (GCN) model is proposed for processing irregular graph representations of logic circuits. A GCN classifier is first trained to predict observation point candidates in a netlist. The classifier is then used as part of an iterative process to propose observation point insertions based on the classification results. Experimental results show that the proposed GCN model has superior accuracy to classical machine learning models in predicting difficult-to-observe nodes. Compared with commercial testability analysis tools, the proposed observation point insertion flow achieves similar fault coverage with an 11% reduction in observation points and a 6% reduction in test pattern count.
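The iterative insertion flow can be sketched as below: a scoring function stands in for the GCN classifier, the hardest-to-observe net receives an observation point, and the candidates are re-scored until none is flagged. The distance-based score and the threshold are illustrative assumptions, not the paper's model.

```python
def classify(nets, observed):
    """Toy stand-in for the GCN classifier: score = distance to nearest observed net."""
    return {n: min(abs(n - o) for o in observed) for n in nets}

def insert_observation_points(nets, observed, threshold=2):
    """Insert OPs one at a time, re-classifying after each insertion."""
    inserted = []
    while True:
        scores = classify(nets, observed)
        hard = {n: s for n, s in scores.items() if s > threshold}
        if not hard:
            break                            # no difficult-to-observe nets remain
        best = max(hard, key=hard.get)       # insert OP at the hardest net
        observed.append(best)
        inserted.append(best)
    return inserted

# Nets labeled by position; net 0 is already observable (e.g. a scan-flop).
ops = insert_observation_points(nets=[1, 4, 7, 10], observed=[0], threshold=2)
print(sorted(ops))  # [4, 7, 10]
```

Re-classifying after every insertion matters: each new observation point changes the observability of nearby nets, so a single up-front ranking would over-insert.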

Topics: AI + EDA; Design for Test