2024 DAC

PowPrediCT: Cross-Stage Power Prediction with Circuit-Transformation-Aware Learning

Author: Yufan Du, Zizheng Guo, Xun Jiang, Zhuomin Chai, Yuxiang Zhao, Yibo Lin, Runsheng Wang, Ru Huang

Affiliation: Peking University; Wuhan University

Abstract:

Accurate and efficient power analysis at early VLSI design stages is critical for effective power optimization. It is a promising yet challenging task, especially at the placement stage, where the clock tree and final signal routing are unavailable. Additionally, optimization-induced circuit transformations such as circuit restructuring and gate sizing can invalidate fine-grained power supervision. To address these challenges, we introduce the first generalizable circuit-transformation-aware power prediction model for the placement stage. Compared to the cutting-edge commercial IC engine Innovus, we significantly reduce the cross-stage power analysis error between placement and detailed routing.

 

 

 

 

2024 DAC

Multi-order Differential Neural Network for TCAD Simulation of the Semiconductor Devices

Author: Zifei Cai, Aoxue Huang, Yifeng Xiong, Dejiang Mu, Xiangshui Miao, Xingsheng Wang

Affiliation: Huazhong University of Science and Technology

Abstract:

Technology Computer-Aided Design (TCAD) is a crucial step in the design and manufacturing of semiconductor devices. It involves solving the physical equations that describe device behavior to predict various device parameters. Traditional TCAD methods, such as the finite volume and finite element methods, discretize the relevant physical equations to achieve numerical simulations of devices, significantly burdening computational resources. For the first time, this paper proposes a novel method for TCAD simulation based on Physics-Informed Neural Networks (PINNs). We propose the Multi-order Differential Neural Network (MDNN), an improved Radial Basis Function Neural Network (RBFNN) model. Once trained, MDNN achieves the coupled solution of the Poisson equation and the drift-diffusion equation under steady-state conditions, without the need for a pre-existing dataset. To the best of our knowledge, this marks the first instance of an ML-TCAD simulation that does not require any pre-existing data. For the example of a PN-junction diode, the method effectively simulates the basic physical characteristics of the device, with a self-consistent solution error of less than 1×10^-5.

 

 

 

 

2024 ASP-DAC

Physics-Informed Learning for EPG-Based TDDB Assessment

Author: Dinghao Chen, Wenjie Zhu, Xiaoman Yang, Pengpeng Ren, Zhigang Ji, Hai-Bao Chen

Affiliation: Department of Micro/Nano Electronics, Shanghai Jiao Tong University, Shanghai, China

Abstract:

Time-dependent dielectric breakdown (TDDB) is one of the important failure mechanisms for copper (Cu) interconnects. Many TDDB models based on different physical kinetics have been proposed in the past. Recently, a physics-based TDDB model built on the breakdown concept of electric path generation (EPG) was proposed and has shown advantages over the widely accepted electrostatic-field-based TDDB assessment. However, determining the time-to-failure with this EPG-based TDDB model involves solving a partial differential equation (PDE) with the time-consuming finite-element method (FEM). In recent years, deep neural networks have been proposed to predict numerical solutions of PDEs. In this paper, we use a physics-informed neural network (PINN) to solve the diffusion equation of ions in an electric field extracted from the EPG-based TDDB model. The continuous definite condition and hard-constraint optimization methods are used to improve the accuracy and speed of the PINN. Compared with the FEM, the proposed PINN method leads to about a 100× speedup with less than 0.1% mean squared error.
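The "hard-constraint" trick mentioned in the abstract is worth seeing concretely: instead of penalizing boundary-condition violations in the loss, the trial solution is built so the Dirichlet values hold exactly for any network parameters. The sketch below is not the paper's model; it uses a tiny untrained MLP as a stand-in for the PINN body on a 1-D domain with assumed boundary values 1 and 0.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2 = rng.normal(size=8)

def net(x):
    # A tiny untrained MLP standing in for the PINN body; any parameters work.
    return np.tanh(x[:, None] * W1.T + b1) @ W2

def u_trial(x, u_left=1.0, u_right=0.0):
    # Hard-constraint ansatz: the x*(1-x) factor kills the network's
    # contribution at x = 0 and x = 1, so the Dirichlet boundary values hold
    # exactly for ANY parameters -- no boundary loss term is needed, and the
    # optimizer only has to fit the PDE residual in the interior.
    return (1.0 - x) * u_left + x * u_right + x * (1.0 - x) * net(x)

x = np.array([0.0, 0.5, 1.0])
vals = u_trial(x)   # endpoints match the boundary values exactly
```

Removing the boundary penalty from the loss is exactly what buys the accuracy/speed improvement the abstract attributes to hard-constraint optimization: there is one fewer competing loss term to balance.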

 

 

 

 

2024 ASP-DAC

LayNet: Layout Size Prediction for Memory Design Using Graph Neural Networks in Early Design Stage

Author: Jee Hyong Lee, Jung Yun Choi

Affiliation: Sungkyunkwan University, Suwon-si, Gyeonggi-do, Korea; Samsung Electronics Co., Ltd., Hwaseong-si, Gyeonggi-do, Korea

Abstract:

In memory designs that adopt a full-custom design flow, accurately predicting the layout size of a circuit block is crucial for reducing design iterations. However, predicting the layout size is challenging due to the complex space requirements caused by wiring and layout-dependent effects between circuit elements. To address this challenge, we propose LayNet, a novel graph neural network model that predicts the layout size by constructing a weighted graph. We convert a circuit into a weighted graph to model the relationships between elements. By applying graph neural networks to the weighted graph, we can accurately predict the layout size. We also propose edge selection and hierarchical graph learning approaches to reduce memory usage and inference time for large circuit blocks. LayNet achieves state-of-the-art performance on 6300 pairs of circuits and layouts from industrial memory products. Specifically, it significantly reduces the mean absolute percentage error by 20.82%~88.17% for manually generated layouts and by 7.97%~73.39% for semi-auto-generated layouts, outperforming conventional approaches. The edge selection and hierarchical graph learning approaches also reduce memory usage by 140.85x and 238.10x for these two types of layouts, respectively, and inference time by 14.14x and 37.84x, respectively, while maintaining performance.
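The weighted-graph formulation is the heart of LayNet: circuit elements become nodes, and edge weights encode how strongly two elements interact through wiring or layout-dependent effects. A minimal sketch of one degree-normalized message-passing step over such a graph (the adjacency, features, and readout below are assumed toy values, not the paper's construction):

```python
import numpy as np

# Toy weighted adjacency for a 4-element circuit block: entry (i, j) is an
# assumed edge weight encoding wiring / layout-interaction strength.
W = np.array([[0.0, 2.0, 0.5, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])

X = np.array([[1.0, 0.2],   # per-element features, e.g. device width / count
              [0.5, 0.1],
              [2.0, 0.4],
              [1.5, 0.3]])

def gnn_layer(W, X):
    """One weighted message-passing step: each node aggregates its
    neighbours' features, edge-weighted and degree-normalised, with a
    self-loop, followed by a ReLU."""
    A = W + np.eye(len(W))                  # add self-loops
    D_inv = np.diag(1.0 / A.sum(axis=1))    # normalise by weighted degree
    return np.maximum(D_inv @ A @ X, 0.0)

H = gnn_layer(W, X)
# A graph-level readout (here a plain sum over node embeddings) would then
# feed a regressor that predicts the block's layout size.
size_readout = H.sum()
```

The paper's edge-selection step would correspond to sparsifying `W` before message passing, which is where the reported memory and inference-time savings come from.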

 

 

 

 

2023 TCAS-II

Patch-Based Adversarial Training for Error-Aware Circuit Annotation of Delayered IC Images

Author: Yee-Yang Tee, Xuenong Hong, Deruo Cheng, Chye-Soon Chee, Yiqiong Shi, Tong Lin, Bah-Hwee Gwee

Affiliation: School of Electrical and Electronic Engineering, Nanyang Technological University, Jurong West, Singapore; Temasek Laboratories@NTU, Nanyang Technological University, Jurong West, Singapore

Abstract:

Circuit annotation is one of the most important tasks in the analysis of integrated circuit images for hardware assurance. Recently, deep learning methods have been adopted for circuit annotation due to their promising accuracy. However, the pixel-wise optimization metrics in deep learning methods are insufficient to ensure good spatial contiguity in the segmentation results. This can result in circuit connection errors that are detrimental to subsequent circuit analysis. In this brief, a patch-based adversarial training framework is proposed to mitigate such circuit connection errors. Our proposed method consists of an encoder-decoder segmentation network and a patch-based discriminator that provides adversarial supervision for the segmentation network. The adversarial training aligns the distributions of the segmentation results with the ground truth in patches, which we hypothesize to be more effective for mitigating circuit connection errors. In our experiments, we achieved a 17.4% reduction in circuit connection errors compared to the second-best reported technique. We investigate the explainability of our proposed method through a heat-map analysis and demonstrate that our patch-based discriminator has a higher feature response in regions that are likely to contain circuit connection errors.
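The key intuition is that a per-patch judgment catches local contiguity errors that a pixel-averaged loss washes out. The sketch below is not the paper's discriminator (that is a trained network); it only illustrates the patch-wise scoring geometry on a toy binary segmentation map with an assumed one-pixel "bridge" defect of the kind that causes connection errors.

```python
import numpy as np

def patch_scores(pred, gt, patch=4):
    """Score a binary segmentation map patch by patch, mimicking how a
    patch-based discriminator judges local regions rather than the whole
    image: each P x P patch gets one agreement score in [0, 1]."""
    h, w = pred.shape
    scores = np.empty((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            p = pred[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            g = gt[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            scores[i, j] = (p == g).mean()
    return scores

gt = np.zeros((8, 8), dtype=int)
gt[:, :4] = 1                  # ground-truth wire occupies the left half
pred = gt.copy()
pred[3, 4] = 1                 # a one-pixel bridge: a connection error
s = patch_scores(pred, gt)
# The single defective pixel barely moves the global pixel accuracy
# (63/64), but the patch containing it stands out with the lowest score.
```

That localization is why patch-level adversarial supervision is plausible for connection errors: the error is tiny globally but dominant within its patch, matching the heat-map observation in the abstract.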

 

 

 

 

2023 TCAD

GNN-Cap: Chip-Scale Interconnect Capacitance Extraction Using Graph Neural Network

Author: Lihao Liu, Fan Yang, Li Shang, Xuan Zeng

Affiliation: School of Microelectronics, State Key Laboratory of Integrated Chips and System, Fudan University, Shanghai, China

Abstract:

Interconnect capacitive parasitics are becoming increasingly dominant at finer technology nodes. Chip-scale interconnect capacitance extraction is a critical but challenging task. The structural patterns of nanometer-scale on-chip interconnects are complex, and the accuracy of widely used pattern-matching-based capacitance extraction methods is limited by labor-intensive pattern library construction. This work presents GNN-Cap, a graph neural network (GNN) based method for chip-scale interconnect capacitance extraction. GNN-Cap uses graph representation learning to model the complex interconnect structural patterns, which enables accurate and efficient prediction of wiring capacitances. Compared with StarRC, the de facto commercial capacitance extraction tool, GNN-Cap achieves a speedup of 11× to 13×, and reduces the average relative errors of total and coupling capacitances by 81% and 59%, respectively.

 

 

 

 

2023 ICCAD

Invited Paper: Unleashing the Potential of Machine Learning: Harnessing the Dynamics of Supply Noise for Timing Sign-Off

Author: Yufei Chen, Xiao Dong, Wei-Kai Shih, Cheng Zhuo

Affiliation: Synopsys Inc., Mountain View, USA; Zhejiang University, Hangzhou, China

Abstract:

With continuously growing supply noise in advanced technologies, timing sign-off has become increasingly challenging. On one hand, sign-off with the worst-case static supply level can be too conservative. On the other hand, the interplay between noise and timing can easily induce repeated design iterations. Thus, for accurate timing sign-off, it is critical to account for the impact of supply noise while maintaining reasonable simulation complexity. In this work, we present how to incorporate a machine learning (ML) assisted cell-level timing model into a conventional static timing analysis (STA) engine and use a just-in-time integration technique to achieve both run-time efficiency and flexibility, eventually enabling more accurate dynamic noise-aware timing sign-off.

 

 

 

 

2023 ICCAD

Fast Full-Chip Parametric Thermal Analysis Based on Enhanced Physics Enforced Neural Networks

Author: Liang Chen, Jincong Lu, Wentian Jin, Sheldon X.-D. Tan

Affiliation: Department of Electrical and Computer Engineering, University of California, Riverside, CA

Abstract:

In this work, we propose a fast full-chip thermal numerical analysis approach based on an enhanced physics-informed neural network (PINN) framework. The new method, called ThermPINN, leverages both the PINN-based DNN optimization framework and analytic solutions of simplified thermal problems to solve thermal partial differential equations (PDEs). ThermPINN yields faster DNN training and better scalability for solving large PDE problems. Specifically, we propose to partially enforce physics laws based on closely related analytic solutions to simpler problems. As a result, we significantly reduce the number of variables in the loss function and easily satisfy boundary conditions. To consider the impact of various ambient temperatures and effective convection coefficients, which are influenced by different design parameters and run-time conditions, we develop a parameterized thermal analysis technique. This technique enables design space exploration and uncertainty quantification (UQ), which are critical for ensuring the reliability of integrated circuits under various operating conditions. Numerical results on the Alpha 21264 processor show that ThermPINN achieves a 2× speedup and 3× better accuracy over the state-of-the-art thermal simulator VarSim. Experimental results for 2-D full-chip thermal analysis of 3171 cases show that the parameterized ThermPINN, counting both training and inference time, achieves a 6× speedup over commercial COMSOL with an average mean absolute error (MAE) of 0.47 K. In terms of training time, the parameterized ThermPINN is 11× faster than a parameterized plain PINN with similar accuracy. UQ analysis with 5000 samples for the maximum temperature propagated from the ambient temperature shows that the parameterized ThermPINN and the parameterized plain PINN are 113× and 22× faster than COMSOL, respectively.

 

 

 

 

2022 ASP-DAC

Design Close to the Edge for Advanced Technology using Machine Learning and Brain-Inspired Algorithms

Author: Hussam Amrouch, Florian Klemme, Paul R. Genssler

Affiliation: Department of Computer Science, Chair of Semiconductor Test and Reliability (STAR);University of Stuttgart, Stuttgart, Germany

Abstract:

In advanced technology nodes, transistor performance is increasingly impacted by different types of design-time and run-time degradation. First, variation is inherent to the manufacturing process and is constant over the lifetime. Second, aging effects degrade the transistor over its whole life and can cause failures later on. Both effects impact the underlying electrical properties, of which the threshold voltage is the most important. To estimate the degradation-induced changes in transistor performance for a whole circuit, extensive SPICE simulations have to be performed. However, for large circuits, the computational effort of such simulations can quickly become infeasible. Furthermore, the SPICE simulations cannot be delegated to circuit designers, since the required underlying transistor models cannot be shared due to their high confidentiality for the foundry. In this paper, we tackle these challenges at multiple levels, ranging from transistor to memory to circuit level. We employ machine learning and brain-inspired algorithms to overcome computational infeasibility and confidentiality problems, paving the way towards design close to the edge.

 

 

 

 

2022 TCAS-II

Detailed Routing Short Violation Prediction Using Graph-Based Deep Learning Model

Author: Xuan Chen, Zhixiong Di, Wei Wu, Qiang Wu, Jiangyi Shi, Quanyuan Feng

Affiliation: School of Microelectronics, Xidian University, Xi’an, China; School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China

Abstract:

As the manufacturing process continuously shrinks, accurately estimating routability at the placement stage is becoming increasingly important. In addition to extracting local features, this article innovatively constructs an adjacency matrix to represent the connection relationships among tiles, which reflects the placement quality more comprehensively. To effectively map the local features of tiles to the corresponding adjacency matrix, a graph neural network is employed. The trained model is used to predict short violations at the placement stage. Experimental results demonstrate that the proposed method achieves better binary classification quality for designs with severe shorts and outperforms available machine learning frameworks in inductive learning.

 

 

  

  

2022 ASP-DAC  

A Fast and Accurate Middle End of Line Parasitic Capacitance Extraction for MOSFET and FinFET Technologies Using Machine Learning

Author: Mohamed Saleh Abouelyazid, Sherif Hammouda, Yehea Ismail

Affiliation: Siemens EDA; American University in Cairo, Cairo, Egypt

Abstract:

A novel machine learning modeling methodology for parasitic capacitance extraction of middle-end-of-line metal layers around FinFETs and MOSFETs is developed. Due to the increasing complexity and parasitic extraction accuracy requirements of middle-end-of-line patterns in advanced process nodes, most current parasitic extraction tools rely on field solvers to extract middle-end-of-line parasitic capacitances, consuming considerable time, memory, and computational resources. The proposed modeling methodology overcomes these problems by providing compact models that predict middle-end-of-line parasitic capacitances efficiently. The compact models are pre-characterized and technology-dependent, and they can handle the increasing accuracy requirements of advanced process nodes. The proposed methodology scans layouts for devices, extracts the geometrical features of each device using a novel geometry-based pattern representation, and uses the extracted features as inputs to the required machine learning models. Two machine learning methods are used: support vector regression and neural networks. Testing covered more than 40M devices in several different real designs belonging to 28nm and 7nm process technology nodes. Compared to field solvers, the proposed methodology provides outstanding results, with an average error < 0.2%, a standard deviation < 3%, and a speedup of 100X.
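The pipeline described here, geometric features in, compact regression model out, can be sketched end to end. Everything below is assumed for illustration: the features (overlap area and inverse spacing, motivated by the parallel-plate relation C ≈ ε·A/d), the synthetic labels, and the use of closed-form ridge regression as a lightweight stand-in for the paper's SVR and neural-network models.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: a lightweight stand-in for the SVR /
    neural-network capacitance models trained on geometric features."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

rng = np.random.default_rng(2)
# Assumed per-device geometric features, in the spirit of a geometry-based
# pattern representation: conductor overlap area and inverse spacing.
area = rng.uniform(0.1, 1.0, 500)
inv_spacing = rng.uniform(1.0, 10.0, 500)
X = np.column_stack([area * inv_spacing, np.ones(500)])  # C ~ area/spacing + offset
# Synthetic "field-solver" labels with small measurement noise.
C = 2.0 * area * inv_spacing + 0.05 + rng.normal(0.0, 0.01, 500)

w = fit_ridge(X, C)                    # the pre-characterized compact model
rel_err = np.abs(X @ w - C) / C        # relative error vs. the golden labels
```

In production the labels would come from a field solver during per-technology pre-characterization, which is exactly what makes the compact model technology-dependent but fast at extraction time.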

 

 

 

 

2021 ICCAD

CNN-Cap: Effective Convolutional Neural Network Based Capacitance Models for Full-Chip Parasitic Extraction

Author: Dingcheng Yang, Wenjian Yu, Yuanbo Guo, Wenjie Liang

Affiliation: Dept. Computer Science & Tech., BNRist, Tsinghua University, Beijing, China

Abstract:

Accurate capacitance extraction is becoming more important for designing integrated circuits under advanced process technology. The pattern-matching-based full-chip extraction methodology delivers fast computational speed, but suffers from large errors and tedious effort in building capacitance models for the increasing number of structure patterns. In this work, we propose an effective method for building convolutional neural network (CNN) based capacitance models (called CNN-Cap) for two-dimensional (2-D) structures in full-chip capacitance extraction. With a novel grid-based data representation, the proposed method is able to model patterns with a variable number of conductors, largely reducing the number of patterns. Based on the ability of the ResNet architecture to capture spatial information and the proposed training techniques, the obtained CNN-Cap exhibits much better performance than a multilayer-perceptron-based capacitance model while being more versatile. Extensive experiments on 55nm and 15nm process technologies demonstrate that the error of the total capacitance produced by CNN-Cap is always within 1.3%, and the error of the produced coupling capacitance is less than 10% with over 99.5% probability. CNN-Cap runs more than 4000X faster than a 2-D field solver on a GPU server, while consuming negligible memory compared to a look-up-table-based capacitance model.
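The enabling idea is the grid-based data representation: a 2-D cross-section with any number of conductors is rasterized onto a fixed-size grid, so one CNN handles a variable conductor count. The sketch below is an assumed, simplified 1-D version of such an encoding (interval conductors, two channels separating the master net from its environment), not CNN-Cap's actual input format.

```python
import numpy as np

def rasterize_pattern(conductors, master_idx, width=16.0, n_grid=32):
    """Encode a cross-section window as a fixed-size occupancy grid:
    conductors are (start, end) intervals along the window, and the master
    conductor gets its own channel so a single network can handle patterns
    with a variable number of conductors."""
    grid = np.zeros((2, n_grid))             # channel 0: master, 1: environment
    cell = width / n_grid
    for k, (lo, hi) in enumerate(conductors):
        ch = 0 if k == master_idx else 1
        i0, i1 = int(lo / cell), int(np.ceil(hi / cell))
        grid[ch, i0:i1] = 1.0                # mark the cells the wire occupies
    return grid

# Three wires in a 16-unit window; the middle one is the master net.
g = rasterize_pattern([(1.0, 4.0), (6.0, 9.0), (12.0, 15.0)], master_idx=1)
# `g` (here 2 x 32) would be the fixed-shape input to a CNN that regresses
# the master net's total and coupling capacitances.
```

Because adding or removing a neighbor wire only changes pixel values, not the input shape, the explosion of enumerated pattern types that plagues library-based matching disappears.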

 

 

 

 

2020 ICCAD

Routing-free Crosstalk Prediction

Author: Rongjian Liang, Zhiyao Xie, Jinwook Jung, Vishnavi Chauha, Yiran Chen, Jiang Hu, Hua Xiang, and Gi-Joon Nam

Affiliation: Texas A&M University

Abstract:

Interconnect spacing is getting increasingly smaller in advanced technology nodes, which adversely increases the capacitive coupling of adjacent interconnect wires. This makes crosstalk a significant contributor to signal integrity and timing issues, and it is now imperative to prevent crosstalk-induced noise and delay in the earlier stages of the VLSI design flow. Nonetheless, since the crosstalk effect depends primarily on the switching of neighboring nets, accurate crosstalk evaluation is only viable at the late stages of the design flow, with routing information available, e.g., after detailed routing. There have been previous efforts in early-stage crosstalk prediction, but they mostly rely on time-consuming trial routing. In this work, we propose a machine learning-based routing-free crosstalk prediction framework. Given a placement, we identify routing and net-topology-related features, along with electrical and logical features, which affect crosstalk-induced noise and delay. We then employ machine learning techniques to train the crosstalk prediction models, which can be used to identify crosstalk-critical nets at the placement stage. Experimental results demonstrate that the proposed method can instantly classify more than 70% of crosstalk-critical nets after placement with a false-positive rate of less than 2%.
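At its core this is binary classification over per-net placement features. A minimal sketch of that framing, with assumed synthetic features and labels (the paper's actual feature set spans routing, topology, electrical, and logical attributes, and its models are more capable than this plain logistic regression):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain logistic regression by gradient descent: a stand-in for the
    ML classifiers trained on placement-stage net features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(net is critical)
        grad = p - y                              # derivative of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(1)
# Synthetic per-net features, e.g. [estimated wirelength, neighbour density],
# standing in for the routing/topology/electrical features in the paper.
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.5, 1.0]) > 0.0).astype(float)   # synthetic "critical" labels
w, b = train_logistic(X, y)
pred = (X @ w + b) > 0.0
accuracy = (pred == y.astype(bool)).mean()
```

In deployment, the decision threshold would be tuned for the asymmetry the abstract reports: catching most crosstalk-critical nets while keeping the false-positive rate very low, since false alarms waste optimization effort.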

 

 

 

 

AI+EDA

Physical feature analysis and prediction