2024 TCAD

Accurate Interpolation of Library Timing Parameters Through Recurrent Convolutional Neural Network

Author: Daijoon Hyun, Younggwang Jung, Youngsoo Shin

Affiliation: Department of Semiconductor Systems Engineering, Sejong University, Seoul, South Korea; School of Electrical Engineering, KAIST, Daejeon, South Korea

Abstract:

Interpolation is used to approximate the timing parameters of logic cells not specified in timing tables. Bilinear interpolation has been taken for granted in the industry, but its error grows as the nonlinearity of the timing parameters increases. In this article, we propose machine learning (ML)-based interpolation to obtain more accurate timing parameters. A recurrent convolutional neural network (R-CNN) is employed, and various ranges of table entries form a sequence of input data, which the recurrent network allows to influence the interpolation. In addition, a variational autoencoder (VAE) is used to capture the distribution feature of the table. ML interpolation is parallelized on GPU to minimize the runtime overhead from its numerous arithmetic operations. Experimental results demonstrate that ML interpolation reduces timing parameter error by 19.7% and path delay error by 3.4% compared to bilinear interpolation, at the cost of 13% runtime overhead.
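For context, the baseline this paper improves on is standard bilinear lookup in a 2-D timing table. A minimal Python sketch follows; the table layout (rows indexed by input slew, columns by output load) and clamping-based extrapolation are illustrative assumptions, not details from the paper:

```python
import bisect

def bilinear(table, xs, ys, x, y):
    """Bilinear interpolation in a 2-D timing table.

    table[i][j] holds the timing value at axis points (xs[i], ys[j]),
    e.g. input slew (xs) and output load (ys); both axes are sorted.
    Queries outside the table are clamped to the boundary cell
    (illustrative choice, tools may extrapolate instead).
    """
    # Locate the enclosing cell [i, i+1] x [j, j+1], clamped to the grid.
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    # Normalized position of the query point inside the cell.
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    # Weighted average of the four surrounding table entries.
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

When the underlying timing surface is strongly nonlinear, this four-point weighted average is exactly where the error the paper targets comes from.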


2024 TCAS II

Fast and Accurate Aging-Aware Cell Timing Model via Graph Learning

Author: Yuyang Ye, Tinghuan Chen, Zicheng Wang, Hao Yan, Bei Yu, Longxing Shi

Affiliation: National ASIC Research Center, Southeast University, Nanjing, China; School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China

Abstract:

With transistors scaling down, aging effects become increasingly significant in circuit design. Thus, an aging-aware cell timing model is necessary for evaluating aging-induced delay degradation and its impact on circuit performance. However, the tradeoff between accuracy and efficiency becomes a bottleneck in traditional methods. In this brief, we propose a fast and accurate aging-aware cell timing model via graph learning. The information of multi-typed devices on different arcs is embedded by heterogeneous graph attention networks (H-GAT), and the embedded results help improve the accuracy of our aging-aware timing model. The experimental results indicate the proposed timing model achieves high accuracy efficiently.
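The core operation in graph attention networks such as the H-GAT mentioned above is a softmax-weighted aggregation of neighbor features. A minimal single-head sketch, with a caller-supplied scoring function standing in for the learned attention mechanism (the function name and interface are illustrative assumptions, not the paper's API):

```python
import math

def attention_aggregate(node, neighbors, score):
    """One attention head: softmax-weighted sum of neighbor feature vectors.

    node / neighbors are plain feature lists; score(node, nb) returns an
    unnormalized attention logit (learned in a real GAT, supplied here).
    """
    logits = [score(node, nb) for nb in neighbors]
    m = max(logits)  # subtract the max for numerically stable softmax
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted sum per feature dimension.
    return [sum(w * nb[d] for w, nb in zip(weights, neighbors))
            for d in range(len(node))]
```

In a heterogeneous setting, separate score functions (and projections) would be kept per device/arc type; that per-type handling is the "H" in H-GAT.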


2024 ASP-DAC

Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks

Author: Tianliang Ma, Zhihui Deng, Xuguang Sun, Leilai Shao 

Affiliation: Shanghai Jiao Tong University, Shanghai, China

Abstract:

Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area (PPA) for advanced semiconductor process development. Cell library characterization is essential in the DTCO flow, but traditional methods are time-consuming and costly. To overcome these challenges, we propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization. Our model incorporates cell structures and demonstrates high prediction accuracy across various process-voltage-temperature (PVT) corners and technology parameters. Validation with 512 unseen technology corners and over one million test data points shows accurate predictions of delay, power, and input pin capacitance for 33 types of cells, with a mean absolute percentage error (MAPE) ≤ 0.95% and a speedup of 100X compared with SPICE simulations. Additionally, we investigate system-level metrics such as worst negative slack (WNS), leakage power, and dynamic power using predictions obtained from the GNN-based model on unseen corners. Our model achieves precise predictions, with absolute error ≤ 3.0 ps for WNS, percentage errors ≤ 0.60% for leakage power, and ≤ 0.99% for dynamic power, when compared to the golden reference. With the developed model, we further propose a fine-grained drive strength interpolation methodology to enhance PPA for small-to-medium-scale designs, resulting in an approximate 1-3% improvement.
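The headline accuracy figure here (MAPE ≤ 0.95% against SPICE) uses the standard mean absolute percentage error. A short sketch of the metric for reference; the function name and percentage convention are illustrative, not taken from the paper:

```python
def mape(pred, ref):
    """Mean absolute percentage error, in percent.

    pred: model predictions (e.g. GNN-predicted cell delays).
    ref:  golden reference values (e.g. SPICE simulation results).
    """
    assert len(pred) == len(ref) and all(r != 0 for r in ref)
    return 100.0 * sum(abs(p - r) / abs(r) for p, r in zip(pred, ref)) / len(ref)
```

Because the error is relative, a fixed MAPE bound is equally strict for fast and slow cells, which is why it is a common acceptance metric for library characterization.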


2022 TCAS I

Scalable Machine Learning to Estimate the Impact of Aging on Circuits Under Workload Dependency

Author: Florian Klemme, Hussam Amrouch

Affiliation: Faculty of Computer Science, Electrical Engineering and Information Technology, University of Stuttgart, Stuttgart, Germany

Abstract:

To ensure the correct functionality of a chip throughout its entire lifetime, preliminary circuit analysis with respect to aging-induced degradation is indispensable. However, state-of-the-art techniques only allow for the consideration of uniformly applied degradations, despite the fact that different workloads will lead to different degradations due to their distinct induced activities. This imposes over-pessimism when estimating the required timing guardbands, resulting in an unnecessary loss of performance and efficiency. In this work, we propose an approach that takes real-world workload dependencies into account and generates workload-specific aging-aware standard cell libraries, allowing for accurate analysis of aging-induced degradations. We employ machine learning techniques to overcome infeasible simulation times for individual transistor aging while sustaining high prediction accuracy. We also demonstrate scalability to previously unknown workloads and discuss multiple approaches to estimate the machine learning accuracy by employing coverage metrics. In our evaluation, we achieve predictions of workload-dependent aging-aware standard cells with an average accuracy (R2 score) of 95.28%. Using predicted cell libraries in static timing analysis, timing guardbands for multiple circuits are reported with an error of less than 0.1% on average. We demonstrate that timing guardband requirements can be reduced by up to 30% when considering specific workloads over worst-case estimations as performed in state-of-the-art tool flows. Even for unknown workloads of different circuits, accurate prediction with relative errors below 1% can be achieved.
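The reported 95.28% average accuracy is an R² (coefficient of determination) score. A minimal sketch of the metric, included only to make the figure concrete; the function name is an illustrative assumption:

```python
def r2_score(pred, ref):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 means perfect prediction; 0.0 means no better than
    always predicting the mean of the reference values.
    """
    mean = sum(ref) / len(ref)
    ss_res = sum((r - p) ** 2 for p, r in zip(pred, ref))  # residual sum of squares
    ss_tot = sum((r - mean) ** 2 for r in ref)             # total sum of squares
    return 1.0 - ss_res / ss_tot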


2022 TCAS I

Efficient Learning Strategies for Machine Learning-Based Characterization of Aging-Aware Cell Libraries

Author: Florian Klemme, Hussam Amrouch

Affiliation: Faculty of Computer Science, Electrical Engineering and Information Technology, University of Stuttgart, Stuttgart, Germany

Abstract:

Machine learning (ML)-driven standard cell library characterization enables rapid, on-the-fly generation of cell libraries, opening the door for extensive design-space exploration and other, previously infeasible approaches. However, the benefits of ML-based cell library characterization are strongly limited by its high demand for training data and the costly SPICE simulation required to generate the training samples. Therefore, efficient learning strategies are needed to minimize the required training data for ML models while still sustaining high prediction accuracy. In this work, we explore multiple active and passive learning strategies for ML-based cell library characterization with a focus on aging-induced degradation. While random sampling and greedy sampling strategies operate with low computational overhead, active learning considers the performance of ML models to find the most valuable samples for training. We also introduce a hybrid approach of active learning and greedy sampling to optimize the trade-off between reduction in training samples and computational overhead. Our experiments demonstrate an achievable training data reduction of up to 77% compared to the state of the art, depending on the targeted accuracy of the ML models.
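One common low-overhead form of the greedy sampling the abstract contrasts with active learning is farthest-point selection: repeatedly pick the candidate most distant from the set already chosen, so few samples cover the input space. A minimal sketch under that interpretation (the specific greedy criterion used by the paper may differ):

```python
def greedy_sample(points, k, dist):
    """Greedy farthest-point sampling.

    points: candidate inputs (e.g. transistor aging/operating conditions).
    k:      number of training samples to select.
    dist:   distance function between two candidates.
    Returns indices of the selected candidates, seeded with index 0.
    """
    selected = [0]
    while len(selected) < k:
        best, best_d = None, -1.0
        for idx in range(len(points)):
            if idx in selected:
                continue
            # Distance of this candidate to its nearest selected sample.
            d = min(dist(points[idx], points[s]) for s in selected)
            if d > best_d:
                best, best_d = idx, d
        selected.append(best)
    return selected
```

Active learning replaces the purely geometric criterion with a model-dependent one (e.g. prediction uncertainty), which is more sample-efficient but costs extra model evaluations, the trade-off the hybrid approach targets.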


2021 ASP-DAC

Standard Cell Routing with Reinforcement Learning and Genetic Algorithm in Advanced Technology Nodes

Author: Haoxing Ren, Matthew Fojtik

Affiliation: NVIDIA Corporation

Abstract:

Standard cell layout in advanced technology nodes is done manually in the industry today. Automating the standard cell layout process, in particular the routing step, is challenging because of the enormous number of design rule constraints. In this paper, we propose a machine learning-based approach that applies a genetic algorithm to create initial routing candidates and uses reinforcement learning (RL) to fix the design rule violations incrementally. A design rule checker feeds the violations back to the RL agent, and the agent learns how to fix them from this data. The approach is also applicable to future technology nodes with unseen design rules. We demonstrate its effectiveness on a number of standard cells and show that it can route a cell deemed unroutable by manual layout, reducing the cell size by 11%.
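The first stage of the flow, generating initial routing candidates with a genetic algorithm, can be sketched generically. The following toy skeleton evolves bit-string "candidates" against a caller-supplied violation counter; the encoding, parameters, and fitness are illustrative assumptions and bear no relation to the paper's actual router:

```python
import random

def genetic_candidates(fitness, length=16, pop=20, gens=30, seed=0):
    """Minimal GA skeleton: minimize fitness() (e.g. a violation count).

    Candidates are bit strings; each generation keeps the better half
    (elitism) and refills the population via one-point crossover plus
    a single-bit mutation.
    """
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[: pop // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)               # single-bit mutation
            child[i] ^= 1
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)
```

In the paper's flow, the GA only produces starting points; the RL agent then repairs the remaining design rule violations incrementally, guided by checker feedback.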


AI+EDA

Standard cell library design optimization