2024 AAAI

PreRoutGNN for Timing Prediction with Order Preserving Partition: Global Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling

Author: Ruizhe Zhong; Junjie Ye; Zhentao Tang; Shixiong Kai; Mingxuan Yuan; Jianye Hao; Junchi Yan

Affiliation: Dept. of Computer Science and Engineering & MoE Key Lab of AI, Shanghai Jiao Tong University; Huawei Noah’s Ark Lab; College of Intelligence and Computing, Tianjin University

Abstract:

Pre-routing timing prediction has been recently studied for evaluating the quality of a candidate cell placement in chip design. It involves directly estimating the timing metrics for both pin-level (slack, slew) and edge-level (net delay, cell delay), without time-consuming routing. However, it often suffers from signal decay and error accumulation due to the long timing paths in large-scale industrial circuits. To address these challenges, we propose a two-stage approach. First, we propose global circuit pre-training to train a graph autoencoder that learns the global graph embedding from the circuit netlist. Second, we use a novel node-updating scheme for message passing on the GCN, following the topological sorting sequence of the circuit graph and conditioned on the learned graph embedding. This scheme residually models the local time delay between two adjacent pins in the updating sequence and extracts the lookup-table information inside each cell via a new attention mechanism. To handle large-scale circuits efficiently, we introduce an order-preserving partition scheme that reduces memory consumption while maintaining the topological dependencies. Experiments on 21 real-world circuits achieve a new SOTA R2 of 0.93 for slack prediction, significantly surpassing the 0.59 of the previous SOTA method. Code will be available at: https://github.com/Thinklab-SJTU/EDA-AI.
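As an aside on the node-updating idea above: standard STA propagates pin arrival times in topological order, accumulating local edge delays, which is the ordering the paper's scheme follows. Below is a minimal plain-Python sketch of that propagation only; the function names and toy delay values are illustrative assumptions, and the paper's learned embeddings and attention over cell lookup tables are omitted.

```python
# Sketch: topology-ordered pin updating with residual local delay.
from collections import defaultdict, deque

def topo_order(num_pins, edges):
    """Kahn's algorithm over the pin-level circuit graph."""
    indeg = [0] * num_pins
    succ = defaultdict(list)
    for u, v in edges:          # edge u -> v (driver pin -> sink pin)
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(num_pins) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order, succ

def propagate_arrival(num_pins, edges, edge_delay):
    """Propagate arrival times in topological order; each pin residually
    accumulates the local delay of its worst incoming edge."""
    order, succ = topo_order(num_pins, edges)
    arrival = [0.0] * num_pins
    for u in order:
        for v in succ[u]:
            # residual update: child time = parent time + local edge delay
            arrival[v] = max(arrival[v], arrival[u] + edge_delay[(u, v)])
    return arrival

edges = [(0, 2), (1, 2), (2, 3)]
delays = {(0, 2): 1.5, (1, 2): 2.0, (2, 3): 0.7}
print(propagate_arrival(4, edges, delays))   # [0.0, 0.0, 2.0, 2.7]
```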


2024 DAC

Disentangle, Align and Generalize: Learning a Timing Predictor from Different Technology Nodes

Author: Xinyun Zhang; Binwu Zhu; Fangzhou Liu; Ziyi Wang; Peng Xu; Hong Xu; Bei Yu

Affiliation: The Chinese University of Hong Kong

Abstract:

In VLSI design, accurate pre-routing timing prediction is paramount. Traditional machine learning-based methods require extensive data, posing challenges for advanced technology nodes due to time-consuming data preparation. To mitigate this issue, we propose a novel transfer learning framework that uses data from previous nodes for learning on the target node. Our method first disentangles and aligns timing path features across different nodes, then predicts each path's arrival time with a Bayesian-based model capable of handling highly variable arrival times and generalizing to new designs. Experimental results on transfer learning from 130nm to 7nm nodes validate our method's effectiveness.


2024 DAC

Annotating Slack Directly on Your Verilog: Fine-Grained RTL Timing Evaluation for Early Optimization

Author: Wenji Fang; Shang Liu; Hongce Zhang; Zhiyao Xie

Affiliation: Hong Kong University of Science and Technology

Abstract:

In digital IC design, the early register-transfer level (RTL) stage offers greater optimization flexibility than post-synthesis netlists or layouts. Some recent machine learning (ML) solutions propose to predict the overall timing of a design at the RTL stage, but the fine-grained timing information of individual registers remains unavailable. In this work, we introduce RTL-Timer, the first fine-grained general timing estimator applicable to any given design. RTL-Timer explores multiple promising RTL representations and customizes loss functions to capture the maximum arrival time at register endpoints. RTL-Timer's fine-grained predictions are further applied to guide optimization in a standard logic synthesis flow.
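A note on the customized loss: one plausible way to capture the maximum arrival time at register endpoints, sketched below in PyTorch, is a smooth log-sum-exp approximation of the max, so that gradients also reach near-critical endpoints. This is an assumption for illustration, not necessarily the paper's exact formulation.

```python
# Sketch: smooth max-arrival-time loss over register endpoints.
import torch

def max_arrival_loss(pred_arrival, true_arrival, tau=0.1):
    """Penalize error on the worst endpoint using a log-sum-exp
    approximation of max; tau -> 0 recovers a hard max."""
    soft_max_pred = tau * torch.logsumexp(pred_arrival / tau, dim=-1)
    soft_max_true = tau * torch.logsumexp(true_arrival / tau, dim=-1)
    return torch.nn.functional.mse_loss(soft_max_pred, soft_max_true)

pred = torch.tensor([[1.2, 3.4, 2.9]], requires_grad=True)  # per-endpoint
true = torch.tensor([[1.0, 3.8, 3.0]])
loss = max_arrival_loss(pred, true)
loss.backward()
```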


2024 DAC

A Crosstalk-Aware Timing Prediction Method in Routing

Authors: Leilei Jin, Jiajie Xu, Wenjie Fu, Hao Yan, Longxing Shi

Affiliation: The National ASIC System Engineering Technology Research Center, Southeast University

Abstract:

With shrinking interconnect spacing in advanced technology nodes, existing timing predictions become less precise due to the challenging quantification of crosstalk-induced delay. During routing, the crosstalk effect is typically modeled by predicting coupling capacitance from congestion information. However, the resulting timing estimation tends to be overly pessimistic, as the crosstalk-induced delay depends not only on the coupling capacitance but also on the signal arrival time. This work presents a crosstalk-aware timing estimation method using a two-step machine learning approach. Interconnects that are physically adjacent and overlap in signal timing windows are filtered first. Crosstalk delay is then predicted by integrating physical topology and timing features, without relying on post-routing results or parasitic extraction. Experimental results show a match rate of over 99% for identifying crosstalk nets compared to the commercial tool on the OpenCores benchmarks, with prediction results more accurate than those of other state-of-the-art methods.
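The first filtering step, keeping only net pairs that are both physically adjacent and overlapping in signal timing windows, can be sketched as follows. The data layout (bounding boxes and switching windows per net) and the spacing threshold are illustrative assumptions.

```python
# Sketch: filter aggressor/victim candidates by adjacency + window overlap.

def windows_overlap(w1, w2):
    """Do two signal timing windows (earliest, latest switching time) overlap?"""
    return w1[0] <= w2[1] and w2[0] <= w1[1]

def physically_adjacent(bbox1, bbox2, spacing_threshold):
    """Coarse adjacency test on net bounding boxes (xlo, ylo, xhi, yhi)."""
    dx = max(bbox1[0] - bbox2[2], bbox2[0] - bbox1[2], 0.0)
    dy = max(bbox1[1] - bbox2[3], bbox2[1] - bbox1[3], 0.0)
    return max(dx, dy) <= spacing_threshold

def filter_crosstalk_candidates(nets, spacing_threshold=1.0):
    """nets: {name: {"bbox": (xlo, ylo, xhi, yhi), "window": (t_early, t_late)}}"""
    names = list(nets)
    candidates = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if (physically_adjacent(nets[a]["bbox"], nets[b]["bbox"], spacing_threshold)
                    and windows_overlap(nets[a]["window"], nets[b]["window"])):
                candidates.append((a, b))
    return candidates
```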


2024 DATE

A Deep-Learning-Based Statistical Timing Prediction Method for Sub-16nm Technologies

Authors: Jiajie Xu; Leilei Jin; Wenjie Fu; Longxing Shi

Affiliation: The National ASIC System Engineering Technology Research Center, Southeast University

Abstract:

Pre-routing timing estimation is vital but challenging, since accurate net information is available only after routing and parasitic extraction. Existing methodologies predict the timing metrics with the help of the placement information of standard cells. However, neglecting the analysis of process variation effects hinders the precision of those methodologies, especially in sub-16nm technologies where delay distributions become asymmetric. Therefore, a deep-learning-based statistical timing prediction method is proposed to model process variation effects in the pre-routing stage. Congestion features and pin-to-pin features are fed into graph neural networks for post-routing interconnect parasitics and arc delay prediction. Moreover, a calibration method is proposed to compensate for the precision loss of the delay propagation. We evaluate our methods using open-source designs and EDA tools, demonstrating improved accuracy over existing pre-routing timing prediction methods and a remarkable speed-up compared to the traditional routing and timing analysis process.


2024 ASP-DAC

An Optimization-Aware Pre-Routing Timing Prediction Framework Based on Heterogeneous Graph Learning

Author: Peng Cao

Affiliation: Southeast Univ., China

Abstract:

Accurate and efficient pre-routing timing estimation is particularly crucial in timing-driven placement, as design iterations caused by timing divergence are time-consuming. However, existing machine learning prediction models overlook the impact of timing optimization techniques during the routing stage, such as adjusting gate sizes or swapping threshold-voltage types to fix routing-induced timing violations. In this work, an optimization-aware pre-routing timing prediction framework based on heterogeneous graph learning is proposed to calibrate the timing changes introduced by wire parasitics and optimization techniques. The path embedding generated by the proposed framework fuses learned local information from a graph neural network and global information from a transformer network to perform accurate endpoint arrival time prediction. Experimental results demonstrate that the proposed framework achieves an average accuracy improvement of 0.10 in terms of R2 score on testing designs and brings an average runtime acceleration of three orders of magnitude compared with the design flow.
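A minimal PyTorch sketch of the fusion idea, concatenating a local GNN-derived endpoint embedding with a global transformer embedding of the path for endpoint arrival-time regression, is given below. Layer sizes, the mean-pooling readout, and the concatenation fusion are assumptions, not the paper's exact architecture.

```python
# Sketch: fuse local (GNN) and global (transformer) path information.
import torch
import torch.nn as nn

class PathTimingPredictor(nn.Module):
    def __init__(self, d_local=64, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.global_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(
            nn.Linear(d_local + d_model, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, local_node_emb, stage_seq):
        # local_node_emb: (batch, d_local) per-endpoint embedding from a GNN
        # stage_seq: (batch, path_len, d_model) per-stage features along path
        global_emb = self.global_encoder(stage_seq).mean(dim=1)  # pool path
        fused = torch.cat([local_node_emb, global_emb], dim=-1)
        return self.head(fused).squeeze(-1)  # predicted endpoint arrival time

model = PathTimingPredictor()
t = model(torch.randn(8, 64), torch.randn(8, 10, 64))  # shape (8,)
```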


2024 TCAS-II

High-Efficiency Variation-Aware SRAM Timing Characterization via Machine-Learning-Assisted Netlist Extraction

Author: Inseong Jeon, Hyunho Park, Taehwan Yoon, Hanwool Jeong

Affiliation: Department of Electronic Engineering, Kwangwoon University, Seoul, Republic of Korea

Abstract:

This brief presents a highly efficient methodology for SPICE netlist reduction and variation-aware timing characterization, utilizing machine learning to reduce simulation time. We present a netlist reduction algorithm that automatically extracts critical-path components using several exclusion rules. We then employ a simplified RC model on the extracted critical path, where the parasitic RC values are determined through Bayesian optimization. Our method reduces the number of transistors by 95.2-98.7% compared to the original netlist and speeds up simulation by 26-105x compared to conventional 1000-sample Monte Carlo on the post-layout netlist, with an accuracy loss below 3.63%.


2023 TCAD

TF-Predictor: Transformer-Based Prerouting Path Delay Prediction Framework

Author: Peng Cao, Guoqing He, and Tai Yang

Affiliation: National ASIC Systems Engineering Technology Research Center, Southeast University, Nanjing, China

Abstract:

Timing mismatch between different stages of physical design poses great challenges for circuit optimization to achieve the desired performance, power, and area (PPA) tradeoff. Inaccurate timing estimation prior to routing may lead to over-design with unwanted power and area consumption, or to iterating back to cell placement at the cost of design turn-around time. Existing learning models could not predict post-routing circuit timing with satisfying accuracy and efficiency due to their ignorance of delay correlation along the timing path and their empirical feature selection solutions. In this work, an accurate and efficient pre-routing path delay prediction framework is proposed, utilizing a transformer network and a residual model with an ensemble feature selection mechanism. Combining filter and wrapper methods, the ensemble feature selection mechanism determines the optimal feature subset from the timing and physical information available at the placement stage, which is extracted as a feature sequence for each cell along the timing path and trained by the transformer network. With the residual model, the timing mismatch between the placement and routing stages predicted by the transformer network is further calibrated to estimate the post-routing path delay. The proposed framework has been validated with ISCAS'85 and OpenCores benchmark circuits for the prediction of post-routing path delay, where the prediction error in terms of relative root mean squared error is limited within 1.3% and 3.0% and the correlation coefficient R is higher than 0.999 and 0.995 for seen and unseen circuits, respectively, indicating an error reduction of 2.3-10.6x compared with prior learning-based models. In addition, the framework achieves an average speedup of three orders of magnitude over commercial tools and is 14-128x faster than competing learning models, making it promising for guiding design optimization prior to the time-consuming routing stage.
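The ensemble (filter plus wrapper) feature selection step can be illustrated with scikit-learn: keep the features that both a univariate filter and a model-driven wrapper select. The chosen scorers, estimator, and k are assumptions for the sketch.

```python
# Sketch: ensemble (filter + wrapper) feature selection.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, RFE, f_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=20, noise=0.1, random_state=0)

# filter method: univariate relevance to path delay
filter_mask = SelectKBest(f_regression, k=10).fit(X, y).get_support()

# wrapper method: recursive feature elimination with a regressor
wrapper_mask = RFE(RandomForestRegressor(n_estimators=50, random_state=0),
                   n_features_to_select=10).fit(X, y).get_support()

ensemble_mask = filter_mask & wrapper_mask  # features both methods agree on
print("selected feature indices:", np.where(ensemble_mask)[0])
```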


2023 DATE

Fast and Accurate Wire Timing Estimation Based on Graph Learning

Authors: Yuyang Ye; Tinghuan Chen; Yifei Gao; Hao Yan; Bei Yu; Longxing Shi

Affiliation: The National ASIC System Engineering Technology Research Center, Southeast University

Abstract:

Accurate wire timing estimation has become a bottleneck in timing optimization since it requires a long turn-around time using a sign-off timer. Gate timing can be calculated accurately using lookup tables in cell libraries. In comparison, the accuracy and efficiency of wire timing calculation for complex RC nets are extremely hard to trade off. The limited number of wire paths opens a door for graph learning methods in wire timing estimation. In this work, we present a fast and accurate wire timing estimator based on a novel graph learning architecture, namely GNNTrans. It generates wire path representations by aggregating local structure information and global relationships of whole RC nets, which traditional graph learning approaches cannot collect efficiently. Experimental results on both tree-like and non-tree nets demonstrate improved accuracy, with the max error of wire delay lower than 5 ps. In addition, our estimator can predict the timing of over 200K nets in less than 100 seconds. This fast and accurate work can be integrated into incremental timing optimization for routed designs.


2023 ASP-DAC

Graph-Learning-Driven Path-Based Timing Analysis Results Predictor from Graph-Based Timing Analysis

Author: Yuyang Ye, Tinghuan Chen, Yifei Gao, Hao Yan, Bei Yu, and Longxing Shi

Affiliation: School of Electronic Science and Engineering, Southeast University; Department of Computer Science and Engineering, Chinese University of Hong Kong

Abstract:

With diminishing margins in advanced technology nodes, the performance of static timing analysis (STA) is a serious concern, including accuracy and runtime. STA can generally be divided into graph-based analysis (GBA) and path-based analysis (PBA). For GBA, the timing results are always pessimistic, leading to overdesign during design optimization. For PBA, the timing pessimism is reduced via propagating real path-specific slews, at the cost of severe runtime overheads relative to GBA. In this work, we present a fast and accurate predictor of post-layout PBA timing results from inexpensive GBA, based on a deep edge-featured graph attention network, namely deep EdgeGAT. Compared with conventional machine and graph learning methods, deep EdgeGAT can learn global timing path information. Experimental results demonstrate that our predictor can accurately predict PBA timing results and reduce the timing pessimism of GBA, with the maximum error reaching 6.81 ps, and our work achieves an average 24.80x speedup over PBA using the commercial STA tool.


2023 ISLPED

Efficient Multi-Objective Optimization for PVT Variation-Aware Circuit Sizing Using Surrogate Models and Smart Corner Sampling

Author: Octavian Pascu, Catalin Visan, Georgian Nicolae, Mihai Boldeanu, Horia Cucu, Cristian Diaconu, Andi Buzo, Georg Pelz

Affiliation: University “Politehnica” of Bucharest; Infineon Technologies

Abstract:

Circuit sizing for designs with many design variables and responses is a complex task that requires highly experienced and creative designers to invest precious time in trial-and-error routine work. In addition, sizing the circuit while also taking into account PVT (process, voltage, temperature) variation corners increases the complexity further. To simplify such tasks, designers select the most unfavorable PVT corner in advance (leveraging their expertise), perform circuit sizing for this condition, and finally verify the resulting design in all PVT corners. This procedure might generate designs that fail the specifications in other PVT corners, leading to more design-verification iterative loops. Recent years brought machine learning (ML) and optimization techniques to the field of circuit design, with evolutionary algorithms and Bayesian models showing good results for automated circuit sizing. However, these methods can still require an unfeasibly large number of simulations, especially when taking into account several PVT corners. In this context, we introduce a methodology that uses surrogate ML models to perform PVT variation-aware circuit sizing. We propose to dynamically select the worst PVT corners and take them into account when sizing the circuit. In addition, we explore the best ways to model process corners with Gaussian Processes, leading to more than 10x improvements for such surrogate models. We evaluate the proposed corner management method on two voltage regulators of different complexity and highlight that it enables finding feasible solutions 2x faster compared to baseline algorithms that optimize in all PVT corners. In addition, the quality and diversity of the proposed solutions are higher by one to three orders of magnitude in terms of population hypervolume.
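A minimal sketch of per-corner Gaussian-process surrogates with dynamic worst-corner selection is shown below, using scikit-learn. The toy response function stands in for SPICE simulation, and the corner set and kernel are illustrative assumptions.

```python
# Sketch: one GP surrogate per PVT corner; pick the predicted worst corner.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
corners = {"ss_low_hot": 1.3, "tt_nom": 1.0, "ff_high_cold": 0.8}

X = rng.uniform(0, 1, size=(30, 4))                # 4 sizing variables
gps = {}
for name, severity in corners.items():
    # toy delay-like response per corner, standing in for simulation data
    y = severity * (X ** 2).sum(axis=1) + 0.01 * rng.standard_normal(30)
    gps[name] = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

candidate = rng.uniform(0, 1, size=(1, 4))
preds = {name: gp.predict(candidate)[0] for name, gp in gps.items()}
worst = max(preds, key=preds.get)   # only this corner needs full simulation
print(f"worst corner for candidate: {worst} ({preds[worst]:.3f})")
```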


2023 DAC

Critical Paths Prediction under Multiple Corners Based on BiLSTM Network

Author: Qianqian Song, Xu Cheng, Peng Cao

Affiliation: National ASIC System Engineering Technology Research Center, Southeast University, Nanjing, China

Abstract:

Critical path generation poses a significant challenge to the integrated circuit (IC) design flow in terms of its huge computational complexity and its crucial impact on circuit optimization; its early prediction is of vital importance for accelerating design closure, especially under multiple process-voltage-temperature (PVT) corners. In this work, a post-routing critical path prediction framework is proposed based on a Bidirectional Long Short-Term Memory (BiLSTM) network and a Multi-Layer Perceptron (MLP) network, which learn from sequential features and global features at the logic synthesis stage, extracted from the timing and physical information of cell sequences and the operating conditions of the circuit, respectively. Experimental results demonstrate that with the proposed framework, the average prediction accuracy of critical paths reaches 95.0% and 93.6% in terms of F1-score for seen and unseen circuits on ISCAS'89 benchmark circuits under the TSMC 22nm process, an increase of 10.8% and 13.9% over the existing learning-based critical path prediction method.
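A PyTorch sketch of the architecture, a BiLSTM over per-cell sequential features fused with an MLP over global features to classify critical paths, follows. Feature widths and fusion by concatenation are assumptions.

```python
# Sketch: BiLSTM (cell sequence) + MLP (global features) fusion classifier.
import torch
import torch.nn as nn

class CriticalPathClassifier(nn.Module):
    def __init__(self, cell_feat=16, global_feat=8, hidden=32):
        super().__init__()
        self.bilstm = nn.LSTM(cell_feat, hidden, batch_first=True,
                              bidirectional=True)
        self.global_mlp = nn.Sequential(nn.Linear(global_feat, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden + hidden, 1)

    def forward(self, cell_seq, global_feats):
        # cell_seq: (batch, path_len, cell_feat) timing/physical cell features
        _, (h_n, _) = self.bilstm(cell_seq)
        seq_emb = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # fwd+bwd final states
        fused = torch.cat([seq_emb, self.global_mlp(global_feats)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # P(path critical)

model = CriticalPathClassifier()
p = model(torch.randn(4, 12, 16), torch.randn(4, 8))  # shape (4,)
```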


2023 DAC

TOTAL: Multi-Corners Timing Optimization Based on Transfer and Active Learning

Author: Wei W. Xing, Zheng Xing, Rongqi Lu, Zhelong Wang, Ning Xu, Yuanqing Cheng, Weisheng Zhao

Affiliation: Graphics & Computing Department, Rockchip Electronics Co., Ltd, Fuzhou, China; School of Integrated Circuit Science and Engineering, Beihang University, Beijing, China; College of Physics and Electronic Engineering, Sichuan Normal University, Sichuan, China; School of Information Engineering, Wuhan University of Technology, Wuhan, China

Abstract:

In modern advanced integrated circuit design, a design normally needs to be progressively optimized until static timing analysis (STA) over the full set of process corners meets the timing constraints. To improve efficiency, using machine learning to predict path timings directly, and thereby reduce the extensive time-consuming SPICE simulations, has become a promising technique for approaching fast design closure. However, current methods lack both the flexibility and the reliability to be used in a practical industrial environment. To resolve these challenges, we propose TOTAL, which is constructed using a generalized linear model with latent features to effectively capture knowledge transferred from previous designs, and delivers state-of-the-art (SOTA) prediction accuracy that is up to a 6.6x improvement over the competitors in terms of mean absolute error (MAE). Most importantly, TOTAL is equipped with a Bayesian decision strategy to actively update uncertain predictions and deliver reliable predictions with accuracy close to 100%, pushing the frontier of machine-learning-based STA toward practical implementation.


2023 DAC

Restructure-Tolerant Timing Prediction via Multimodal Fusion

Author: Ziyi Wang, Siting Liu, Yuan Pu, Song Chen, Tsung-Yi Ho, Bei Yu

Affiliation: Chinese University of Hong Kong; University of Science and Technology of China

Abstract:

Fast and accurate pre-routing timing prediction is crucial in the very-large-scale integration (VLSI) design flow. Existing machine learning (ML)-assisted pre-routing timing evaluators neglect the impact of timing optimization, which may render their approaches impractical in real circuit design flows. To model the impact of timing optimization, we propose an endpoint embedding framework that integrates netlist-layout information via multimodal fusion. An end-to-end flow is further developed for pre-routing restructure-tolerant prediction of global timing metrics. Comprehensive experiments on large-scale RISC-V designs with an advanced 7-nm technology node demonstrate the superiority of our model compared to the SOTA pre-routing timing evaluators.


2023 ICCAD

GraPhSyM: Graph Physical Synthesis Model

Author: Ahmed Agiza, Rajarshi Roy, Teodor-Dumitru Ene, Saad Godil, Sherief Reda, Bryan Catanzaro

Affiliation: NVIDIA, Santa Clara, CA, USA; Brown University, Providence, RI, USA

Abstract:

In this work, we introduce GraPhSyM, a Graph Attention Network (GATv2) model for fast and accurate estimation of post-physical-synthesis circuit delay and area metrics from pre-physical-synthesis circuit netlists. Once trained, GraPhSyM provides accurate visibility of final design metrics to early EDA stages, such as logic synthesis, without running the slow physical synthesis flow, enabling global co-optimization across stages. Additionally, the swift and precise feedback provided by GraPhSyM is instrumental for machine-learning-based EDA optimization frameworks. Given a gate-level netlist of a circuit represented as a graph, GraPhSyM utilizes graph structure, connectivity, and electrical property features to predict the impact of physical synthesis transformations such as buffer insertion and gate sizing. When trained on a dataset of 6000 prefix adder designs synthesized at an aggressive delay target, GraPhSyM can accurately predict the post-synthesis delay (98.3%) and area (96.1%) metrics of unseen adders with a fast 0.22s inference time. Furthermore, we illustrate the compositionality of GraPhSyM by employing the model trained on a fixed delay target to accurately anticipate post-synthesis metrics at a variety of unseen delay targets. Lastly, we report promising generalization capabilities of the GraPhSyM model when evaluated on circuits different from the adders it was exclusively trained on. The results show the potential for GraPhSyM to serve as a powerful tool for advanced optimization techniques and as an oracle for EDA machine learning frameworks.
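A minimal sketch of a GATv2-based netlist regressor in this spirit is shown below, assuming PyTorch Geometric is available; input feature sizes, depth, and the mean-pool readout are assumptions rather than GraPhSyM's exact configuration.

```python
# Sketch: GATv2 over a gate-level netlist graph, predicting delay and area.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATv2Conv, global_mean_pool
from torch_geometric.data import Data

class NetlistGAT(torch.nn.Module):
    def __init__(self, in_dim=8, hidden=32, heads=4):
        super().__init__()
        self.conv1 = GATv2Conv(in_dim, hidden, heads=heads)
        self.conv2 = GATv2Conv(hidden * heads, hidden, heads=1)
        self.readout = torch.nn.Linear(hidden, 2)  # predict [delay, area]

    def forward(self, x, edge_index, batch):
        x = F.elu(self.conv1(x, edge_index))
        x = F.elu(self.conv2(x, edge_index))
        return self.readout(global_mean_pool(x, batch))

# toy 3-gate netlist: node features = electrical/structural properties
data = Data(x=torch.randn(3, 8),
            edge_index=torch.tensor([[0, 1], [2, 2]]))  # edges 0->2, 1->2
model = NetlistGAT()
out = model(data.x, data.edge_index, torch.zeros(3, dtype=torch.long))
```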


2023 ICCAD

Invited Paper: The Inevitability of AI Infusion Into Design Closure and Signoff

Author: Jiang Hu, Andrew B. Kahng

Affiliation: Texas A&M University College Station, TX, USA; UC San Diego, La Jolla, USA

Abstract:

SoC design teams embrace new technologies and methodologies that bring clear value. Given this, future infusion of AI into design closure and signoff is inevitable. Predictive AI models help focus the application of last-mile incremental optimizations (sizing, placement and routing) to achieve timing and noise closure; successful examples range from routing-free crosstalk prediction to timing/power evaluation in early RTL development. Design closure becomes more efficient when “imperfect but fast” ML inferencing is used to filter out potential violations, which can then be passed to golden analysis tools. Learning methods also improve the design process in many ways, ranging from smarter PVT corner selection to predicting the CPU and memory usage of signoff tools. At a higher level, AI will help design teams learn to avoid design trajectories that lead to time-consuming closure and signoff iterations. This talk will provide a broad overview of directions in which AI will inevitably improve the cost and efficiency of signoff in the coming years.


2022 DAC

A Timing Engine Inspired Graph Neural Network Model for Pre-Routing Slack Prediction

Author: Zizheng Guo, Mingjie Liu, Jiaqi Gu, Shuhan Zhang, David Z. Pan, Yibo Lin

Affiliation: School of Computer Science, Peking University; School of Integrated Circuits, Peking University; Department of Electrical and Computer Engineering, The University of Texas at Austin

Abstract:

Fast and accurate pre-routing timing prediction is essential for timing-driven placement, since repetitive routing and static timing analysis (STA) iterations are expensive and unacceptable. Prior work on timing prediction aims at estimating net delay and slew, lacking the ability to model global timing metrics. In this work, we present a timing-engine-inspired graph neural network (GNN) to predict arrival time and slack at timing endpoints. We further leverage edge delays as local auxiliary tasks to facilitate model training and increase model performance. Experimental results on real-world open-source designs demonstrate improved model accuracy and explainability when compared with vanilla deep GNN models.
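The auxiliary-task idea can be written as a simple multi-task objective: the main loss supervises endpoint slack/arrival predictions while edge (net/cell) delays provide local supervision. The sketch below assumes an MSE formulation and an illustrative weighting factor.

```python
# Sketch: endpoint-slack loss with an auxiliary edge-delay loss.
import torch
import torch.nn.functional as F

def timing_gnn_loss(pred_endpoint_slack, true_endpoint_slack,
                    pred_edge_delay, true_edge_delay, aux_weight=0.5):
    main = F.mse_loss(pred_endpoint_slack, true_endpoint_slack)
    aux = F.mse_loss(pred_edge_delay, true_edge_delay)  # local supervision
    return main + aux_weight * aux

loss = timing_gnn_loss(torch.randn(16), torch.randn(16),
                       torch.randn(100), torch.randn(100))
```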


2022 DAC

Accurate Timing Prediction at Placement Stage with Look-Ahead RC Network

Author: Xu He, Zhiyong Fu, Yao Wang, Chang Liu, and Yang Guo

Affiliation: Hunan University; National University of Defense Technology Changsha, China

Abstract:

Timing closure is a critical but effort-intensive task in VLSI design. In the placement stage, a fast and accurate net delay estimator is highly desirable to guide timing optimization prior to routing, and thus reduce timing pessimism and shorten design turn-around time. To handle the timing uncertainty at the placement stage, we propose a fast machine-learning-based net delay predictor that extracts full timing features using a look-ahead RC network. Experimental results show that the proposed timing predictor achieves an average correlation over 0.99 with the post-routing sign-off timing results obtained from Synopsys PrimeTime.
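For context on placement-stage RC-network features: the classic first-order net-delay estimate from an RC tree is the Elmore delay, sketched below in plain Python. Whether the paper's features use Elmore delay exactly is an assumption; the formula itself is standard — the delay to a sink is the sum, over edges on the path, of each edge resistance times its total downstream capacitance.

```python
# Sketch: Elmore delay on an RC tree rooted at the driver.
def elmore_delays(children, res, cap, root=0):
    """children[i]: child nodes of i; res[i]: resistance of the edge into i;
    cap[i]: node capacitance. Returns Elmore delay from root to each node."""
    def subtree_cap(n):
        # total capacitance in the subtree rooted at n (computed recursively)
        return cap[n] + sum(subtree_cap(c) for c in children.get(n, []))

    delay = {root: 0.0}
    stack = [root]
    while stack:
        n = stack.pop()
        for c in children.get(n, []):
            # each edge adds R_edge * (total capacitance downstream of it)
            delay[c] = delay[n] + res[c] * subtree_cap(c)
            stack.append(c)
    return delay

# driver at node 0, branch point at 1, sinks at 2 and 3
children = {0: [1], 1: [2, 3]}
res = {1: 2.0, 2: 1.0, 3: 1.5}
cap = {0: 0.0, 1: 1.0, 2: 2.0, 3: 1.0}
print(elmore_delays(children, res, cap))
# node 2: 2.0*(1+2+1) + 1.0*2 = 10.0 ; node 3: 8.0 + 1.5*1 = 9.5
```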


2021 ICCAD

Doomed Run Prediction in Physical Design by Exploiting Sequential Flow and Graph Learning

Author: Yi-Chen Lu, Siddhartha Nath, Vishal Khandelwal, Sung Kyu Lim

Affiliation: Synopsys Inc., Hillsboro, OR; School of ECE, Georgia Institute of Technology, Atlanta, GA; Synopsys Inc., Mountain View, CA

Abstract:

Modern designs are increasingly reliant on physical design (PD) tools to derive the full technology scaling benefits of Moore's Law. Designers often perform power, performance, and area (PPA) exploration through parallel PD runs with different tool configurations. Efficient exploration of PPA is mission-critical for chip designers who are working with stringent time-to-market constraints and finite compute resources. Therefore, a framework that can accurately predict a “doomed run” (i.e., one that will not meet the PPA targets) at early phases of the PD flow can provide a significant productivity boost by enabling early termination of such runs. Multiple QoR metrics can be leveraged to classify successful or doomed PD runs. In this paper, we specifically focus on the aspect of timing, where our goal is to identify the PD runs that cannot achieve end-of-flow timing results by predicting the post-route total negative slack (TNS) values in early PD phases. To achieve our goal, we develop an end-to-end machine learning (ML) framework that performs TNS prediction by modeling PD implementation as a sequential flow. Particularly, our framework leverages graph neural networks (GNNs) to encode netlist graphs extracted from various PD phases, and utilizes long short-term memory (LSTM) networks to perform sequential modeling based on the GNN-encoded features. Experimental results on seven industrial designs with a 5:2 train/test split ratio demonstrate that our framework predicts post-route TNS values in high fidelity, within 5.2% normalized root mean squared error (NRMSE), at early design stages (e.g., placement, CTS) on the two validation designs unseen during training.


2019 DAC

Machine Learning-Based Pre-routing Timing Prediction with Reduced Pessimism

Author: Erick Carvajal Barboza; Nishchal Shukla; Yiran Chen; Jiang Hu

Affiliation: Texas A&M University; Duke University

Abstract:

Optimizations at the placement stage need to be guided by timing estimation prior to routing. To handle the timing uncertainty due to the lack of routing information, people tend to make very pessimistic predictions so that performance specifications can be ensured in the worst case. Such pessimism causes over-design that wastes chip resources or design effort. In this work, a machine learning-based pre-routing timing prediction approach is introduced. Experimental results show that it can reach accuracy near post-routing sign-off analysis. Compared to a commercial pre-routing timing estimation tool, it reduces the false-positive rate by about 2/3 in reporting timing violations.


2018 ICCD

Using Machine Learning to Predict Path-Based Slack from Graph-Based Timing Analysis

Author: Andrew B. Kahng, Uday Mallappa, and Lawrence Saul

Affiliation: CSE and ECE Departments, UC San Diego, La Jolla, CA, USA

Abstract:

With diminishing margins in advanced technology nodes, the accuracy of timing analysis is a serious concern. Improved accuracy helps to reduce overdesign, particularly in P&R-based optimization and timing closure steps, but comes at the cost of runtime. A major factor in accurate estimation of timing slack, especially for low-voltage corners, is the propagation of transition time. In graph-based analysis (GBA), the worst-case transition time is propagated through a given gate, independent of the path under analysis, and is hence pessimistic. The timing pessimism results in overdesign and/or the inability to fully recover power and area during optimization. In path-based analysis (PBA), path-specific transition times are propagated, reducing pessimism. However, PBA incurs severe (4X or more) runtime overheads relative to GBA, and is often avoided in the early stages of physical implementation. With numerous operating corners, use of PBA is even more burdensome. In this paper, we propose a machine learning model, based on bigrams of path stages, to predict expensive PBA results from relatively inexpensive GBA results. We identify electrical and structural features of the circuit that affect PBA-GBA divergence with respect to endpoint arrival times. We use GBA and PBA analysis of a given testcase design, along with artificially generated timing paths, in combination with a classification and regression tree (CART) approach, to develop a predictive model for PBA-GBA divergence. Empirical studies demonstrate that our model has the potential to substantially reduce pessimism while retaining the lower turnaround time of GBA analysis. For example, a model trained on a post-CTS database and tested on a post-route database for the leon3mp design in 28nm FDSOI foundry enablement reduces divergence from true PBA slack (i.e., model prediction divergence versus GBA divergence) from 9.43ps to 6.06ps (mean absolute error), 26.79ps to 19.72ps (99th percentile error), and 50.78ps to 39.46ps (maximum error).
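The bigram-of-path-stages featurization with a CART regressor can be sketched as below: count consecutive (cell type, cell type) pairs along each path and regress the PBA-GBA divergence. The cell-type vocabulary and toy labels are illustrative assumptions.

```python
# Sketch: CART regression on bigram-of-path-stages features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

CELL_TYPES = ["INV", "NAND", "NOR", "BUF"]
BIGRAMS = [(a, b) for a in CELL_TYPES for b in CELL_TYPES]

def bigram_features(path):
    """path: list of cell types along a timing path -> bigram count vector."""
    counts = dict.fromkeys(BIGRAMS, 0)
    for a, b in zip(path, path[1:]):
        counts[(a, b)] += 1
    return [counts[bg] for bg in BIGRAMS]

paths = [["INV", "NAND", "INV", "BUF"], ["NOR", "NOR", "INV"],
         ["BUF", "NAND", "NAND", "INV"]]
divergence_ps = [9.4, 3.1, 6.8]   # toy PBA-GBA endpoint divergence labels

X = np.array([bigram_features(p) for p in paths])
cart = DecisionTreeRegressor(max_depth=4).fit(X, divergence_ps)
print(cart.predict([bigram_features(["INV", "NAND", "INV"])]))
```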


2017 DAC

LSTA: Learning-Based Static Timing Analysis for High-Dimensional Correlated On-Chip Variations

Author: Song Bian, Michihiro Shintani, Masayuki Hiromoto, Takashi Sato

Affiliation: Department of Communications and Computer Engineering, Kyoto University

Abstract:

As transistor process technology continues to scale, the aging effect poses new challenges to the already complex static timing analysis (STA) process. In this paper, we first observe that aging can be thought of as a type of correlated dynamic on-chip variation (OCV), and identify the problem introduced by this type of OCV. In particular, we take negative bias temperature instability (NBTI) as an example dynamic OCV mechanism. We then propose a learning-based STA (LSTA) library to "predict" the timing of gates by capturing the correlation between our designed predictors. In the experiments, we used a linear regressor, support vector regression, and a non-linear method, random forest, to create the prediction models. An ISCAS'89 benchmark circuit is used as a training sample for the algorithms to learn the aging model of gates, and the accuracy of the models is then evaluated on two processor-scale designs using the library, achieving a maximum absolute error of 3.42%.


2016 ASP-DAC

Learning-Based Prediction of Embedded Memory Timing Failures During Initial Floorplan Design

Author: Wei-Ting J. Chan, Kun Young Chung, Andrew B. Kahng, Nancy D. MacDonald, Siddhartha Nath

Affiliation: CSE and ECE Departments, UC San Diego, La Jolla, CA, USA

Abstract:

Embedded memories are critical to the success or failure of complex system-on-chip (SoC) products. They can be significant yield detractors as a consequence of occupying substantial die area, creating placement and routing blockages, and having stringent Vccmin and power integrity requirements. Achieving timing-correctness for embedded memories in advanced nodes is costly (e.g., closing the design at multiple logic-memory cross-corners). Further, multiphysics (e.g., crosstalk, IR, etc.) signoff analyses make early understanding and prediction of timing-correctness even more difficult. With long tool and design closure subflow runtimes, design teams need improved prediction of embedded memory timing failures, as early as possible in the implementation flow. In this work, we propose a learning-based methodology to perform early prediction of timing failure risk given only the netlist, timing constraints, and floorplan context (wherein the memories have been placed). Our contributions include (i) identification of relevant netlist and floorplan parameters, (ii) avoidance of long P&R tool runtimes (up to a week or even more) through early prediction, and (iii) a new implementation of Boosting with Support Vector Machine regression focused on negative-slack outcomes through weighting in the model construction. We validate the accuracy of our prediction models across a range of “multiphysics” analysis regimes, and with multiple designs and floorplans in 28FDSOI foundry technology. Our work can be used to identify which memories are “at risk”, guide floorplan changes to reduce predicted “risk”, and help refine underlying SoC implementation methodologies. Experimental results in 28nm FDSOI technology show that we can predict P&R slack with multiphysics analysis to within 253ps (average error less than 10ps) using only post-synthesis netlist, constraints, and floorplan information. Our predictions are 40% more accurate than those (worst-case error of 358ps and average error of 42ps) of a nonlinear Support Vector Machine model that uses only post-synthesis netlist information.
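A minimal scikit-learn sketch of Boosting with SVM regression, with extra sample weight on negative-slack outcomes as the abstract describes, is given below. The synthetic data, kernel, and weight values are illustrative assumptions.

```python
# Sketch: boosted SVR with emphasis on negative-slack samples.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))            # netlist/floorplan parameters (toy)
slack_ps = X @ rng.normal(size=6) * 50   # toy P&R slack labels (ps)

# emphasize negative-slack outcomes in model construction
weights = np.where(slack_ps < 0, 5.0, 1.0)

model = AdaBoostRegressor(SVR(kernel="rbf", C=10.0), n_estimators=20,
                          random_state=0)
model.fit(X, slack_ps, sample_weight=weights)
print("predicted slack (ps):", model.predict(X[:3]))
```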


AI+EDA

Timing analysis and prediction