2024 ISPD

Engineering the Future of IC Design with AI.

Author:Ruchir Puri

Affiliation: IBM Research, Yorktown Heights, NY, USA

Abstract:

Software and Semiconductors are two fundamental technologies that have become woven into every aspect of our society, and it is fair to say that "Software and Semiconductors have eaten the world". More recently, advances in AI have begun to transform every aspect of our society as well. These three tectonic forces of transformation - "AI", "Software", and "Semiconductors" - are colliding, resulting in a seismic shift: a future where both software and semiconductor chips will themselves be designed, optimized, and operated by AI, pushing us towards a future where "Computers can program themselves!". In this talk, we will discuss these forces of "AI for Chips and Code" and how the future of semiconductor chip design and software engineering is being redefined by AI.

 

 

 

 

2024 CAD

Artisan: Automated Operational Amplifier Design via Domain-specific Large Language Model

Author: Zihao Chen, Jiangli Huang, Yiting Liu, Fan Yang, Li Shang, Dian Zhou (The University of Texas at Dallas), Xuan Zeng

Affiliation:Fudan University

Abstract:

This paper presents Artisan, an automated operational amplifier design framework using large language models. We develop a bidirectional representation to align abstract circuit topologies with their structural and functional semantics. We further employ Tree-of-Thoughts and Chain-of-Thought approaches to model the design process as a hierarchical question-answer sequence, implemented through a multi-agent interaction mechanism. A high-quality opamp dataset is developed to enhance the design proficiency of Artisan. Experimental results demonstrate that Artisan outperforms state-of-the-art optimization-based methods and benchmark LLMs in success rate, circuit performance metrics, and interpretability, while accelerating the design process by up to 50.1×. Artisan will be released for public access.
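Note: the sketch below illustrates how a hierarchical question-answer design loop with cooperating agents could be organized in principle. The `call_llm` stub, the agent roles, and the design stages are illustrative assumptions only and do not reproduce Artisan's released implementation.

```python
# Minimal sketch of a hierarchical question-answer design loop with two
# cooperating "agents". call_llm is a placeholder for any LLM backend;
# the stage list and prompts are illustrative, not Artisan's actual ones.

def call_llm(prompt: str) -> str:
    """Placeholder LLM call; replace with a real API or local model."""
    return f"<answer to: {prompt[:60]}...>"

DESIGN_STAGES = [            # assumed decomposition of the opamp design task
    "Choose an opamp topology that meets the gain/bandwidth spec.",
    "Size the input differential pair and load devices.",
    "Add compensation and verify phase margin.",
]

def designer_agent(question: str, context: list[str]) -> str:
    prompt = "\n".join(context + [f"Designer task: {question}"])
    return call_llm(prompt)

def reviewer_agent(question: str, answer: str) -> str:
    return call_llm(f"Review this answer to '{question}':\n{answer}")

def run_design_session(spec: str) -> list[tuple[str, str]]:
    context, transcript = [f"Specification: {spec}"], []
    for question in DESIGN_STAGES:
        answer = designer_agent(question, context)
        critique = reviewer_agent(question, answer)
        context += [f"Q: {question}", f"A: {answer}", f"Review: {critique}"]
        transcript.append((question, answer))
    return transcript

if __name__ == "__main__":
    for q, a in run_design_session("DC gain > 80 dB, GBW > 10 MHz, load 2 pF"):
        print(q, "->", a)
```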

 

 

 

 

2023 TCAD

PTPT: Physical Design Tool Parameter Tuning via Multi-Objective Bayesian Optimization.

Author: Hao Geng, Tinghuan Chen, Yuzhe Ma, Binwu Zhu, Bei Yu

Affiliation: School of Information Science and Technology, ShanghaiTech University, Shanghai, China; Microelectronics Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong

Abstract:

Physical design flow through associated electronic design automation (EDA) tools plays an imperative role in advanced integrated circuit design. The parameters fed into physical design tools are mostly picked manually, based on the domain knowledge of experts. However, owing to ever-shrinking technology nodes, the complexity of the design space spanned by parameter combinations, and the time-consuming simulation process, such manual exploration of tool parameter configurations has become extremely laborious. A few works address design flow parameter tuning, but very few explore the complex correlations among multiple quality-of-result (QoR) metrics of interest (e.g., delay, power, and area) or explicitly optimize these goals simultaneously. To overcome these weaknesses and seek effective parameter settings for physical design tools, in this article we propose a multi-objective Bayesian optimization (BO) framework with a multi-task Gaussian process model as the surrogate model. An information-gain-based acquisition function is adopted to sequentially choose candidates for tool simulation, efficiently approximating the Pareto-optimal parameter configurations. Experimental results on three industrial benchmarks under the 7-nm technology node demonstrate the superiority of the proposed framework over cutting-edge prior works.
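Note: the sketch below shows the general shape of multi-objective Bayesian optimization over tool parameters. Independent Gaussian processes per metric and a random-weight scalarization stand in for the paper's multi-task GP surrogate and information-gain acquisition, and `run_tool` is a synthetic stand-in for an actual EDA tool run.

```python
# Simplified multi-objective Bayesian optimization over tool parameters.
# Independent GPs per metric and random-weight scalarization stand in for
# the paper's multi-task GP surrogate and information-gain acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_tool(x):
    """Stand-in for a physical-design tool run returning (delay, power, area)."""
    return np.array([np.sum((x - 0.3) ** 2),
                     np.sum((x - 0.7) ** 2),
                     np.sum(np.abs(x - 0.5))])

dim, n_init, n_iter = 4, 8, 20
X = rng.random((n_init, dim))                 # initial parameter samples in [0,1]^dim
Y = np.array([run_tool(x) for x in X])        # observed QoR metrics

for _ in range(n_iter):
    # Fit one GP per metric (a multi-task GP would also model metric correlations).
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, Y[:, m])
           for m in range(Y.shape[1])]
    # Score random candidates with a randomly weighted lower confidence bound.
    cand = rng.random((256, dim))
    w = rng.dirichlet(np.ones(Y.shape[1]))
    score = np.zeros(len(cand))
    for wm, gp in zip(w, gps):
        mu, sd = gp.predict(cand, return_std=True)
        score += wm * (mu - 1.0 * sd)          # minimizing: lower is better
    x_next = cand[np.argmin(score)]
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, run_tool(x_next)])

# Report the non-dominated (Pareto) parameter settings found so far.
pareto = [i for i, yi in enumerate(Y)
          if not any(np.all(yj <= yi) and np.any(yj < yi) for yj in Y)]
print("Pareto-optimal configurations:", X[pareto].round(3))
```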

 

 

 

 

2023 ICCAD

Invited Paper: CircuitOps: An ML Infrastructure Enabling Generative AI for VLSI Circuit Optimization.

Author:Rongjian Liang, Anthony Agnesina, Geraldo Pradipta, Vidya A. Chhabria, Haoxing Ren

Affiliation: Arizona State University, Tempe, AZ, US; NVIDIA, Santa Clara, CA, US; NVIDIA, Austin, TX, US

Abstract:

An innovative ML infrastructure named CircuitOps is developed to streamline dataset generation and model inference for various generative AI (GAI)-based circuit optimization tasks. Addressing the challenges of the absence of a shared intermediate representation (IR), steep EDA learning curves, and AI-unfriendly data structures, we propose solutions that enable efficient data handling. Our contributions encompass the following: (1) labeled property graphs (LPGs) as the IR for flexible netlist representation and efficient parallel processing; (2) tool-agnostic IR generation from standard EDA files; (3) customizable dataset generation facilitated through AI-friendly LPGs; and (4) gRPC-based inference deployment. Compared with using the Tcl interfaces of EDA design tools, CircuitOps achieves a significant 99× dataset-generation speedup and 75K nets-per-second transfer throughput, validating its effectiveness in supporting GAI-based optimization tasks.
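Note: the sketch below shows what a labeled property graph used as a netlist IR might look like, using networkx. The node labels, properties, and toy netlist are illustrative assumptions and do not reproduce CircuitOps' actual schema or file readers.

```python
# Sketch of a labeled property graph (LPG) as a netlist IR, using networkx.
# Node labels/properties and the toy netlist are illustrative only; they do
# not reproduce CircuitOps' actual schema.
import networkx as nx

G = nx.DiGraph()

# Cell instances and nets become labeled nodes with properties.
G.add_node("u1", label="cell", ref="NAND2_X1", area=0.53, slack=-0.02)
G.add_node("u2", label="cell", ref="INV_X2", area=0.27, slack=0.10)
G.add_node("n1", label="net", fanout=1, cap=0.004)

# Pin connections become labeled edges.
G.add_edge("u1", "n1", label="drives", pin="ZN")
G.add_edge("n1", "u2", label="loads", pin="A")

# An "AI-friendly" view: flatten cell properties into a feature table.
cell_features = [
    {"name": n, **{k: v for k, v in d.items() if k != "label"}}
    for n, d in G.nodes(data=True) if d["label"] == "cell"
]
for row in cell_features:
    print(row)
```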

 

 

 

 

2023 arXiv

ChipNeMo: Domain-adapted LLMs for chip design.

Author:

Mingjie Liu, Teodor-Dumitru Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, Bonita Bhaskaran, Bryan Catanzaro, Arjun Chaudhuri, Sharon Clay, Bill Dally, Laura Dang, Parikshit Deshpande, Siddhanth Dhodhi, Sameer Halepete, Eric Hill, Jiashang Hu, Sumit Jain, Brucek Khailany, George Kokai, Kishor Kunal, Xiaowei Li, Charley Lind, Hao Liu, Stuart Oberman, Sujeet Omar, Sreedhar Pratty, Jonathan Raiman, Ambar Sarkar, Zhengjiang Shao, Hanfei Sun, Pratik P Suthar, Varun Tej, Walker Turner, Kaizhe Xu, and Haoxing Ren.

Affiliation: NVIDIA

Abstract:

ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we adopt the following domain adaptation techniques: domain-adaptive tokenization, domain-adaptive continued pretraining, model alignment with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our evaluations demonstrate that domain-adaptive pretraining of language models can lead to superior performance on domain-related downstream tasks compared to their base LLaMA2 counterparts, without degradation in generic capabilities. In particular, our largest model, ChipNeMo-70B, outperforms the highly capable GPT-4 on two of our use cases, namely the engineering assistant chatbot and EDA script generation, while exhibiting competitive performance on bug summarization and analysis. These results underscore the potential of domain-specific customization for enhancing the effectiveness of large language models in specialized applications.
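Note: the sketch below illustrates only the domain-adaptive tokenization step in its simplest form with Hugging Face transformers: add domain terms to a pretrained tokenizer and resize the embedding table before continued pretraining. The "gpt2" checkpoint and the token list are placeholders; ChipNeMo's actual procedure on LLaMA2-family models differs in detail.

```python
# Sketch of domain-adaptive tokenization: extend a pretrained tokenizer with
# domain terms and resize the model's embedding table before continued
# pretraining. The "gpt2" checkpoint and token list are placeholders; the
# paper adapts LLaMA2-family models with its own vocabulary procedure.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # placeholder small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

domain_terms = ["setup_slack", "clock_gating", "get_db", "report_timing"]
num_added = tokenizer.add_tokens(domain_terms)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} domain tokens; vocab size is now {len(tokenizer)}")

# Continued pretraining on domain corpora and instruction alignment would
# follow here (e.g., with the Trainer API); omitted for brevity.
```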

 

 

 

 

2023 arXiv

ChatEDA: A large language model powered autonomous agent for EDA

Author: Zhuolun He, Haoyuan Wu, Xinyun Zhang, Xufeng Yao, Su Zheng, Haisheng Zheng, and Bei Yu.

Affiliation: The Chinese University of Hong Kong; Shanghai Artificial Intelligence Laboratory

Abstract:

The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers. Recent advancements in large language models (LLMs) have showcased their exceptional capabilities in natural language processing and comprehension, offering a novel approach to interfacing with EDA tools. This paper introduces ChatEDA, an autonomous agent for EDA empowered by a large language model, AutoMage, complemented by EDA tools serving as executors. ChatEDA streamlines the design flow from the Register-Transfer Level (RTL) to the Graphic Data System Version II (GDSII) by effectively managing task planning, script generation, and task execution. Through comprehensive experimental evaluations, ChatEDA has demonstrated its proficiency in handling diverse requirements, and our fine-tuned AutoMage model has exhibited superior performance compared to GPT-4 and other similar LLMs.
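Note: the sketch below shows the plan-then-generate-then-execute agent pattern described in the abstract, in skeletal form. The `call_llm` stub and the step list are assumptions, and generated scripts are only printed, so no EDA tool installation or AutoMage model is actually invoked.

```python
# Sketch of the plan -> generate-script -> execute agent pattern described in
# the abstract. call_llm and the step list are stubs; generated scripts are
# only printed, so no actual EDA tool installation is assumed.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for the fine-tuned AutoMage model (or any instruction-tuned LLM)."""
    if "plan" in prompt.lower():
        return json.dumps(["synthesize RTL", "floorplan", "place", "route", "export GDSII"])
    return "# Tcl script placeholder generated for: " + prompt.splitlines()[-1]

def plan_tasks(request: str) -> list[str]:
    return json.loads(call_llm(f"Plan the EDA steps for this request:\n{request}"))

def generate_script(step: str) -> str:
    return call_llm(f"Write a Tcl script for the step below.\n{step}")

def execute(script: str) -> None:
    # A real agent would dispatch this to the tool's shell (e.g., via subprocess).
    print("Would execute:\n", script)

if __name__ == "__main__":
    request = "Take my RTL through to GDSII and optimize for leakage power."
    for step in plan_tasks(request):
        execute(generate_script(step))
```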

 

 

 

 

2022 ISPD

Improving Chip Design Performance and Productivity Using Machine Learning.

Author:Narender Hanchate

Affiliation: Cadence Design Systems, Inc., 2655 Seely Ave., San Jose, CA 95134

Abstract:

Engineering teams are always under pressure to deliver increasingly aggressive power, performance, and area (PPA) goals, as fast as possible, across many concurrent projects. Chip designers often spend significant time tuning the implementation flow for each project to meet these goals. Cadence Cerebrus machine-learning chip design flow optimization automates this whole process, delivering better PPA much more quickly. In this presentation, Cadence will discuss the machine learning and distributed computing techniques in Cerebrus that enable RTL-to-GDS flow optimization, delivering better engineering productivity and design performance.

 

 

 

 

2022 DAC

PPATuner: pareto-driven tool parameter auto-tuning in physical design via gaussian process transfer learning.

Author: Hao Geng, Qi Xu, Tsung-Yi Ho, Bei Yu

Affiliation: ShanghaiTech & CUHK; USTC

Abstract:

Owing to relentless semiconductor scaling, growing design complexity makes the synthesis-centric very large-scale integration (VLSI) design flow rely increasingly on electronic design automation (EDA) tools. However, invoking EDA tools, especially the physical synthesis tool, may require several hours or even days for a single parameter combination. Even worse, for a new design, reaching high quality-of-results (QoR) after physical synthesis requires many tool runs over numerous combinations of tunable tool parameters. Additionally, designers must weigh multiple QoR metrics of interest (e.g., delay, power, and area) simultaneously. To tackle this dilemma within a finite resource budget, a multi-objective parameter auto-tuning framework for physical design tools is needed, one that can learn from historical tool configurations and transfer the associated knowledge to new tasks. In this paper, we propose PPATuner, a Pareto-driven physical design tool parameter tuning methodology, to achieve a good trade-off among multiple QoR metrics of interest (e.g., power, area, delay) at the physical design stage. By incorporating a transfer Gaussian process (GP) model, it autonomously learns transferable knowledge from existing tool parameter combinations. The experimental results on industrial benchmarks under the 7nm technology node demonstrate the merits of our framework.
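Note: the sketch below shows one simple form of Gaussian-process transfer for parameter tuning: a GP fit on an earlier design's (parameters, delay) data supplies a prior that a residual GP corrects with a few runs on the new design. This is a stand-in under stated assumptions, not a reproduction of the paper's transfer GP formulation; the two objective functions are synthetic.

```python
# Sketch of one simple form of Gaussian-process transfer for parameter tuning:
# a GP fit on an old design's (parameters -> delay) data provides a prior,
# and a second GP models the residual on the new design. This is a stand-in
# for, not a reproduction of, the paper's transfer GP model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
f_old = lambda x: np.sin(3 * x[:, 0]) + x[:, 1]                  # "delay" on the old design
f_new = lambda x: np.sin(3 * x[:, 0]) + x[:, 1] + 0.3 * x[:, 0]  # shifted on the new design

X_old, X_new = rng.random((40, 2)), rng.random((6, 2))    # many old runs, few new runs
gp_old = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X_old, f_old(X_old))

residual = f_new(X_new) - gp_old.predict(X_new)           # what the old model misses
gp_res = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X_new, residual)

X_test = rng.random((5, 2))
pred = gp_old.predict(X_test) + gp_res.predict(X_test)    # transferred prediction
print(np.c_[pred, f_new(X_test)].round(3))                # prediction vs. truth
```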

 

 

 

 

2020 ICCAD

VLSI placement parameter optimization using deep reinforcement learning.

Author: Anthony Agnesina, Kyungwook Chang, and Sung Kyu Lim.

Affiliation: School of ECE, Georgia Institute of Technology, Atlanta, GA

Abstract:

The quality of placement is essential in the physical design flow. To achieve PPA goals, a human engineer typically spends a considerable amount of time tuning the many settings of a commercial placer (e.g., maximum density, congestion effort, etc.). This paper proposes a deep reinforcement learning (RL) framework to optimize the placement parameters of a commercial EDA tool. We build an autonomous agent that learns to tune parameters optimally without human intervention or domain knowledge, trained solely by RL from self-search. To generalize to unseen netlists, we use a mixture of handcrafted features from graph topology theory along with graph embeddings generated by unsupervised Graph Neural Networks. Our RL algorithms are chosen to overcome the sparsity of data and the latency of placement runs. Our trained RL agent achieves up to 11% and 2.5% wirelength improvements on unseen netlists compared with a human engineer and a state-of-the-art tool auto-tuner, respectively, in just one placement iteration (20× and 50× fewer iterations).
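Note: the sketch below is a greatly simplified stand-in for the idea of learning placer settings from rewards: an epsilon-greedy bandit over discrete parameter combinations, rewarded by negative wirelength from a stubbed placement run. The paper's actual method uses deep RL with graph-topology features and GNN embeddings, not a bandit, and `run_placer` is synthetic.

```python
# Greatly simplified stand-in for RL-based placer parameter tuning: an
# epsilon-greedy bandit over discrete settings, rewarded by negative
# wirelength from a stubbed placement run.
import itertools
import numpy as np

rng = np.random.default_rng(2)
densities = [0.6, 0.7, 0.8, 0.9]
efforts = ["low", "medium", "high"]
actions = list(itertools.product(densities, efforts))

def run_placer(density, effort):
    """Stub: pretend wirelength responds to the settings, plus run-to-run noise."""
    base = 100 - 20 * (0.8 - abs(density - 0.8)) - {"low": 0, "medium": 3, "high": 5}[effort]
    return base + rng.normal(0, 1)

q = np.zeros(len(actions))          # running estimate of -wirelength per action
n = np.zeros(len(actions))
for t in range(60):
    a = rng.integers(len(actions)) if rng.random() < 0.2 else int(np.argmax(q))
    reward = -run_placer(*actions[a])
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]  # incremental mean update

best = actions[int(np.argmax(q))]
print("Best settings found:", {"max_density": best[0], "congestion_effort": best[1]})
```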

 

 

 

 

2020 ASP-DAC

FIST: A feature-importance sampling and tree-based method for automatic design flow parameter tuning.

Author:Zhiyao Xie, Guan-Qi Fang, Yu-Hung Huang, Haoxing Ren, Yanqing Zhang, Brucek Khailany, Shao-Yun Fang, Jiang Hu, Yiran Chen, and Erick Carvajal Barboza

Affiliation:Duke University

Abstract:

Design flow parameters are of utmost importance to chip design quality, yet evaluating their effects takes a painfully long time. In practice, flow parameter tuning is usually performed manually, based on designers' experience, in an ad hoc manner. In this work, we introduce a machine learning-based automatic parameter tuning methodology that aims to find the best design quality within a limited number of trials. Instead of merely plugging in machine learning engines, we develop clustering and approximate sampling techniques to improve tuning efficiency. The feature extraction in this method can reuse knowledge from prior designs. Furthermore, we leverage a state-of-the-art XGBoost model and propose a novel dynamic tree technique to overcome overfitting. Experimental results on benchmark circuits show that our approach achieves a 25% improvement in design quality or a 37% reduction in sampling cost compared to a random forest method, the kernel of a highly cited previous work. Our approach is further validated on two industrial designs: by sampling less than 0.02% of possible parameter sets, it reduces area by 1.83% and 1.43% compared to the best solutions hand-tuned by experienced designers.
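Note: the sketch below shows a minimal model-guided tuning loop with an XGBoost surrogate, the core pattern behind flow-parameter auto-tuning. FIST's clustering, feature-importance sampling, and dynamic-tree refinements are omitted, and `evaluate_flow` is a synthetic stand-in for a full flow run.

```python
# Minimal model-guided parameter-tuning loop with an XGBoost surrogate.
# FIST's clustering, feature-importance sampling, and dynamic-tree tricks are
# omitted; evaluate_flow is a stub standing in for a full flow run.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
n_params = 6

def evaluate_flow(x):
    """Stub QoR metric (lower is better) for a binary parameter vector."""
    return float(np.sum((x - np.array([1, 0, 1, 1, 0, 1])) ** 2) + rng.normal(0, 0.1))

X = rng.integers(0, 2, size=(10, n_params)).astype(float)   # initial random trials
y = np.array([evaluate_flow(x) for x in X])

for _ in range(15):
    model = XGBRegressor(n_estimators=100, max_depth=3, verbosity=0).fit(X, y)
    cand = rng.integers(0, 2, size=(200, n_params)).astype(float)
    x_next = cand[np.argmin(model.predict(cand))]            # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_flow(x_next))

print("Best parameter set:", X[np.argmin(y)], "QoR:", round(float(np.min(y)), 3))
```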

 

 

 

 

2019 DAC

A learning-based recommender system for autotuning design flows of industrial high-performance processors

Author:Jihye Kwon, Matthew M Ziegler, and Luca P Carloni

Affiliation: Department of Computer Science, Columbia University, New York, NY, USA

Abstract:

Logic synthesis and physical design (LSPD) tools automate complex design tasks previously performed by human designers. One time-consuming task that remains manual is configuring the LSPD flow parameters, which significantly impacts design results. To reduce the parameter-tuning effort, we propose an LSPD parameter recommender system that learns a collaborative prediction model through tensor decomposition and regression. Using a model trained with archived data from multiple state-of-the-art 14nm processors, we reduce the exploration cost while achieving comparable design quality. Furthermore, we demonstrate the transfer-learning properties of our approach by showing that this model can be successfully applied to 7nm designs.
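Note: the sketch below conveys the recommender idea in miniature: complete a sparse design-by-parameter-configuration QoR matrix with a low-rank (truncated SVD) model and recommend the best unexplored configuration. This low-rank matrix completion is a stand-in for the paper's tensor decomposition plus regression, and all data are synthetic.

```python
# Toy sketch of recommending flow parameters from archived runs: complete a
# sparse (design x parameter-configuration) QoR matrix with truncated SVD.
# This stands in for the paper's tensor decomposition + regression.
import numpy as np

rng = np.random.default_rng(4)
n_designs, n_configs, rank = 8, 12, 2

# Synthetic ground-truth QoR with low-rank structure, then hide 40% of entries.
truth = rng.random((n_designs, rank)) @ rng.random((rank, n_configs))
mask = rng.random(truth.shape) > 0.4
observed = np.where(mask, truth, np.nan)

# Simple iterative SVD imputation: fill missing entries with column means,
# then repeatedly project onto the top-`rank` singular subspace.
filled = np.where(mask, observed, np.nanmean(observed, axis=0))
for _ in range(50):
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    filled = np.where(mask, observed, low_rank)

new_design = 0  # recommend the best unexplored configuration for this design
unseen = np.where(~mask[new_design])[0]
if unseen.size == 0:            # guard: row happened to be fully observed
    unseen = np.arange(n_configs)
best = unseen[np.argmin(filled[new_design, unseen])]
print(f"Recommended config {best}: predicted QoR {filled[new_design, best]:.3f}, "
      f"true QoR {truth[new_design, best]:.3f}")
```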

 

 

 

 

2018 DAC

Developing synthesis flows without human knowledge

Author:Cunxi Yu, Houping Xiao, and Giovanni De Micheli

Affiliation: Integrated Systems Laboratory, EPFL, Lausanne, Switzerland; SUNY Buffalo, Buffalo, NY, USA

Abstract:

Design flows are the explicit combinations of design transformations, primarily involved in the synthesis, placement, and routing processes, used to accomplish the design of Integrated Circuits (ICs) and Systems-on-Chip (SoCs). Mostly, the flows are developed based on the knowledge of experts. However, due to the large search space of design flows and increasing design complexity, developing Intellectual Property (IP)-specific synthesis flows that provide high Quality of Result (QoR) is extremely challenging. This work presents a fully autonomous framework that artificially produces design-specific synthesis flows without human guidance or baseline flows, using a Convolutional Neural Network (CNN). The approach is demonstrated by successfully designing logic synthesis flows for three large-scale designs.
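Note: the sketch below illustrates the basic idea of learning over synthesis flows with a CNN: a flow (an ordered sequence of transformation IDs) is encoded as one-hot channels and a small 1-D CNN predicts whether it lands in a "good QoR" bin. The labels come from a synthetic rule, not from real synthesis runs, and the architecture is an illustrative assumption rather than the paper's network.

```python
# Minimal PyTorch sketch: encode a synthesis flow (ordered transformation IDs)
# as one-hot channels and train a small 1-D CNN to predict whether the flow
# falls in a "good QoR" bin. Labels come from a synthetic rule, not from real
# synthesis runs, and the architecture is illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_transforms, flow_len, n_samples = 6, 10, 512

flows = torch.randint(0, n_transforms, (n_samples, flow_len))
# Synthetic label: flows that apply transformation 0 early count as "good".
labels = (flows[:, :3] == 0).any(dim=1).long()
x = nn.functional.one_hot(flows, n_transforms).float().permute(0, 2, 1)  # N x C x L

model = nn.Sequential(
    nn.Conv1d(n_transforms, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):
    opt.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    opt.step()

acc = (model(x).argmax(dim=1) == labels).float().mean().item()
print(f"Training accuracy on the synthetic task: {acc:.2f}")
```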

 

 

 

 


AI+EDA

Automatic flow control and parameter optimization