2024 DAC
ML-based Physical Design Parameter Optimization for 3D ICs: From Parameter Selection to Optimization
Author: Hao-Hsiang Hsiao; Pruek Vanna-iampikul; Yi-Chen Lu; Sung Kyu Lim
Affiliation: Georgia Institute of Technology
Abstract:
While various studies have shown effective parameter optimization for specific designs, there is limited exploration of parameter optimization within the domain of 3D integrated circuits. We present the first comprehensive study, both qualitative and quantitative, comparing five state-of-the-art (SOTA) techniques for parameter optimization applied to 3D ICs. Additionally, we introduce an end-to-end machine-learning-based framework that spans everything from important-parameter selection through optimization, all without human intervention. Extensive studies across six industrial designs under the TSMC 28nm technology node reveal that our proposed framework outperforms the SOTA techniques on three different optimization objectives in both optimization quality and runtime.
2024 DAC
Mixed-Size 3D Analytical Placement with Heterogeneous Technology Nodes
Author: Yan-Jen Chen; Cheng-Hsiu Hsieh; Po-Han Su; Shao-Hsiang Chen; Yao-Wen Chang
Affiliation: National Taiwan University
Abstract:
This paper proposes a mixed-size 3D analytical placement framework for face-to-face stacked integrated circuits fabricated with heterogeneous technology nodes and connected by hybrid bonding technology. The proposed framework efficiently partitions a given netlist into two dies and optimizes the positions of each macro, standard cell, and hybrid bonding terminal (HBT). A multi-technology objective function and a multi-technology density penalty calculation process are adopted to handle the heterogeneous-technology-node constraints during mixed-size 3D global placement. Furthermore, a 3D objective function is used to refine the placement result during HBT-cell co-optimization. Our placer achieves the best results for all contest test cases compared with the participating teams at the 2023 CAD Contest at ICCAD on 3D Placement with Macros.
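The core analytical-placement idea summarized above — minimize wirelength under a density penalty by moving cell coordinates continuously — can be sketched generically. Below is a minimal single-technology toy (not the paper's multi-technology objective or HBT co-optimization); the cell count, net list, quadratic wirelength, pairwise-repulsion density term, and step size are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy netlist: 30 cells, 50 two-pin nets with random endpoints.
n = 30
pos = rng.random((n, 2)) * 10                # initial cell coordinates
nets = [tuple(rng.choice(n, 2, replace=False)) for _ in range(50)]

def wirelength(pos):
    """Quadratic wirelength summed over all two-pin nets."""
    return sum(((pos[a] - pos[b]) ** 2).sum() for a, b in nets)

def grad(pos, lam=0.05):
    g = np.zeros_like(pos)
    for a, b in nets:                        # wirelength term pulls connected cells together
        d = pos[a] - pos[b]
        g[a] += 2 * d
        g[b] -= 2 * d
    diff = pos[:, None] - pos[None, :]       # density stand-in: pairwise repulsion
    dist2 = (diff ** 2).sum(-1) + 1e-3       # avoid division by zero (incl. self-pairs)
    g -= lam * (diff / dist2[..., None]).sum(axis=1)
    return g

wl_start = wirelength(pos)
for _ in range(200):                         # plain gradient descent on the objective
    pos -= 0.01 * grad(pos)
print(round(wl_start, 1), round(wirelength(pos), 1))
```

Real analytical placers replace the toy density term with a smoothed bin-density model and per-technology scaling, but the gradient-descent structure is the same.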
2024 ISPD
AI for EDA/Physical Design: Driving the AI Revolution: The Crucial Role of 3D-IC.
Author: Erick Chao
Affiliation: Cadence Design Systems, Inc., Hsinchu, Taiwan
Abstract:
3D Integrated Circuits (3D-ICs) represent a significant advancement in semiconductor technology, offering enhanced functionality in smaller form factors, improved performance, and cost reductions. These 3D-ICs, particularly those utilizing Through-Silicon Vias (TSVs), are at the forefront of industry trends. They enable the integration of system components from various process nodes, including analog and RF, without being limited to a single node. TSVs outperform wire-bonded System in Package (SiP) in terms of reduced (RLC) parasitics, offering better performance, more power efficiency, and denser implementation. Compared to silicon interposer methods, vertical 3D die stacking achieves higher integration levels, smaller sizes, and quicker design cycles. This presentation introduces a novel AI-driven method designed to tackle the challenges hindering the automation of 3D-IC design flows.
2023 DAC
DeepOHeat: Operator Learning-based Ultra-fast Thermal Simulation in 3D-IC Design.
Author: Ziyue Liu, Yixing Li, Jing Hu, Xinling Yu, Shinyu Shiau, Xin Ai, Zhiyu Zeng, Zheng Zhang
Affiliation: University of California at Santa Barbara, Santa Barbara, CA; Cadence Design Systems, Austin, TX; Cadence Design Systems, San Jose, CA
Abstract:
Thermal issues are a major concern in 3D integrated circuit (IC) design. Thermal optimization of 3D ICs often requires massive, expensive PDE simulations. Neural network-based thermal prediction models can perform real-time prediction for many unseen new designs. However, existing works either solve only 2D temperature fields or do not generalize well to new designs with unseen design configurations (e.g., heat sources and boundary conditions). In this paper, for the first time, we propose DeepOHeat, a physics-aware operator learning framework to predict the temperature field of a family of heat equations with multiple parametric or non-parametric design configurations. This framework learns a functional map from the function space of multiple key PDE configurations (e.g., boundary conditions, power maps, heat transfer coefficients) to the function space of the corresponding solutions (i.e., temperature fields), enabling fast thermal analysis and optimization by changing key design configurations (rather than just some parameters). We test DeepOHeat on several industrial design cases and compare it against Celsius 3D from Cadence Design Systems. Our results show that, for the unseen testing cases, a well-trained DeepOHeat can produce accurate results with a 1000× to 300000× speedup.
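The "functional map" in this abstract is the branch/trunk structure typical of operator learning (DeepONet-style). A minimal untrained sketch of that structure — random weights, a made-up 16×16 power map, and toy layer widths, none of which are the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """Build a random-weight MLP (tanh hidden layers) and return its forward pass."""
    Ws = [rng.normal(0, 1 / np.sqrt(i), (i, o)) for i, o in zip(widths, widths[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

# Branch net: encodes a design configuration (here a 16x16 power map, flattened).
branch = mlp([256, 64, 32])
# Trunk net: encodes a query coordinate (x, y, z) in the die stack.
trunk = mlp([3, 64, 32])

def predict_temperature(power_map, points):
    """Operator-learning prediction: T(p) ~ inner product of branch and trunk embeddings."""
    b = branch(power_map.reshape(1, -1))     # (1, 32) configuration embedding
    t = trunk(points)                        # (N, 32) coordinate embeddings
    return t @ b.T                           # (N, 1) predicted temperatures

power_map = rng.random((16, 16))             # toy power map (one value per tile)
points = rng.random((100, 3))                # query locations in the stack
temps = predict_temperature(power_map, points)
print(temps.shape)  # (100, 1): one temperature per query point
```

Changing `power_map` (a whole input function, not just a scalar parameter) yields a new temperature field from the same trained network, which is what makes the approach fast for design-space exploration.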
2023 DAC
Any-Angle Routing for Redistribution Layers in 2.5D IC Packages.
Author: Min-Hsuan Chung, Je-Wei Chuang, Yao-Wen Chang
Affiliation: Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan
Abstract:
Redistribution layers (RDLs) are widely applied for signal transmission in advanced packages. Traditional RDL routers use only 90- and 135-degree turns for routing. With technological advances, routing in RDLs can use any obtuse angle, leading to larger routing solution spaces and shorter total wirelength. This paper proposes the first any-angle routing algorithm in the literature for multiple RDLs. We first present a novel global routing algorithm with accurate routing-resource estimation. A multi-net access-point adjustment method is then proposed based on dynamic programming and our partial net separation scheme. Finally, we develop an efficient tile routing algorithm to obtain valid routes with fixed access points. Experimental results show that our algorithm achieves 15.7% shorter wirelength than a traditional RDL router.
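The wirelength benefit of relaxing turn angles is easy to see on a single toy connection (the endpoint coordinates below are arbitrary; the octilinear formula assumes dx ≥ dy):

```python
import math

# Route a connection from (0, 0) to (7, 3) under three turn-angle regimes.
dx, dy = 7.0, 3.0

# 90-degree-only (rectilinear / Manhattan) routing.
manhattan = dx + dy

# 90/135-degree (octilinear) routing: one 45-degree diagonal plus a straight run.
octilinear = (dx - dy) + dy * math.sqrt(2)

# Any-angle routing: a single straight segment is allowed.
any_angle = math.hypot(dx, dy)

print(round(manhattan, 2), round(octilinear, 2), round(any_angle, 2))
# 10.0 8.24 7.62
```

The straight-line route is never longer than the constrained ones; summed over thousands of RDL nets, those per-net savings are where gains like the reported 15.7% come from.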
2023 ICCAD
Floorplanning for Embedded Multi-Die Interconnect Bridge Packages.
Author: Chung-Chia Lee, Yao-Wen Chang
Affiliation: Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan
Abstract:
Modern heterogeneous integration requires dense I/O interconnections among chips, such as CPUs and memory, to facilitate bandwidth-aware packaging. The embedded multi-die interconnect bridge (EMIB) has attracted much attention recently by providing high wiring density at low manufacturing cost. However, EMIB optimization must consider constrained wire orientations and crosstalk. This paper presents the first work on floorplanning for EMIB-based packaging. We first model the floorplanning problem for EMIB-based packaging. Based on a hybrid structure of transitive closure graphs (TCGs) and B*-trees, we present a novel simulated-annealing-based algorithm to efficiently generate the desired EMIB-aware floorplans. We employ maximum-spanning-tree-based partitioning and tree-based classification of already-found partial topologies to search for desired solutions more efficiently. Experimental results show that our algorithm significantly improves the area, total wirelength, and computation time compared with simulated annealing based on TCGs alone.
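The simulated-annealing loop at the heart of such floorplanners follows a standard perturb/accept/cool pattern. A deliberately tiny sketch — a 1D chip row with swap moves rather than the paper's TCG/B*-tree hybrid; the chip widths, net list, and cooling schedule are all invented for illustration:

```python
import math
import random

random.seed(5)

# Toy instance: five chips packed in a row, four chip pairs joined by bridges.
widths = [4, 2, 3, 5, 1]                     # hypothetical chip widths
nets = [(0, 3), (1, 2), (0, 4), (2, 3)]      # chip pairs connected by an EMIB

def wirelength(order):
    """Pack chips left to right and sum center-to-center bridge lengths."""
    x, pos = 0.0, {}
    for c in order:
        pos[c] = x + widths[c] / 2
        x += widths[c]
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

order = list(range(5))
cost = best = wirelength(order)
T = 10.0
for _ in range(2000):
    i, j = random.sample(range(5), 2)
    order[i], order[j] = order[j], order[i]  # perturb: swap two chips
    new = wirelength(order)
    if new < cost or random.random() < math.exp((cost - new) / T):
        cost = new                           # accept (occasionally uphill, while T is high)
        best = min(best, cost)
    else:
        order[i], order[j] = order[j], order[i]  # reject: undo the swap
    T *= 0.995                               # geometric cooling schedule

print(best)
```

A real EMIB floorplanner replaces the 1D packing with a 2D floorplan representation and adds orientation and crosstalk terms to the cost, but the annealing skeleton is unchanged.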
2023 ICCAD
Invited Paper: Solving Fine-Grained Static 3DIC Thermal with ML Thermal Solver Enhanced with Decay Curve Characterization.
Author: Haiyang He, Norman Chang, Jie Yang, Akhilesh Kumar, Wenbo Xia, Lang Lin, Rishikesh Ranade
Affiliation: Ansys Inc., USA
Abstract:
Static chip thermal analysis provides a detailed and accurate thermal profile of the chip. The chip power map, commonly modeled as rectangular regions of distinct heat sources, significantly impacts the chip thermal profile. Since the heat sources result from numerous cells in functional blocks, the design space of chip power maps is prohibitively enormous. Numerical simulations are reliable for solving complex power maps; however, they can be very time-consuming when simulating large SoC and/or 3DIC designs. Thus, there is an urgent need to speed up static chip thermal analysis across diverse power maps. In this paper, we propose an approach that integrates our previously developed machine learning thermal solver [1] with decay curve characterization for solving static chip thermal analysis with diverse power maps. The machine learning thermal solver first solves the power maps at a coarse level (e.g., 200 um). The thermal results are then enhanced using the decay curve algorithm, which locally fine-tunes the solution provided by the machine learning thermal solver and calculates the local temperature variations at a finer level (e.g., 10 um). The deep learning models are trained on augmented artificial power maps and tested on realistic chip power maps. Experimental results validate the effectiveness of the proposed approach in offering fast and accurate chip thermal profiles.
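The coarse-to-fine structure described above can be sketched generically: a coarse solution is upsampled, then corrected locally around heat sources. Everything below is a toy stand-in — the grid sizes, hotspot list, and exponential kernel are invented, and the exponential merely plays the role of a characterized decay curve:

```python
import numpy as np

rng = np.random.default_rng(6)

# Coarse solver output (stand-in for the ML thermal solver at ~200 um tiles).
coarse = rng.random((4, 4)) * 5 + 60         # coarse temperature tiles (deg C)
fine = np.kron(coarse, np.ones((20, 20)))    # upsample 4x4 -> 80x80 fine grid

# Local refinement (stand-in for decay-curve characterization at ~10 um):
# add a radially decaying temperature bump around each known heat source.
sources = [(15, 22, 3.0), (60, 50, 1.5)]     # (row, col, peak delta-T) hotspots
rr, cc = np.mgrid[0:80, 0:80]
for r, c, peak in sources:
    dist = np.hypot(rr - r, cc - c)
    fine += peak * np.exp(-dist / 8.0)       # decay-curve-style local correction

print(fine.shape)
```

The point of the split is cost: the expensive model only runs on the coarse grid, while the fine-grained detail comes from cheap, precomputed per-source corrections.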
2022 ASPDAC
Fast Thermal Analysis for Chiplet Design based on Graph Convolution Networks.
Author: Liang Chen, Wentian Jin, Sheldon X.-D. Tan
Affiliation: Department of Electrical and Computer Engineering, University of California, Riverside, CA 92521 USA
Abstract:
2.5D chiplet-based technology promises an efficient integration technique for advanced designs with more functionality and higher performance. Temperature-related thermal optimization and heat removal are of critical importance for temperature-aware physical synthesis of chiplets. This paper presents a novel graph convolutional network (GCN) architecture to estimate the thermal map of 2.5D chiplet-based systems using the thermal resistance networks built by the compact thermal model (CTM). First, we take the total power of all chiplets as an input feature, which is a global feature. This additional global information can overcome the limitation that the GCN can only extract local information via neighborhood aggregation. Second, inspired by convolutional neural networks (CNNs), we add skip connections into the GCN to pass the global feature directly across the hidden layers with a concatenation operation. Third, to consider the edge embedding feature, we propose an edge-based attention mechanism based on graph attention networks (GAT). Last, with the multiple aggregators and scalers of principal neighborhood aggregation (PNA) networks, we can further improve the modeling capacity of the novel GCN. The experimental results show that the proposed GCN model can achieve an average RMSE of 0.31 K and deliver a 2.6× speedup over the fast steady-state solver of open-source HotSpot based on SuperLU. More importantly, the GCN model demonstrates useful generalization and transfer capability. Our results show that the trained GCN can be directly applied, via inductive learning and without retraining, to predict the thermal maps of six unseen datasets with acceptable mean RMSEs of less than 0.67 K.
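Two of the abstract's ingredients — broadcasting a global feature (total power) to every node and re-injecting the input via a concatenation skip connection — can be shown in a bare-bones numpy forward pass. Weights are random and untrained, the graph and feature widths are invented, and the edge-attention and PNA components are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def gcn_layer(A_hat, H, W):
    """One GCN layer: normalized neighborhood aggregation, linear map, ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

n = 8                                        # nodes in the thermal resistance network
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                       # symmetric adjacency
np.fill_diagonal(A, 1.0)                     # self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt          # symmetrically normalized adjacency

X = rng.random((n, 4))                       # per-node features (power, area, ...)
g = np.full((n, 1), X[:, 0].sum())           # global feature: total power, broadcast
H0 = np.hstack([X, g])                       # every node sees the global context

W1 = rng.normal(size=(5, 16))
W2 = rng.normal(size=(16 + 5, 1))            # skip connection widens layer-2 input

H1 = gcn_layer(A_hat, H0, W1)
H1_skip = np.hstack([H1, H0])                # skip: concatenate the input back in
T_pred = A_hat @ H1_skip @ W2                # per-node temperature prediction
print(T_pred.shape)  # (8, 1)
```

Without the global column, each node's receptive field after two layers covers only its 2-hop neighborhood, which is why pure neighborhood aggregation struggles with chip-wide quantities like total power.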
2020 DAC
TP-GNN: A Graph Neural Network Framework for Tier Partitioning in Monolithic 3D ICs
Author: Sung Kyu Lim
Affiliation: School of ECE, Georgia Institute of Technology, Atlanta, GA
Abstract:
3D integration technology is one of the few options that can keep Moore's Law trajectory beyond conventional scaling. Existing 3D physical design flows fail to benefit from the full advantage that 3D integration provides. Particularly, current 3D partitioning algorithms do not comprehend technology and design-related parameters properly, which results in sub-optimal partitioning solutions. In this paper, we propose TP-GNN, an unsupervised graph-learning-based tier partitioning framework, to overcome this issue. Experimental results on 7 industrial designs demonstrate that our framework significantly improves the QoR of the state-of-the-art 3D implementation flows. Specifically, in OpenPiton, a RISC-V-based multi-core system, we observe 27.4%, 7.7% and 20.3% improvements in performance, wirelength, and energy-per-cycle respectively.
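The unsupervised tier-partitioning recipe — learn node embeddings from the netlist graph, then cluster them into tiers — can be sketched without a trained GNN. Here normalized adjacency powers stand in for learned embeddings, and a tiny 2-means routine assigns tiers; the graph, embedding choice, and cut metric are all illustrative, not TP-GNN's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy netlist graph: adjacency matrix of cell connectivity.
n = 12
A = (rng.random((n, n)) < 0.25).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)

# Embedding stand-in: rows of normalized adjacency powers capture
# 1-hop and 2-hop neighborhood structure per cell.
P = A / np.maximum(A.sum(1, keepdims=True), 1)
emb = np.hstack([P, P @ P])

def kmeans2(X, iters=20):
    """Tiny 2-means: cluster cell embeddings into two tiers."""
    c = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - c[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = X[lab == k].mean(0)
    return lab

tiers = kmeans2(emb)                         # tier assignment per cell (0 or 1)
cut = A[tiers == 0][:, tiers == 1].sum()     # edges crossing tiers ~ 3D via count
print(len(tiers), int(cut))
```

Clustering similar neighborhoods onto the same tier tends to keep tightly coupled logic together, which is the intuition behind cut (and thus 3D-via) reduction.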
2017 TVLSI
Application of Machine Learning for Optimization of 3-D Integrated Circuits and Systems
Author: Sung Joo Park, Bumhee Bae, Joungho Kim, Madhavan Swaminathan
Affiliation: Samsung Electronics Co., Ltd., Hwaseong, South Korea; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Abstract:
3-D integration helps improve the performance and density of electronic systems. However, since the electrical and thermal performance of 3-D integration are related to each other, their co-design is required. Machine learning, a promising approach in artificial intelligence, has recently shown promise for addressing engineering optimization problems. In this paper, we apply machine learning to the optimization of 3-D integrated systems, where the electrical and thermal performance need to be analyzed together to maximize performance. In such systems, modeling can be challenging due to the multiscale geometries involved, which increase the computation time per iteration. We show that machine learning can be applied to such systems so that multiple parameters can be optimized to achieve the desired performance using a minimum number of iterations. These results are compared with other promising optimization methods. The results show that, on average, 4.4%, 31.1%, and 6.9% improvements in temperature gradient, CPU time, and skew are possible using machine learning compared with the other methods.
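The "minimum number of iterations" argument is the standard surrogate-assisted loop: fit a cheap model to the simulations run so far, screen many candidates on the model, and spend the expensive simulation only on the model's pick. A minimal sketch with an invented two-parameter toy cost function standing in for a coupled electro-thermal simulation, and a least-squares quadratic standing in for the paper's learned model:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_eval(x):
    """Stand-in for a coupled electro-thermal simulation of one design point."""
    return (x[0] - 0.3) ** 2 + 2 * (x[1] - 0.7) ** 2  # toy cost surface

X = rng.random((8, 2))                               # initial design samples
y = np.array([expensive_eval(x) for x in X])

for _ in range(10):
    # Cheap surrogate: quadratic fit on features [1, x, x^2] via least squares.
    F = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    cand = rng.random((500, 2))                      # screen 500 candidates for free
    Fc = np.hstack([np.ones((500, 1)), cand, cand ** 2])
    best = cand[(Fc @ w).argmin()]
    X = np.vstack([X, best])                         # simulate only the winner
    y = np.append(y, expensive_eval(best))

print(X[y.argmin()].round(2))  # should land near the toy optimum (0.3, 0.7)
```

Only 18 "simulations" are spent in total, versus 5000+ if every screened candidate had been simulated directly — which is exactly the iteration-count saving the abstract reports.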
2016 Journal of Information and Communication Convergence Engineering
Machine Learning-Based Variation Modeling and Optimization for 3D ICs
Author: Sandeep Kumar Samal, Guoqing Chen, and Sung Kyu Lim
Affiliation: School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Abstract:
Three-dimensional integrated circuits (3D ICs) experience die-to-die variations in addition to the already challenging within-die variations. This adds design complexity and makes variation estimation and full-chip optimization even more challenging. In this paper, we show that the industry-standard advanced on-chip variation (AOCV) tables cannot be applied directly to 3D paths that span multiple dies. We develop a new machine learning-based model and methodology for accurate variation estimation of logic paths in 3D designs. Our model makes use of key parameters extracted from an existing GDSII 3D IC design and sign-off simulation database. Thus, it incurs no runtime overhead compared to AOCV analysis while achieving an average accuracy of 90% in variation evaluation. By using our model in a full-chip variation-aware 3D IC physical design flow, we obtain up to 16% improvement in critical path delay under variations, which is verified with detailed Monte Carlo simulations.
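The modeling pattern here — regress path-delay variation on features extracted from the design database, so that prediction at sign-off time is a cheap dot product — can be illustrated with synthetic data. The feature set (stage count, die crossings, average fanout), coefficients, and noise level below are all hypothetical, not the paper's extracted parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a GDSII/sign-off training database:
# per-path features [stage count, die crossings, avg fanout],
# label = path delay sigma from Monte Carlo sign-off.
X = rng.random((200, 3)) * [20, 4, 6]
true_w = np.array([0.8, 2.5, 0.3])           # toy ground truth: crossings dominate
y = X @ true_w + rng.normal(0, 0.5, 200)

# Closed-form least-squares fit; inference is then overhead-free.
Xb = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_sigma(stages, crossings, fanout):
    return np.array([stages, crossings, fanout, 1.0]) @ w

# A path crossing more dies should be predicted as more variation-prone.
print(predict_sigma(10, 3, 2) > predict_sigma(10, 0, 2))
```

This is why such a model can replace table lookups without runtime overhead: once trained offline, each path costs one small dot product during timing analysis.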
2015 ICCAD
Optimizing 3D NoC design for energy efficiency: A machine learning approach
Author: Krishnendu Chakrabarty
Affiliation: School of EECS, Washington State University, Pullman, WA, USA
Abstract:
Three-dimensional (3D) Network-on-Chip (NoC) is an emerging technology with the potential to achieve high performance and low power consumption for multicore chips. However, to fully realize this potential, we need to consider novel 3D NoC architectures. In this paper, inspired by the inherent advantages of small-world (SW) 2D NoCs, we explore the design space of SW network-based 3D NoC architectures. We leverage machine learning to intelligently explore the design space and optimize the placement of both planar and vertical communication links for energy efficiency. We demonstrate that the optimized 3D SW NoC designs perform significantly better than their 3D MESH counterparts. On average, the 3D SW NoC shows a 35% energy-delay-product (EDP) improvement over 3D MESH for the nine PARSEC and SPLASH2 benchmarks considered in this work. The highest performance improvement, 43%, was achieved for RADIX. Interestingly, even after reducing the number of vertical links by 50%, the optimized 3D SW NoC performs 25% better than the fully connected 3D MESH, a strong indication of the effectiveness of our optimization methodology.
Keywords: AI+EDA; 3D IC