Our workshop welcomes posters on the following topics:
- Graph Neural Networks for performance, power, and area (PPA) prediction and optimization at all stages of the EDA pipeline, including HLS, RTL/logic synthesis, and physical design;
- LLM-based code analysis for PPA prediction and optimization at all stages of the EDA pipeline;
- Multi-modality ML models (e.g., LLM + GNN) for PPA prediction and optimization at all stages of the EDA pipeline;
- Domain and task transfer for HLS performance prediction with new kernels and new versions of EDA tools at different stages;
- ML methods to optimize circuit aging and reliability;
- ML for design technology co-optimization (DTCO);
- ML for analog, mixed-signal, and RF IC designs;
- Reinforcement Learning for Design Space Exploration (DSE);
- LLM-based design generation;
- Active learning and importance sampling of design points;
- AI for compiler and code transformation;
- Benchmark datasets.
Submission Guidelines
- Formatting: The submission should be a 3-page PDF file following the NeurIPS 2024 template. It should contain the title, authors (with affiliations and email addresses), a short abstract, a background introduction, and a summary of the proposed problem, methods, and main results. References are excluded from the page limit.
- Originality: Given the forward-looking nature of the workshop, submissions are encouraged to present unpublished or recently accepted work.
- Poster presentation: All accepted submissions will be presented as posters on the workshop day.
- Submission link: Please submit through this submission link.
- Submission deadline: November 4, 2024, 11:59 PM UTC-12 (AoE)
- Notification date: November 10, 2024 (or earlier, as soon as reviewing is complete)
- Late submission deadline: November 11, 2024, 11:59 PM UTC-12 (AoE) (late submissions will be reviewed promptly after this deadline)
Accepted Posters
- "Hybrid Graph Representation and Learning Framework for High-Level Synthesis Design Space Exploration," Pouya Taghipour, Eric Granger, Yves Blaquiere (École de Technologie Supérieure). ID: 1
- "Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis," Weikai Li, Ding Wang, Zijian Ding, Atefeh Sohrabizadeh, Zongyue Qin, Jason Cong, Yizhou Sun (University of California - Los Angeles). ID: 2
- "CTGAN-Bandit: A Conditional Tabular GAN Model Leveraging Upper Confidence Bound Estimators for Hardware Design Verification," Lorenzo Ferretti, Surya Teja Bandlamudi, Nihar Athreyas, Michael Yan, Vikram Narayan, Samir Mittal (Micron Technology). ID: 3
- "AnalogCoder: Analog Circuit Design via Training-Free Code Generation," Yao Lai1, Sungyoung Lee2, Guojin Chen3, Souradip Poddar2, Mengkang Hu1, David Z. Pan2, Ping Luo1 (1The University of Hong Kong, 2The University of Texas at Austin, 3The Chinese University of Hong Kong). ID: 4
- "FloorSet: Benchmark for AI-driven Floorplanning," Uday Mallappa, Hesham Mostafa, Mikhail Galkin, Mariano Phielipp, Somdeb Majumdar (Intel Labs). ID: 5
- "Inverse Patterning Technology with Transformer-Based Optimization," Sooyong Lee1, Guandao Yang2, Suyeon Choi2, Seongtae Jeong1, Gordon Wetzstein2 (1Samsung Electronics, 2Stanford University). ID: 6
- "Coverage Closure with an Agentic LLM Workflow," Ximin Shan, Rahul Krishnamurthy, Jayanth Raman, Nick Cheng, Vikram Narayan, Samir Mittal (Micron Technology). ID: 7
- "AI to Reduce Wasted Time in FPGA Routing," Andrew David Gunter, Steven Wilton (The University of British Columbia). ID: 8
- "Deep Learning Enabled Design of RF/mmWave Circuits," Emir Ali Karahan1, Jonathan Zhou1, Zheng Liu2, Kaushik Sengupta1 (1Princeton University, 2Texas Instruments). ID: 9
- "Multi-Objective Bayesian Optimization for Efficient HDnn-PIM Software-Hardware Co-Design with Metric Constraints," Chien-Yi Yang1, Tim Chen1, Minxuan Zhou2, Flavio Ponzina1, Dongxia Wu1, Raid Ayoub3, Pietro Mercati3, Mahesh Subedar3, Yian Ma1, Rose Yu1, Tajana Rosing1 (1University of California - San Diego, 2Illinois Institute of Technology, 3Intel). ID: 10
- "NetVGE: Netwise Hardware Trojan Detection at RTL Using Variable Dependency and Knowledge Graph Embedding," Yaroslav Popryho, Debjit Pal, Inna Partin-Vaisband (University of Illinois Chicago). ID: 11
- "PromptV: Leveraging LLM-Powered Multi-Agent Prompt Learning for High-Quality Verilog Generation," Zhendong Mi1, Renming Zheng1, Haowen Zhong2, Yue Sun3, Shaoyi Huang1 (1Stevens Institute of Technology, 2University of Washington, 3Lehigh University). ID: 12
- "LLM-VeriPPA: Power, Performance, and Area-aware Verilog Code Generation and Refinement with Large Language Models," Kiran Thorat1, Jiahui Zhao1, Amit Hasan1, Yaotian Liu2, Xi Xie1, Hongwu Peng1, Bin Lei1, Jeff Zhang2, Caiwen Ding3 (1University of Connecticut, 2Arizona State University, 3University of Minnesota). ID: 13
- "Simulator Generation via Large Language Models: A Custom Programming Interface Approach," Jihoon Hong, Hyewon Suh, Yonggan Fu, Yingyan (Celine) Lin (Georgia Institute of Technology). ID: 14
- "Balor with Merlin Compiler: Exploiting Diversity of Kernel Representation and Multi-Dataset Learning," Emmet Murphy, Lana Josipovic (ETH Zürich). ID: 15
- "VeriDPO: Exploring Alignment for Verilog Designs," Daniel Kiesewalter, Luca Valente, Andrea Bonetti, Malte J. Rasch, Lorenzo Servadei (Sony AI). ID: 16
- "PVT-Goal: Solving PVT-Aware Transistor Sizing Problem with Goal-Conditioned Reinforcement Learning Framework," Seunggeun Kim, David Z. Pan (The University of Texas at Austin). ID: 17
- "Agentic-HLS: An Agentic Reasoning Based High-Level Synthesis System Using Large Language Models," Ali Emre Oztas, Zheyu Liu, Mahdi Jelodari (The University of Manchester). ID: 18
- Anonymous. ID: 19
- "An Efficient Transformer and Genetic Algorithm Based HLS Design Space Exploration," Yujie Yan, Guanhua Chen, Chang Wu (Fudan University). ID: 20
- "MAGE: A Multi-Agent Engine for Automated RTL Code Generation," Yujie Zhao, Hejia Zhang, Hanxian Huang, Zhongming Yu, Jishen Zhao (University of California San Diego). ID: 21