To be held in conjunction with MICRO 2025
Workshop Date: October 18, 2025

The goal of the workshop is to provide a forum for researchers and practitioners to exchange ideas and discuss the latest advances in computer architecture modeling and simulation. Modeling and simulation techniques are of vital importance to ongoing advances in microarchitecture, as they are essential tools for improving system performance, efficiency, and reliability.
The workshop will cover various aspects of computer architecture modeling and simulation, including but not limited to:
The workshop invites submissions of original work in the form of full papers (up to 6 pages, references not included) covering all aspects of computer architecture modeling and simulation. Submissions will be peer-reviewed, and accepted papers will be included in the workshop proceedings.
Full paper submissions must be in PDF format for US letter-size or A4 paper. They must not exceed 6 pages (excluding references, which are unlimited) in the standard ACM two-column conference format (review mode, with page numbers, 9pt font minimum). Shorter papers that express their ideas clearly are also welcome. Authors may choose whether to reveal their identities in the submission.
We use the ACM Primary article template. Templates for ACM format are available for Microsoft Word and LaTeX at https://www.acm.org/publications/proceedings-template. LaTeX users should use the sigplan example in the template folder. For Overleaf users, the template is also available at https://www.overleaf.com/latex/templates?q=acm+sigconf .
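For LaTeX users, a skeleton along the following lines should satisfy the review-mode, page-number, and 9pt requirements. This is only a sketch assuming the acmart class from the template linked above; the class options shown and all placeholder names are illustrative, so please defer to the sigplan sample in the official template.

```latex
% Minimal sketch of a submission skeleton, assuming the acmart class from the
% ACM proceedings template linked above. Options and placeholders are
% illustrative; the sigplan sample in the template folder is authoritative.
\documentclass[sigplan,review,9pt]{acmart} % two-column review mode, 9pt font
\settopmatter{printfolios=true}            % print page numbers, as required
% Add the 'anonymous' option if you choose not to reveal your identity:
% \documentclass[sigplan,review,anonymous,9pt]{acmart}

\begin{document}

\title{Your CAMS Submission Title}
\author{Author Name}
\affiliation{\institution{Your Institution}\country{Your Country}}

\begin{abstract}
One-paragraph abstract.
\end{abstract}

\maketitle

\section{Introduction}
Body text: up to 6 pages, excluding references.

\bibliographystyle{ACM-Reference-Format}
\bibliography{refs} % refs.bib is a placeholder bibliography file

\end{document}
```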
Papers presented at this workshop will not be included in the ACM or IEEE digital libraries. Therefore, papers submitted to this event may be submitted to other venues without restriction.
At least one author of each accepted paper is expected to present in person at the event. We understand that travel can be difficult in the post-pandemic era; in exceptional cases, we will allow remote or pre-recorded presentations.
Starting with CAMS 2024, we have organized a special session in which tool creators can announce new releases and amplify their exposure. These talks introduce new simulators, or new releases of existing simulators, and highlight their new features and improvements.
Last year, all of these talks were invited. This year, we are soliciting talks from the broader community. Please submit a 1-page abstract (including references) that states which simulator and which new version you will present, and what new features you want to highlight. Abstracts will not be peer-reviewed; the workshop chairs will make the selection, based mainly on relevance and potential interest to the audience.
We encourage in-person talks, but remote talks are also acceptable.
Abstract submissions should follow the same format as paper submissions (see above), except for the 1-page limit.
All times are in Korea Standard Time (KST, UTC+9).
| Time | Event |
| --- | --- |
| 8:00 AM – 8:10 AM | Opening Remarks |
| Session 1: Simulation for AI | |
| 8:10 AM – 9:00 AM | [Keynote] Accelerating Accelerator Research: The ONNXim and PyTorchSim Story<br>Speaker: Gwangsun Kim |
| 9:00 AM – 9:20 AM | [Paper] NetTLMSim: A Virtual Prototype Simulator for Large-Scale Accelerator Networks<br>Junsu Heo, Shinyoung Kim, Hyeseong Shin, Jaesuk Lee (Konkuk University), Sungkyung Park (Pusan National University) and Chester Sungchung Park (Konkuk University) |
| 9:20 AM – 9:40 AM | [Paper] A comprehensive analysis and modeling of spill operations in vector processing units<br>Hossein Mokhtarnia, Osman Unsal and Adrian Cristal Kestelman (Barcelona Supercomputing Center) |
| 9:40 AM – 10:00 AM | Discussion |
| 10:00 AM – 10:30 AM | Coffee Break |
| Session 2: AI for Simulation | |
| 10:30 AM – 10:50 AM | [Paper] gem5 Co-Pilot: AI Assistant Agent for Architectural Design Space Acceleration [Remote]<br>Zuoming Fu (Cornell University), Alexander Manley (University of Kansas) and Mohammad Alian (Cornell University) |
| 10:50 AM – 11:10 AM | [Paper] DaisenBot: Human-AI Collaboration in GPU Performance Analysis with Multi-Modal AI Assistant [Remote]<br>Enze Xu, Jeremy Coonley, Daoxuan Xu and Yifan Sun (William & Mary) |
| 11:10 AM – 11:30 AM | [Talk] ML-accelerated microarchitecture simulation: Insights, Challenges, and Opportunities [Remote]<br>Speaker: Santosh Pandey |
| 11:30 AM – 11:50 AM | Discussion |
| 12:00 PM – 1:00 PM | Lunch Break |
| Session 3: Network on Chip and System-level Simulation | |
| 1:00 PM – 2:00 PM | [Keynote] Reflections on Building a High-Performance Microarchitectural Simulation Framework<br>Speaker: Heiner Litz |
| 2:00 PM – 2:20 PM | [Paper] An End-to-End Evaluation Framework for NoC IP: Performance Analysis to Verification Support<br>Chanwoo Song and Hyun-Gyu Kim (Openedges Technology) |
| 2:20 PM – 2:40 PM | [Paper] Latency-Aware QoS Optimization of XY-YX Routing in NoCs via Analytical Latency Estimation<br>Jongwon Oh, Seongmo An, Jinyoung Shin and Seung Eun Lee (Seoul National University of Science and Technology) |
| 2:40 PM – 3:00 PM | [Paper] Enabling Realistic Virtualized Cloud Workload Evaluation in RISC-V<br>Nikos Karystinos, George-Marios Fragkoulis and Dimitris Gizopoulos (University of Athens) |
| 3:00 PM – 3:30 PM | Coffee Break |
| Session 4: Simulation Design Methods | |
| 3:30 PM – 3:50 PM | [Paper] Phalanx: A Processor Simulator Based on the Entity Component System Architecture<br>Toshiki Maekawa (Nagoya Institute of Technology), Akihiko Odaki (The University of Tokyo), Toru Koizumi, Tomoaki Tsumura (Nagoya Institute of Technology) and Ryota Shioya (The University of Tokyo) |
| 3:50 PM – 4:10 PM | [Tool Release Talk] Mess Simulator: New Capabilities in the Latest Release<br>Pouya Esmaili-Dokht, Ashkan Asgharzadeh, Petar Radojkovic and Eduard Ayguadé (Barcelona Supercomputing Center) |
| 4:10 PM – 4:30 PM | [Tool Release Talk] Sniper 9.0: Faster Automated Sampling and Virtuoso Integration<br>Speaker: Trevor E. Carlson |
| 4:30 PM – 4:50 PM | Discussion |
| 4:50 PM – 5:00 PM | Closing Remarks |

Speaker: Gwangsun Kim
Title: Accelerating Accelerator Research: The ONNXim and PyTorchSim Story
Abstract:
Recently, AI has been advancing at an unprecedented pace, driving transformative
changes across the world. However, state-of-the-art AI algorithms have
become increasingly demanding in both compute and memory resources,
posing significant challenges in system design, particularly in the
development of AI accelerators or Neural Processing Units (NPUs). Thus,
as technology scaling slows down, it has become even more critical to
develop innovative NPU hardware architectures and software technologies
that can fully exploit hardware capabilities. To this end, accurate,
fast, and versatile full-stack NPU simulators are essential for effective
design-space exploration of both hardware and software. Yet achieving
these goals simultaneously is difficult, as the aforementioned requirements
often conflict with one another.

In this talk, I will share my experience working with my students to
build two open-source NPU simulators, ONNXim and PyTorchSim, developed
to address these challenges. ONNXim is a fast, cycle-accurate NPU simulator
that takes DNNs in ONNX format to evaluate the inference performance
of NPUs. Building on ONNXim, PyTorchSim extends simulation capability
to DNNs written in PyTorch, enabling fast and cycle-accurate NPU simulation
for both inference and training, with integrated compiler support. I
will discuss the motivation behind ONNXim, how it evolved into PyTorchSim,
and the key insights and lessons learned throughout their development.
I will also briefly talk about future directions for PyTorchSim.
Bio: Gwangsun Kim is an Associate Professor in the Department of Computer Science and Engineering at POSTECH, South Korea. He is also currently a Visiting Academic at Arm, based in Cambridge, UK. Before joining POSTECH, he worked as a Senior Research/Performance Engineer at Arm from 2016 to 2018. He received his Ph.D. and M.S. degrees in Computer Science from KAIST in 2016 and 2012, respectively, and his B.S. degree in Computer Science and Engineering and Electrical Engineering from POSTECH in 2010. His work received the Best Paper Award at PACT 2013 and was nominated for the Best Paper Award at PACT 2016. His research interests include various areas of computer systems, such as CPU/GPU/NPU architectures, near-data processing, systems for AI, memory systems, networking, and simulation methodology.

Speaker: Heiner Litz
Title: Reflections on Building a High-Performance Microarchitectural Simulation Framework
Abstract:
Microarchitectural simulation lies at the heart of computer-architecture
research: it enables hypothesis validation, design-space exploration,
and performance projection of next-generation processors. Credible simulation
studies demand faithful models of modern out-of-order CPUs, representative
workloads executed at scale, and productive workflows that combine high
simulation speed with robust automation and analysis.

Developing such a framework is a daunting challenge, as modern CPUs
are among the most intricate engineering systems ever created. In this
talk, I will share insights from leading the development of Scarab,
a high-performance microarchitectural simulator that combines detailed
modeling accuracy, fast simulation throughput, and an industry-grade
infrastructure. I will discuss lessons learned in (1) designing accurate
and realistic models, (2) performing rigorous calibration and validation,
(3) applying strong software-engineering practices, and (4) building
scalable and maintainable simulation infrastructure. Together, these
principles have made Scarab one of the most capable and extensible open-source
CPU simulators available today.
Bio: Heiner Litz is an Associate Professor of Computer Science and Engineering at the University of California, Santa Cruz, where he holds the Kumar Malavalli Endowed Chair and directs the Center for Research in Storage Systems (CRSS). His research spans computer architecture, datacenter systems, storage, and scalable AI infrastructure. His contributions have been recognized with an NSF CAREER Award, the Intel Outstanding Researcher Award, multiple Best Paper Awards, and Google and Meta Faculty Awards. Litz received his Ph.D. in Computer Engineering from Mannheim University and previously held research appointments at the University of Heidelberg, Stanford, Google, and MIT.
| Yifan Sun | Trevor E. Carlson | Enze Xu |
| --- | --- | --- |
| Chair | Chair | Web Chair |
| William & Mary | National University of Singapore | William & Mary |
In this workshop, we are experimenting with a program committee (PC) led by PhD students and practitioners. We believe that PhD students and practitioners are the end users of simulation and performance modeling tools and hence know these tools best. We will report on our experience during the workshop.