Scalable Architecture Lab
Scalable Architecture Lab (SARCHLAB) is a research lab in the Computer Science Department at William & Mary, led by Dr. Yifan Sun. SARCHLAB aims to develop scalable and efficient computer architectures, emphasizing both humans' ability to understand the architecture and the architecture's ability to serve humans.
Research Topics
Explainable Architecture
The ever-increasing complexity of chip designs makes them hard for humans to understand. If designers cannot fully understand an architecture, the chip will inevitably contain hardware bugs, performance bottlenecks, reliability issues, and security vulnerabilities. Therefore, we develop data visualization tools that help designers make well-informed, evidence-based decisions.
Computer Architecture Simulation
A cycle-based simulator is an essential tool for computer architecture researchers to validate their ideas. The community needs simulators that are easy to learn, highly flexible, highly performant, and highly accurate. We are honored to take on this challenge and contribute to the community with the Akita simulator framework and the MGPUSim multi-GPU simulator.
Multi-GPU and Wafer-Scale System Design
Single-GPU systems struggle to meet the performance requirements of modern workloads, so researchers have turned to large-scale multi-GPU systems to achieve extreme performance. However, inter-GPU communication can easily become the performance bottleneck. We design architectural and system-level solutions for multi-GPU systems that reduce inter-GPU traffic and improve performance.
Sponsors
National Science Foundation
Advanced Micro Devices
Lab News
- [Jul 2024] Our paper "Looking into the Black Box: Monitoring Computer Architecture Simulations in Real-Time with AkitaRTM" has been accepted by the 57th IEEE/ACM International Symposium on Microarchitecture (MICRO '24')! Congrats Ali!
- [Apr 2024] Our paper "Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study" has been accepted by the Tenth ACM Conference on Learning @ Scale (L@S '24') !
- [Apr 2024] Our project "Enabling Graphics Processing Unit Performance Simulation for Large-Scale Workloads with Lightweight Simulation Methods" has been awarded by NSF! Thanks to NSF and my collaborator Adwait Jog and Sreepathi Pai.
- [Mar 2024] Our First Lightweight Community Workshop on Akita and MGPUSim has been successfully organized. Thank you all for participating in the event.
- [Jan 2024] Our paper "Impact of Raindrops on Camera-Based Detection in Software-Defined Vehicles" has been accepted by the 2nd IEEE International Conference on Mobility: Operations, Services, and Technologies (MOST '24') !
- [Jan 2024] Sabila Al Jannat has achieved Ph.D. candidacy. Congrats!
- [Nov 2023] Our paper "Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study" has been accepted by TVCG!
- [Sep 2023] Our paper "Path Forward Beyond Simulators: Fast and Accurate GPU Execution Time Prediction for DNN Workloads" has been accepted by MICRO 2023! Congrats Ying!
- [Sep 2023] Our paper "Photon: A Fine-grained Sampled Simulation Methodology for GPU Workloads" has been accepted by MICRO 2023!
- [Sep 2023] Daoxuan Xu has joined our lab as a Ph.D. student. Welcome!
- [Mar 2023] Our NSF CCRI proposal "Enabling Computer Architecture Simulation as a Service" has been awarded! Thanks to NSF and my collaborator Kate Isaacs!
- [Feb 2023] Our NSF CRII proposal "Building Explainable Architecture with Simulation and Visualization Techniques" has been awarded! Thank you, NSF!
- [Dec 2022] Our book "Accelerated Computing with HIP" has been published! It is available on Amazon and Barnes & Noble.
- [Aug 2022] Our CHIP dataset has been highlighted by the "Data Is Plural" column of FiveThirtyEight.