
Hongguang Chen

chenhon@chalmers.se

+46-764510776

About Me

Hi! I'm Hongguang Chen, a PhD student at Chalmers University of Technology, focusing on High-Performance Computing. I have a strong background in AI, robotics, and computer vision, with hands-on experience in both academia and industry. I enjoy building systems that bridge hardware and software, and I am passionate about sharing knowledge through open-source projects and education.

2025/09 - Present
Chalmers University of Technology - Gothenburg, Sweden
PhD in Computer Science and Engineering (focus on High-Performance Computing and RISC-V).
About Chalmers University of Technology
2024/10 - 2025/02
University of Tokyo - Tokyo, Japan
Exchange student program.
About University of Tokyo
2023/08 - Present
Chalmers University of Technology - Gothenburg, Sweden
Master's in High-Performance Computing, focusing on AI and robotics.
About Chalmers
Chalmers Logo
Currently pursuing a master's degree at Chalmers, specializing in high-performance computing, AI, and robotics.
2021/04 - 2023/08
Inceptio Technology - Shanghai
Worked on autonomous truck R&D, responsible for implementing key technologies.
About Inceptio
Autonomous Truck
Developed autonomous truck systems at Inceptio, gaining hands-on experience with intelligent transportation technology.
2019/12 - 2021/04
SenseTime EIG - Shanghai
Joined SenseTime to work on educational robotics projects.
About SenseTime
Educational Robot
Worked on Rover Mini and other educational robotics projects, promoting AI and programming education.
2019/07 - 2019/12
South China Normal University & JIMI Technology
Completed a Bachelor's in Communication Engineering; interned at JIMI, working on smart pipeline and automated warehouse systems.
About SCNU About JIMI
JIMI Project
Participated in machine-vision recognition projects during the JIMI internship, strengthening automation and algorithm skills.
2018
SenseTime MIG - Shenzhen
Research intern, participated in AI project development.
About SenseTime
2018 Internship
Interned at SenseTime Shenzhen, participated in AI prototype development and gained valuable engineering experience.
TinyML Demo

TinyML - A MiniGPT Pure C++ Implementation

A minimal GPT-like model implemented in pure C++, reproducing the TinyStories-33M architecture. Features a custom tensor library (MTB), a neural network framework (MNN), and GPT-Neo style blocks (GNEO) with a KV-caching mechanism.

C++ GPT Transformer Deep Learning Neural Networks
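To illustrate the caching idea the project uses, here is a minimal, hypothetical C++ sketch of one single-head causal attention step with a KV cache (names like `KVCache` and `attend_step` are invented for this sketch and are not TinyML's actual MTB/MNN/GNEO API): each decoding step appends the new token's key and value to the cache and attends over all cached positions, so earlier keys and values are never recomputed.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Cached keys and values for all tokens decoded so far.
struct KVCache {
    std::vector<std::vector<float>> keys;
    std::vector<std::vector<float>> values;
};

// One decoding step: append the new key/value, then compute
// softmax(q·K / sqrt(d)) · V over every cached position.
std::vector<float> attend_step(const std::vector<float>& q,
                               const std::vector<float>& k,
                               const std::vector<float>& v,
                               KVCache& cache) {
    cache.keys.push_back(k);
    cache.values.push_back(v);
    const std::size_t d = q.size();
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));

    // Scaled dot-product score against every cached key.
    std::vector<float> scores;
    for (const auto& key : cache.keys) {
        float dot = 0.0f;
        for (std::size_t i = 0; i < d; ++i) dot += q[i] * key[i];
        scores.push_back(dot * scale);
    }

    // Numerically stabilized softmax over the scores.
    float mx = scores[0];
    for (float s : scores) mx = std::max(mx, s);
    float sum = 0.0f;
    for (float& s : scores) { s = std::exp(s - mx); sum += s; }
    for (float& s : scores) s /= sum;

    // Output is the attention-weighted sum of cached values.
    std::vector<float> out(d, 0.0f);
    for (std::size_t t = 0; t < cache.values.size(); ++t)
        for (std::size_t i = 0; i < d; ++i)
            out[i] += scores[t] * cache.values[t][i];
    return out;
}
```

With a single cached token the softmax weight is 1, so the output equals that token's value vector; subsequent steps blend all cached values.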
JLC Architecture

JLC - Javalette Compiler

A comprehensive compiler for a toy language combining C and Java features. Includes BNFC-generated frontend, multi-stage type checker, and LLVM backend with support for classes, structs, enums, arrays, and runtime polymorphism.

Compiler LLVM Type Checker BNFC Language Design
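As a flavor of what a type-checking stage does, here is a toy C++ sketch of a checker for a tiny expression language with integers, booleans, `+`, and `==` (the AST and names here are invented for illustration and do not reflect JLC's actual internals): `+` requires two `int` operands and yields `int`, while `==` requires matching operand types and yields `bool`.

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <utility>

enum class Type { Int, Bool };

// A minimal expression AST: literals plus two binary operators.
struct Expr {
    enum class Kind { IntLit, BoolLit, Add, Eq } kind;
    std::shared_ptr<Expr> lhs, rhs;
};

// Convenience constructor for AST nodes.
std::shared_ptr<Expr> mk(Expr::Kind k,
                         std::shared_ptr<Expr> l = nullptr,
                         std::shared_ptr<Expr> r = nullptr) {
    return std::make_shared<Expr>(Expr{k, std::move(l), std::move(r)});
}

// Recursively infer the type of an expression, rejecting ill-typed trees.
Type typecheck(const Expr& e) {
    switch (e.kind) {
        case Expr::Kind::IntLit:  return Type::Int;
        case Expr::Kind::BoolLit: return Type::Bool;
        case Expr::Kind::Add:
            if (typecheck(*e.lhs) != Type::Int || typecheck(*e.rhs) != Type::Int)
                throw std::runtime_error("'+' expects int operands");
            return Type::Int;
        case Expr::Kind::Eq:
            if (typecheck(*e.lhs) != typecheck(*e.rhs))
                throw std::runtime_error("'==' expects matching operand types");
            return Type::Bool;
    }
    throw std::runtime_error("unreachable");
}
```

A real multi-stage checker like JLC's additionally handles declarations, scoping, classes, and control flow, but the recursive inference pattern is the same.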
Smoke Simulator Demo

SmokeSimulator - Real-time GPU Smoke Simulation

A real-time GPU-accelerated smoke simulator using CUDA, implementing a grid-based semi-Lagrangian method with linear interpolation. Features ray-marching rendering with self-shadowing and voxelization support, achieving 30 fps at a 50³ grid resolution.

CUDA OpenGL Computer Graphics Real-time Fluid Simulation
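The core advection scheme named above can be sketched in a few lines. This is a 1D CPU version for clarity (the real simulator runs the 3D equivalent per-cell on the GPU with CUDA): for each cell, trace the velocity backwards one timestep and sample the old field at the departure point with linear interpolation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Semi-Lagrangian advection of a 1D scalar field with constant
// velocity: out[i] = field sampled at the backtraced position.
std::vector<float> advect(const std::vector<float>& field,
                          float velocity, float dt, float dx) {
    const int n = static_cast<int>(field.size());
    std::vector<float> out(n);
    for (int i = 0; i < n; ++i) {
        // Backtrace: where did the material now in cell i come from?
        float x = static_cast<float>(i) - velocity * dt / dx;
        // Clamp to the grid, then linearly interpolate between neighbors.
        x = std::max(0.0f, std::min(x, static_cast<float>(n - 1)));
        int i0 = static_cast<int>(std::floor(x));
        int i1 = std::min(i0 + 1, n - 1);
        float t = x - static_cast<float>(i0);
        out[i] = (1.0f - t) * field[i0] + t * field[i1];
    }
    return out;
}
```

Because each output cell only reads the previous field, every cell is independent, which is exactly why the method maps so well onto one CUDA thread per cell.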
Unified Particles Demo

Unified Particles - Physics Simulation System

A comprehensive particle-based physics simulation system supporting fluid dynamics, cloth simulation, and interactive particle systems. Features real-time rendering and interactive controls for dynamic scene manipulation.

Physics Simulation Particle Systems Fluid Dynamics Cloth Simulation Real-time
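Unified particle solvers for cloth and fluids are commonly built from position-based constraints; as a hedged illustration (the project's actual solver details may differ), here is one position-based distance constraint in C++, which projects two equal-mass particles so they sit exactly a rest length apart.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<float, 2>;

// Position-based distance constraint: move both particles by half
// the correction along the line between them (equal masses assumed).
void solve_distance(Vec2& a, Vec2& b, float rest) {
    float dx = b[0] - a[0], dy = b[1] - a[1];
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-8f) return;  // degenerate: particles coincide
    float corr = 0.5f * (len - rest) / len;
    a[0] += corr * dx; a[1] += corr * dy;
    b[0] -= corr * dx; b[1] -= corr * dy;
}
```

A cloth patch is then just a grid of particles with one such constraint per edge, iterated a few times per frame until the mesh settles.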

Robotics & Automation Projects

A collection of robotics and automation projects including RoverX autonomous robot with advanced navigation and AI capabilities, Ranger search & rescue robot for hazardous environments, and industrial automation systems for manufacturing processes.

Robotics AI Navigation Rescue Automation IoT

AMST: Alternating Multimodal Skip Training

H. M. A. H. Silva, H. Chen, Selpi

We propose AMST, a novel multimodal training strategy that adaptively skips updates across modalities to balance learning dynamics. AMST improves stability, reduces unnecessary computation, and achieves state-of-the-art efficiency on multiple benchmarks. This repository provides the official implementation with a refactored, clean codebase.

Multimodal Learning Training Efficiency
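To make the skipping idea concrete, here is a deliberately simplified C++ toy of a fixed alternating skip schedule, where each modality's parameters are updated only every `period` steps. This is NOT the AMST algorithm, whose skipping is adaptive rather than fixed; it only illustrates the general notion of per-modality update schedules.

```cpp
#include <cassert>
#include <vector>

// A modality is updated at this step iff the step index is a
// multiple of its skip period.
bool update_this_step(int step, int period) {
    return step % period == 0;
}

// Return the indices of all modalities receiving gradient updates
// at the given step, given one skip period per modality.
std::vector<int> active_modalities(int step, const std::vector<int>& periods) {
    std::vector<int> active;
    for (int m = 0; m < static_cast<int>(periods.size()); ++m)
        if (update_this_step(step, periods[m])) active.push_back(m);
    return active;
}
```

With periods {1, 2, 3}, the fast modality updates every step while the slower ones update every second and third step, skipping the rest of their backward passes.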
View on GitHub Read Pre-print