Muhammed Öz, Jasmin Hörter, Kaleb Phipps, Charlotte Debus, Achim Streit, Markus Götz 3/31/2026

Differentiable Power-Flow Optimization

Differentiable power-flow optimization using neural networks to replace Newton-Raphson methods for scalable grid simulation.
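For context on the baseline being replaced, the classical Newton-Raphson power-flow iteration solves for bus voltages by repeatedly linearizing the power-mismatch equations. A minimal sketch on a toy two-bus system (one slack bus, one load bus; all values illustrative and not from the paper), with a finite-difference Jacobian for brevity:

```python
import math

# Toy 2-bus power flow solved with Newton-Raphson, the classical baseline
# that differentiable/neural approaches aim to replace. Illustrative values.

g, b = 1.0, -10.0              # series line admittance y = g + jb (per unit)
V1 = 1.0                       # slack bus: fixed |V| = 1.0 pu, angle 0
P_sched, Q_sched = -0.5, -0.2  # scheduled injections at bus 2 (a load)

def mismatch(x):
    """Power mismatch f(x) = [P2(x) - P_sched, Q2(x) - Q_sched]."""
    th, V2 = x
    # Y-bus entries for the single line: Y22 = g + jb, Y21 = -(g + jb)
    P2 = V2 * V2 * g + V2 * V1 * (-g * math.cos(th) - b * math.sin(th))
    Q2 = -V2 * V2 * b + V2 * V1 * (-g * math.sin(th) + b * math.cos(th))
    return [P2 - P_sched, Q2 - Q_sched]

def newton(x, tol=1e-10, max_iter=20, h=1e-7):
    for _ in range(max_iter):
        f = mismatch(x)
        if max(abs(v) for v in f) < tol:
            break
        # Finite-difference Jacobian J[i][j] = d f_i / d x_j
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            fp = mismatch(xp)
            for i in range(2):
                J[i][j] = (fp[i] - f[i]) / h
        # Solve the 2x2 Newton step J dx = -f by Cramer's rule
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-f[0] * J[1][1] + f[1] * J[0][1]) / det
        dx1 = (-f[1] * J[0][0] + f[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

theta2, V2 = newton([0.0, 1.0])  # "flat start" initial guess
```

Each Newton step requires forming and factorizing a Jacobian, which is what limits scalability on large grids and what a learned, differentiable surrogate sidesteps.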

He Du, Qiming Ge, Jiakai Hu, Aijun Yang, Zheng Cai, Zixian Huang, Sheng Yuan, Qinxiu Cheng, Xinchen Xie, Yicheng Chen, Yining Li, Jiaxing Xie, Huanan Dong, Yaguang Wu, Xiangjun Huang, Jian Yang, Hui Wang, Bowen Zhou, Bowen Li, Qipeng Guo, Kai Chen 3/31/2026

Kernel-Smith: A Unified Recipe for Evolutionary Kernel Optimization

Framework for evolutionary GPU kernel optimization that combines an evaluation-driven agent with evolutionary search for operator generation.

Hongtao Wu, Boyun Zheng, Dingjie Song, Yu Jiang, Jianfeng Gao, Lei Xing, Lichao Sun, Yixuan Yuan 3/31/2026

Towards a Medical AI Scientist

Medical AI Scientist system that autonomously generates hypotheses, conducts experiments, and writes manuscripts in clinical medicine.

Dimitris Bertsimas, Agni Orfanoudaki 3/31/2026

Algorithmic Insurance

Paper examining financial insurance mechanisms for AI system failures and algorithmic liability in high-stakes domains.

Qiao Yuan, Sheng-Uei Guan, Pin Ni, Tianlun Luo, Ka Lok Man, Prudence Wong, Victor Chang 3/31/2026

Continual Graph Learning: A Survey

Survey on continual graph learning covering incremental learning from streaming graph data with experience and generative replay approaches.

Jun Liu, Zhenglun Kong, Peiyan Dong, Changdi Yang, Tianqi Li, Hao Tang, Geng Yuan, Wei Niu, Wenbin Zhang, Pu Zhao, Xue Lin, Dong Huang, Yanzhi Wang 3/31/2026

Structured Agent Distillation for Large Language Model

Structured Agent Distillation compresses large LLM-based ReAct agents into smaller models while preserving reasoning and action consistency.

Zhibin Wang, Rui Ning, Chao Fang, Zhonghui Zhang, Xi Lin, Shaobo Ma, Mo Zhou, Xue Li, Zhongfeng Wang, Chengying Huan, Rong Gu, Kun Yang, Guihai Chen, Sheng Zhong, Chen Tian 3/31/2026

CoDec: Prefix-Shared Decoding Kernel for LLMs

CoDec kernel optimizes LLM decoding by sharing prefix computation across multiple prompts to reduce memory-intensive KV cache access.
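The underlying idea of prefix sharing can be illustrated with a toy cache: the KV entries for a common prompt prefix are computed once and referenced by every sequence, rather than recomputed or duplicated per request. This is a conceptual sketch only, not the CoDec kernel; `compute_kv` is a hypothetical stand-in for a transformer layer's key/value projection.

```python
# Conceptual sketch of prefix-shared KV caching during LLM decoding.
# The shared-prefix KV block is materialized once; each sequence keeps
# only its own suffix entries.

def compute_kv(tokens):
    """Hypothetical stand-in for a layer's key/value projection."""
    return [(hash(t) % 97, hash(t) % 89) for t in tokens]

class PrefixSharedCache:
    def __init__(self, prefix_tokens):
        self.prefix_kv = compute_kv(prefix_tokens)  # computed once, shared
        self.suffix_kv = {}                         # per-sequence suffixes

    def append(self, seq_id, token):
        """Extend one sequence's private suffix by a decoded token."""
        self.suffix_kv.setdefault(seq_id, []).extend(compute_kv([token]))

    def kv_for(self, seq_id):
        """Full cache view for attention: shared prefix + own suffix."""
        return self.prefix_kv + self.suffix_kv.get(seq_id, [])

cache = PrefixSharedCache(["system", "prompt", "tokens"])
cache.append("req_a", "hello")
cache.append("req_b", "world")
```

In a real kernel the win comes from reading the shared-prefix KV block from memory once per decode step for the whole batch, amortizing the dominant memory traffic across all sequences that share it.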