
Johns Hopkins computer scientists will present their research at the upcoming 2025 International Conference on Learning Representations, to be held April 24–28 in Singapore.
The annual event brings together professionals from across all aspects of deep learning, including artificial intelligence, statistics and data science, as well as application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
Johns Hopkins researchers will present the following papers:
- “Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence” by Dharmesh Tailor, Alvaro Correia, Eric Nalisnick, and Christos Louizos
- “Autoregressive Pretraining with Mamba in Vision” by Sucheng Ren, Xianhang Li, Haoqin Tu, Feng Wang, Fangxun Shu, Lei Zhang, Jieru Mei, Linjie Yang, Peng Wang, Heng Wang, Alan Yuille, and Cihang Xie
- “COMBO: Compositional World Models for Embodied Multi-Agent Cooperation” by Hongxin Zhang, Zeyuan Wang, Qiushi Lyu, Zheyuan Zhang, Sunli Chen, Tianmin Shu, Behzad Dariush, Kwonjoon Lee, Yilun Du, and Chuang Gan
- “Compositional 4D Dynamic Scenes Understanding with Physics Priors for Video Question Answering” by Xingrui Wang, Wufei Ma, Angtian Wang, Shuo Chen, Adam Kortylewski, and Alan Yuille
- “Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements” by Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, and Benjamin Van Durme
- “ELBOing Stein: Variational Bayes with Stein Mixture Inference” by Ola Rønning, Eric Nalisnick, Christophe Ley, Padhraic Smyth, and Thomas Hamelryck
- “Generative World Explorer” by Taiming Lu, Tianmin Shu, Alan Yuille, Daniel Khashabi, and Jieneng Chen
- “Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models” by Orion Weller, Benjamin Van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, and Jack Hessel
- “Sharpness Aware Minimization: General Analysis and Improved Rates” by Dimitris Oikonomou and Nicolas Loizou
- “Stochastic Polyak Step-Sizes and Momentum: Convergence Guarantees and Practical Performance” by Dimitris Oikonomou and Nicolas Loizou
- “Syntactic and semantic control of large language models via sequential Monte Carlo” by João Loula, Benjamin LeBrun, Li Du, Ben Lipkin, Clemente Pasti, Gabriel Grand, Tianyu Liu, Yahya Emara, Marjorie Freedman, Jason Eisner, Ryan Cotterell, Vikash Mansinghka, Alexander K. Lew, Tim Vieira, and Timothy J. O’Donnell
- “X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale” by Haoran Xu, Kenton Murray, Philipp Koehn, Hieu Hoang, Akiko Eriguchi, and Huda Khayrallah