UMD Machine Learning Experts Presenting Multiple Papers and Workshops at ICML 2022

University of Maryland researchers focused on machine learning are well represented this week at the 39th International Conference on Machine Learning (ICML 2022), being held from July 17–23 in Baltimore.

The UMD researchers—a mix of faculty, postdocs and students—are presenting 18 papers and are featured in 18 workshops. Their research covers a wide range of topics such as measuring the robustness of neural networks, improving the accuracy of classifying images in uncommon settings, and leveraging human assistance to help AI agents cope with unfamiliar situations.

Many of the participating UMD faculty—Tom Goldstein, Aravind Srinivasan, Furong Huang, John Dickerson, Soheil Feizi, Pratap Tokekar and Dinesh Manocha—are part of the University of Maryland Center for Machine Learning.

These same faculty—as well as Hal Daumé III and Rama Chellappa—all have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS), which provides substantial administrative and technical support for the machine learning center.

“We’re proud to see our machine learning experts, particularly our talented graduate students, represented so well at a major conference,” says Mihai Pop, professor of computer science and the director of UMIACS. “The work they are doing is important—impacting transportation, the financial sector, privacy and security, and much more—and will continue to evolve rapidly as new technologies and new scientific knowledge come into play.”

UMD-affiliated papers being presented at ICML 2022:

“Measuring Representational Robustness of Neural Networks Through Shared Invariances,” by Vedant Nanda, Till Speicher, Camila Kolling, John Dickerson, Krishna Gummadi and Adrian Weller.

“Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments,” by Ryan Sullivan, Jordan Terry, Benjamin Black and John Dickerson.

“Certified Neural Network Watermarks with Randomized Smoothing,” by Arpit Bansal, Ping-yeh Chiang, Michael Curry, Rajiv Jain, Curtis Wigington, Varun Manjunatha, John Dickerson and Tom Goldstein.

“N-Penetrate: Active Learning of Neural Collision Handler for Complex 3D Mesh Deformations,” by Qingyang Tan, Zherong Pan, Breannan Smith, Takaaki Shiratori and Dinesh Manocha.

“On the Hidden Biases of Policy Mirror Ascent in Continuous Action Spaces,” by Amrit Singh Bedi, Souradip Chakraborty, Anjaly Parayil, Brian M Sadler, Pratap Tokekar and Alec Koppel.

“FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning,” by Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, Ketan Rajawat, Mehdi Bennis and Vaneet Aggarwal.

“Scaling-Up Diverse Orthogonal Convolutional Networks by A Paraunitary Framework,” by Jiahao Su, Wonmin Byeon and Furong Huang.

“FOCUS: Familiar Objects in Common and Uncommon Settings,” by Priyatham Kattakinda and Soheil Feizi.

“Robust Counterfactual Explanations for Tree-Based Ensembles,” by Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli and Daniele Magazzeni.

“Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification,” by Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum and Tom Goldstein.

“EAT-C: Environment-Adversarial sub-Task Curriculum for Efficient Reinforcement Learning,” by Shuang Ao, Tianyi Zhou, Jing Jiang, Guodong Long, Xuan Song and Chengqi Zhang.

“Identity-Disentangled Adversarial Augmentation for Self-Supervised Learning,” by Kaiwen Yang, Tianyi Zhou, Xinmei Tian and Dacheng Tao.

“Independent Policy Gradient for Large-Scale Markov Potential Games: Sharper Rates, Function Approximation, and Game-Agnostic Convergence,” by Dongsheng Ding, Chen-Yu Wei, Kaiqing Zhang and Mihailo R Jovanović.

“Do Differentiable Simulators Give Better Policy Gradients?” by HJ Suh, Max Simchowitz, Kaiqing Zhang and Russ Tedrake.

“On Improving Model-Free Algorithms for Decentralized Multi-Agent Reinforcement Learning,” by Weichao Mao, Lin Yang, Kaiqing Zhang and Tamer Basar.

“Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations,” by Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum and Tom Goldstein.

“Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation,” by Wenxiao Wang, Alexander Levine and Soheil Feizi. (Note: This is also being presented at the Workshop on Formal Verification of Machine Learning.)

“A Framework for Requesting Rich and Contextually Useful Information from Humans,” by Khanh Nguyen, Yonatan Bisk and Hal Daumé III. (Note: This is also being presented at the Workshop on Human-Machine Teaming and Collaboration.)

UMD-affiliated workshop papers being presented at ICML 2022:

“High-Dimensional Bayesian Optimization with Invariance,” by Souradip Chakraborty, Ekansh Verma and Ryan-Rhys Griffiths.

“Centralized vs Individual Models for Decision Making in Interconnected Infrastructure,” by Stephanie Allen, John Dickerson and Steven Gabriel.

“Everyone Matters: Customizing the Decision Boundary for Adversarial Robustness,” by Yuancheng Xu, Yanchao Sun and Furong Huang.

“Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy,” by Xiyao Wang, Wichayaporn Wongkamjan and Furong Huang.

“Networked Restless Bandits with Positive Externalities,” by Christine Herlihy, Pranav Goel and John Dickerson.

“Does Continual Learning Equally Forget All Parameters?” by Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang and Chengqi Zhang.

“Toward Efficient Robust Training against Union of Lp Threat Models,” by Gaurang Sriramanan, Maharshi Gor and Soheil Feizi.

“Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning,” by Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum and Tom Goldstein.

“What Is a Good Metric to Study Generalization of Minimax Learners?” by Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang and Kaiqing Zhang.

“Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks,” by Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang and Chengqi Zhang.

“Federated Learning from Pre-Trained Models: A Contrastive Learning Approach,” by Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou and Jing Jiang.

“Planning to Fairly Allocate: Probabilistic Fairness in the Restless Bandit Setting,” by Christine Herlihy, Aviva Prins, Aravind Srinivasan and John Dickerson.

“Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication,” by Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh and Furong Huang.

“Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning,” by Yongyuan Liang, Yanchao Sun, Ruijie Zheng and Furong Huang.

“Towards Better Understanding of Self-Supervised Representations,” by Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi and Soheil Feizi.

“How Much Data Is Augmentation Worth?” by Jonas Geiping, Gowthami Somepalli, Ravid Shwartz-Ziv, Andrew Wilson, Tom Goldstein and Micah Goldblum.

This article was published by the University of Maryland Institute for Advanced Computer Studies.
