Program

This page is archived from the 2023 event. Please see the home page for up-to-date information.

The bridge's program spans a range of activities: traditional tutorials and software labs on the educational side, invited vision talks and contributed talks based on submitted position papers, and an interactive panel with breakout discussions.

Detailed information on the invited speakers and panelists can be found further down on this page. The program at a glance is as follows:

All times are in the Eastern Time Zone (EST, GMT-5).

Day 1 (February 7th)

Day 2 (February 8th)

Vision Speakers

Yejin Choi - University of Washington, Allen Institute for AI
Yejin Choi is the Brett Helsel professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2 overseeing the project Mosaic. Her research investigates a wide variety of problems across NLP and AI including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She is a MacArthur Fellow and a co-recipient of the NAACL Best Paper Award in 2022, the ICML Outstanding Paper Award in 2022, the ACL Test of Time award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, the NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI’s 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her Ph.D. in Computer Science at Cornell University and BS in Computer Science and Engineering at Seoul National University in Korea.

Christopher Kanan - University of Rochester
Christopher Kanan is an Associate Professor of Computer Science at the University of Rochester. His research focuses on deep learning, especially continual machine learning, where he takes inspiration from cognitive science to make artificial neural networks capable of learning over time for large-scale vision and multi-modal perception tasks. With his students, he created widely known continual learning algorithms, including Streaming Linear Discriminant Analysis, REMIND, and FearNet. Other recent projects cover self-supervised learning, open-world learning, and creating bias-robust neural network architectures. Previously, he led AI R&D at the start-up Paige, work that resulted in the first FDA-approved computer vision system for helping pathologists diagnose cancer in whole-slide histopathology images. Kanan received a PhD in computer science from the University of California, San Diego. Prior to joining the University of Rochester, he worked as a researcher at NASA JPL and then as a professor of Imaging Science at the Rochester Institute of Technology. He is an NSF CAREER award recipient.

Tobias Gerstenberg - Stanford University
Tobias Gerstenberg is an Assistant Professor of Psychology at Stanford University. He leads the Causality in Cognition Lab (CiCL). The CiCL studies the role of causality in people’s understanding of the world, and of each other. Professor Gerstenberg’s research is highly interdisciplinary. It combines ideas from philosophy, linguistics, computer science, and the legal sciences to better understand higher-level cognitive phenomena such as causal inferences and moral judgments. The CiCL’s research uses a variety of methods that include computational modeling, online experiments, eye-tracking experiments, as well as developmental studies with children. Professor Gerstenberg’s work has appeared in top journals including Psychological Review, Journal of Experimental Psychology: General, Psychological Science, Cognitive Psychology, Cognition, and Cognitive Science.

Vineeth N Balasubramanian - IIT Hyderabad, Visiting Faculty Fellow Carnegie Mellon University
Vineeth N Balasubramanian is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Hyderabad (IIT-H), India, and is currently a Fulbright-Nehru Visiting Faculty Fellow at Carnegie Mellon University. He was also the Founding Head of the Department of Artificial Intelligence at IIT-H. His research interests include deep learning, machine learning, computer vision and explainable AI. His research has been published at various top-tier venues such as ICML, CVPR, NeurIPS, ICCV, KDD, AAAI, and IEEE TPAMI, with Best Paper Awards at recent venues such as CODS-COMAD 2022, CVPR 2021 Workshop on Causality in Vision, etc. He was the General Chair for ACML 2022, and regularly serves as a Senior PC/Area Chair for conferences such as CVPR, ICCV, AAAI, IJCAI, ECCV with recent awards including Outstanding Reviewer at ICLR 2021, CVPR 2019, ECCV 2020, etc. He is also a recipient of the Teaching Excellence Award at IIT-H (2017 and 2021), Fulbright-Nehru Professional and Academic Excellence Fellowship (2022-23), Google Research Scholar Award (2021), NASSCOM AI Gamechanger Award (2022), Google exploreCSR award (2022), among others.

Software Labs

Continual Learning with Avalanche - Antonio Carta

Avalanche is an end-to-end continual learning library based on PyTorch, born within ContinualAI with the unique goal of providing a shared and collaborative open-source (MIT-licensed) codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms. Avalanche has over 1,000 stars on GitHub. The recent beta version of Avalanche was presented at the CVPR 2021 workshop on continual learning in computer vision and is rapidly picking up citations in scientific papers (50+ citations on Google Scholar within one year of publication).
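To give a flavor of the workflow Avalanche supports, the minimal sketch below trains a model naively (no forgetting mitigation) over a sequence of experiences from a split benchmark. It is only an illustrative sketch, assuming a circa-0.3 Avalanche release; import paths such as `avalanche.training.supervised` may differ between versions.

```python
# Minimal Avalanche sketch (assumes Avalanche ~0.3; import paths may vary by version).
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST   # MNIST split into sequential tasks
from avalanche.models import SimpleMLP                # small baseline classifier
from avalanche.training.supervised import Naive       # plain fine-tuning strategy

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    eval_mb_size=32,
)

# Train on each experience in sequence and evaluate on the whole test stream,
# which is where forgetting of earlier experiences becomes visible.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```

Swapping `Naive` for one of Avalanche's other strategies (e.g., a replay-based one) changes the forgetting-mitigation method while keeping the same benchmark and evaluation loop.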


Antonio Carta - University of Pisa, ContinualAI
Antonio Carta is an Assistant Professor at the University of Pisa and a member of the Pervasive AI Lab. His research focuses on continual learning in rehearsal-free and distributed settings. He is also the lead maintainer of Avalanche, an open-source continual learning library developed by ContinualAI.


Causality with DoWhy - Peter Götz & Patrick Blöbaum

DoWhy is a Python library that aims to spark causal thinking and analysis, as part of the larger PyWhy ecosystem. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, frontdoor, instrumental variable, and other identification methods, and estimation of the conditional average treatment effect (CATE) through an integration with the EconML library. DoWhy has over 5,000 stars on GitHub and more than 700 forks.
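As an illustration of the four-step interface described above, here is a minimal sketch of DoWhy's documented model → identify → estimate → refute workflow on a synthetic dataset; the specific method names used here (e.g., `backdoor.propensity_score_matching`, `random_common_cause`) follow the DoWhy documentation but may vary across versions.

```python
# Sketch of DoWhy's four-step causal workflow on synthetic data
# (method names follow the DoWhy docs; details may differ across versions).
import dowhy.datasets
from dowhy import CausalModel

data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_instruments=1,
    num_samples=5000, treatment_is_binary=True,
)

# 1. Model: encode the causal assumptions as a graph.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# 2. Identify: derive an estimand (here, backdoor adjustment) from the graph.
identified_estimand = model.identify_effect()

# 3. Estimate: compute the average treatment effect with a chosen estimator.
estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.propensity_score_matching",
)
print("Estimated effect:", estimate.value)

# 4. Refute: stress-test the estimate, e.g. by adding a random common cause;
# a robust estimate should barely change under this perturbation.
refutation = model.refute_estimate(
    identified_estimand, estimate,
    method_name="random_common_cause",
)
print(refutation)
```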

Peter Götz - Amazon Web Services
Peter Götz is a Senior Software Development Engineer at Amazon Web Services, where he currently works on problems involving causal machine learning. Other work within Amazon has included building and launching systems to expand Amazon’s international business. Before joining AWS, he worked at IBM where he helped to build IBM’s cloud offering. He holds a Diploma degree in Physics from Stuttgart University, Germany.


Unfortunately, due to unforeseen circumstances shortly before the conference, Robert Osazuwa Ness was unable to hold the software lab. Patrick Blöbaum will handle the second part of the software lab instead.

Patrick Blöbaum is a Senior Applied Scientist at AWS working on problems in causal inference and a core contributor to the open-source library DoWhy. Prior to working at AWS, he received his PhD in the area of causality, focusing on graphical causal models. His research interests include topics such as root cause analysis and causal discovery.

Panelists

Dhireesha Kudithipudi - University of Texas at San Antonio
Kudithipudi is the founding director of the MATRIX AI Consortium, the Robert F. McDermott Endowed Chair in Engineering, and a Professor in ECE/CS at UTSA. Her research interests are in neuromorphic computing, low-power machine intelligence, brain-inspired accelerators, and diversifying the AI field. Her team has developed neocortex-inspired AI accelerators, brain-inspired lifelong learning models, the first lifelong learning accelerator with spiking neurons, low-power ML models, and tapered numerical precision architectures. She is the 2018 recipient of the Clare Boothe Luce Scholarship in STEM for women in higher education, the 2018 Technology Woman of the Year in Rochester, a 2022 inductee of the Academy of Distinguished Researchers, and a recipient of the 2022 San Antonio Lights award.

Christopher Kanan - University of Rochester
Christopher Kanan is an Associate Professor of Computer Science at the University of Rochester. His research focuses on deep learning, especially continual machine learning, where he takes inspiration from cognitive science to make artificial neural networks capable of learning over time for large-scale vision and multi-modal perception tasks. With his students, he created widely known continual learning algorithms, including Streaming Linear Discriminant Analysis, REMIND, and FearNet. Other recent projects cover self-supervised learning, open-world learning, and creating bias-robust neural network architectures. Previously, he led AI R&D at the start-up Paige, work that resulted in the first FDA-approved computer vision system for helping pathologists diagnose cancer in whole-slide histopathology images. Kanan received a PhD in computer science from the University of California, San Diego. Prior to joining the University of Rochester, he worked as a researcher at NASA JPL and then as a professor of Imaging Science at the Rochester Institute of Technology. He is an NSF CAREER award recipient.

Vineeth N Balasubramanian - IIT Hyderabad, Visiting Faculty Fellow Carnegie Mellon University
Vineeth N Balasubramanian is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Hyderabad (IIT-H), India, and is currently a Fulbright-Nehru Visiting Faculty Fellow at Carnegie Mellon University. He was also the Founding Head of the Department of Artificial Intelligence at IIT-H. His research interests include deep learning, machine learning, computer vision and explainable AI. His research has been published at various top-tier venues such as ICML, CVPR, NeurIPS, ICCV, KDD, AAAI, and IEEE TPAMI, with Best Paper Awards at recent venues such as CODS-COMAD 2022, CVPR 2021 Workshop on Causality in Vision, etc. He was the General Chair for ACML 2022, and regularly serves as a Senior PC/Area Chair for conferences such as CVPR, ICCV, AAAI, IJCAI, ECCV with recent awards including Outstanding Reviewer at ICLR 2021, CVPR 2019, ECCV 2020, etc. He is also a recipient of the Teaching Excellence Award at IIT-H (2017 and 2021), Fulbright-Nehru Professional and Academic Excellence Fellowship (2022-23), Google Research Scholar Award (2021), NASSCOM AI Gamechanger Award (2022), Google exploreCSR award (2022), among others.

Moritz Grosse-Wentrup - University of Vienna
Moritz Grosse-Wentrup is a full professor and head of the Research Group Neuroinformatics in the Faculty of Computer Science at the University of Vienna, Austria. He works on interpretable machine learning (IML) techniques from a causal perspective and uses IML methods to explain how artificial and biological intelligent systems create (disorders of) cognition and generate behavior.



Tutorials

Tutorial on Continual Learning

Keiland Cooper - University of California, Irvine; ContinualAI
Keiland Cooper is a National Science Foundation Fellow and Ph.D. candidate in the Department of Neurobiology and Behavior at the University of California, Irvine. He is also a co-founder of ContinualAI, a research non-profit dedicated to building intelligent machines that learn like we do. His work aims to understand the brain systems that enable such lifelong, continual learning and memory.


Martin Mundt - Technical University of Darmstadt, hessian.AI, ContinualAI
Martin Mundt (he/him) is a junior research group leader at the Technical University of Darmstadt (TU Darmstadt) and the Hessian Center for Artificial Intelligence (hessian.AI), where he leads the Open World Lifelong Learning (OWLL) lab. He is also a member of the board of directors of the non-profit organization ContinualAI. Previously, he obtained a PhD in computer science and an M.Sc. in physics from Goethe University. The main vision behind OWLL is to develop systems that are not only able to learn continuously, but also successfully recognize new situations and actively choose data to train on, while autonomously adapting in a robust and interpretable way.

Tutorial on Causality

Adèle Helena Ribeiro - University of Marburg
Adèle Ribeiro is a postdoctoral researcher in Dominik Heider's research group at Philipps-Universität Marburg, Germany. Previously, she worked with Elias Bareinboim as a postdoc in the Causal AI Lab at Columbia University, USA. Her research lies at the intersection of Computer Science, Statistics, and Artificial Intelligence in Healthcare. Her efforts are focused on advancing the theory of causal inference and learning for discovering, generalizing, and personalizing cause-effect relationships from multiple observational and experimental data collections. She has a particular interest in applications in the Health Sciences and has directed her research towards addressing challenges that emerge in such domains to help bridge the gap between theory and practical applications.

Devendra Singh Dhami - Technical University of Darmstadt, hessian.AI
Devendra Singh Dhami leads the DEPTH research group on Causality And neUro-Symbolic artificial intElligence (CAUSE) at the Hessian Center for Artificial Intelligence (hessian.AI) and TU Darmstadt. He is also a Postdoctoral Researcher in the Artificial Intelligence and Machine Learning Lab of Prof. Dr. Kristian Kersting at TU Darmstadt. He obtained his Ph.D. in Computer Science from the University of Texas at Dallas (advisor: Professor Sriraam Natarajan, StARLinG Lab), with a focus on learning effective models from noisy, heterogeneous, and multi-relational healthcare data.