AI2050

AI2050 will support exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI.

Overview

“It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?”

This is AI2050’s motivating question. The initiative aims to answer this question by making awards to support work conducted by researchers from across the globe and at various stages in their careers.

AI2050's work

Awards

AI2050 will issue awards to support work conducted by researchers. These awards will primarily aim to enable and encourage bold and ambitious work, often multi-disciplinary, that is typically hard to fund but socially beneficial. Awards will be given for exceptional work tackling one or more items from a working list of hard problems.

Artifacts

Work supported by AI2050 will be open-source and published, so that society can benefit from this important work. This includes research from the award recipient network, from our collaborations with leading groups, and from the initiative itself.

Community

AI2050 Fellows will come from around the globe, and include qualified researchers and practitioners in Asia, Africa, Europe, and Latin America. Through this initiative, we plan to support talented researchers at various stages of their careers, to help encourage the next generation of researchers to focus on the hard problems in AI. AI2050 Fellows, our expert group, the broader AI community, and other stakeholders will regularly convene to discuss and advance the work of the initiative.

AI2050 is co-chaired by Schmidt Futures co-founder Eric Schmidt and James Manyika, Senior Advisor at Schmidt Futures.

How to get involved

We will continue to share information about this initiative as we develop it. If you would like to learn more, please reach out to us at [email protected].

Working list of hard problems

Drawing on previous work in AI, and through numerous conversations with other experts, the initiative has developed an initial working list of the hard problems for AI2050 to take on. This list is aimed at realizing the opportunity for society from AI and addressing the risks and challenges that could result from it. This list will be updated often as society’s use of AI continues to evolve.

Our Community

AI2050 Early Career Fellows

Schmidt Futures is excited to announce the following scholars as inaugural AI2050 Early Career Fellows: 

Aditi Raghunathan

Dr. Aditi Raghunathan is an assistant professor in the Computer Science Department at Carnegie Mellon University. Through this fellowship, Aditi will develop techniques for building “robust” machine learning systems that are guaranteed to behave as expected when deployed into real-world applications. This will advance Hard Problem 2 (Assurance) by helping ensure that AI applications behave as expected.

Adji Bousso Dieng

Dr. Adji Bousso Dieng is an assistant professor of Computer Science at Princeton University, where she leads Vertaix on research at the intersection of AI and the natural sciences. Through this fellowship, she will be using AI to design novel materials for healthcare, carbon capture, and other applications requiring the ability to selectively capture and release small molecules. This will advance Hard Problem 4 (Opportunities) by using AI to realize scientific applications not possible today and apply them to some of our most pressing problems (climate, drug discovery).

Baobao Zhang

Dr. Baobao Zhang is an assistant professor of Political Science at Maxwell School of Citizenship and Public Affairs at Syracuse University. Through this fellowship, Baobao will create a “public assembly” of 40 participants, randomly selected from the US population, to learn about high-risk AI systems and participate in extended deliberations about how these systems should be governed. This will advance Hard Problem 9 (Governance) by collecting important data on the use of public assemblies for AI governance—a currently novel use for this relatively new approach to public engagement in policy making.

Bryan Wilder

Dr. Bryan Wilder is an assistant professor in the Machine Learning Department at Carnegie Mellon University. Through this fellowship, Bryan will develop machine learning models that can combine real-time data from multiple data sources (such as large-scale public health surveillance information combined with data collected at discrete healthcare encounters) to track health disparities and ameliorate biases in risk prediction models used in healthcare. This will advance Hard Problems 2 (Assurance) and 4 (Opportunities) by advancing the ability of health models to address bias that is prevalent in models today, and of AI systems to better work with heterogeneous datasets.

Connor Coley

Dr. Connor W. Coley is an assistant professor at MIT in the Departments of Chemical Engineering and Electrical Engineering and Computer Science. Through this fellowship, Connor will develop AI models that, from a chemical structure alone, can predict the properties of novel chemicals and the procedures to synthesize them. This will advance Hard Problem 4 (Opportunities) by accelerating the creation of new medicines and other useful products.

Dylan Hadfield-Menell

Dr. Dylan Hadfield-Menell is an assistant professor on the faculty of Artificial Intelligence and Decision-Making in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Through this fellowship, he will build AI systems that can manage uncertainty about rewards and adapt the support of the reward distribution in coordination with the system’s ability to influence the state of the world. This will advance Hard Problem 3 (Alignment) by seeking to make fundamental improvements to broadly used AI techniques so that they are more readily aligned with ethical requirements.

Elizaveta Semenova

Dr. Elizaveta Semenova is a postdoctoral research associate at the University of Oxford in the Department of Computer Science. Through this fellowship, Elizaveta will develop a new generation of disease and environment surveillance systems that can quickly inform policymakers and at-risk communities with high spatial and temporal granularity. This will advance Hard Problem 4 (Opportunities) with fundamental improvements in the use of AI in an area where progress has so far proved difficult.

Gissella Bejarano

Dr. Gissella Bejarano is a postdoctoral researcher at Baylor University. Through this fellowship, she will be developing new approaches to allow computers to understand iconicity (the property whereby certain signs have forms that correspond to their meaning) in Peruvian Sign Language, American Sign Language, and non-verbal communication. This will advance Hard Problem 1 (Capabilities) by enabling AI to better understand sign language, and Hard Problem 6 (Access) by increasing access to and participation in AI for people who rely on sign language.

Huan Zhang

Dr. Huan Zhang is a postdoctoral researcher in the Department of Computer Science at Carnegie Mellon University. Through this fellowship, he will use mathematical proofs to improve and guarantee the trustworthiness of AI, making AI safer, more robust, more predictable, and more reliable. This will advance Hard Problem 2 (Assurance) by developing tools that can mathematically prove that an AI system will not act outside its predicted safety zone.

Jennifer Ngadiuba

Dr. Jennifer Ngadiuba is an Associate Scientist at the Fermi National Accelerator Laboratory. Through this fellowship, Jennifer is applying AI to particle physics to build more intelligent detector systems and more efficient data-reduction and data-analysis strategies for extracting the most fundamental physics information from the data collected at the Large Hadron Collider. This will advance Hard Problem 4 (Opportunities) by equipping the experiments with innovative strategies that can expand their physics reach and, in turn, lead to new scientific discoveries.

John Zerilli

Dr. John Zerilli is a Chancellor’s Fellow (assistant professor) in AI, Data, and the Rule of Law at the University of Edinburgh. Through this fellowship, John will make headway on the many ramifications of data for traditional concepts of political and legal theory (especially legitimacy, democracy, rights, and procedural justice). This will advance Hard Problem 10 (What it means to be human) by gathering data on the evolving role of AI in human society.

Karina Vold

Dr. Karina Vold is an assistant professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. Through this fellowship, Karina will explore the epistemic implications of humans gaining access to insights that are discovered by advanced AI systems that achieve superhuman capabilities, such as beating world champions at Go and solving 50-year-old grand challenges in biology. This will advance Hard Problem 10 (What it means to be human) by helping understand what it means to be a human expert when AI systems perform better.

Qian Yang

Dr. Qian Yang is an assistant professor in Computing and Information Science at Cornell University. Through this fellowship, Qian will work to understand and improve the impact of generative models (such as GPT-3 and DALL-E) on people’s cognitive processes in creative work, so that such models enhance human cognition rather than de-skill workers. This will advance Hard Problems 5 (Economics) and 6 (Access) by exploring the impact of advanced AI on knowledge workers.

Sam Kriegman

Dr. Sam Kriegman is an assistant professor of Computer Science, Chemical and Biological Engineering, and Mechanical Engineering at Northwestern University. Through this fellowship, Sam will use machine learning to invent new designs for autonomous robots in a computationally efficient manner. This will advance Hard Problems 1 (Capabilities) and 4 (Opportunities) by addressing a current technological limitation in robot design and realizing a new application where AI can make an important difference.

Sina Fazelpour

Dr. Sina Fazelpour is an assistant professor of Philosophy and Computer Science at Northeastern University. Through this fellowship, Sina will develop a framework for integrating our functional and ethical values into human-AI teams, in ways that leverage the diverse and complementary capabilities of AI tools and human experts. This will advance Hard Problem 2 (Assurance), by exploring issues of reliability and fairness in human-AI hybrid teams and Hard Problem 10 (What it means to be human), as such teams will be a primary unit of organizational decision-making.

AI2050 Senior Fellows

AI2050 Senior Fellows are leaders in their respective fields who collectively showcase the range of research that will be critical toward answering the AI2050 motivating question.

Selected expert group

Explore the latest on AI2050

AI2050 News
