[Link] Eric Schmidt’s new AI2050 Fund

This is a linkpost for https://www.schmidtfutures.com/schmidt-futures-launches-ai2050-to-protect-our-human-future-in-the-age-of-artificial-intelligence/

I am posting this here as it may be of interest to some members.

Schmidt Futures Launches AI2050 to Protect Our Human Future in the Age of Artificial Intelligence

$125 million, five-year commitment by Eric and Wendy Schmidt will support leading researchers in artificial intelligence making a positive impact

New York — Today, Schmidt Futures announced the launch of “AI2050,” an initiative that will support exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI. Eric and Wendy Schmidt are committed to funding $125 million over the next 5 years, and AI2050 will make awards to support work conducted by researchers from across the globe and at various stages in their careers. These awards will primarily aim to enable and encourage these AI2050 Fellows to undertake bold and ambitious work, often multi-disciplinary, that is typically hard to fund but critical to get right for society to benefit from AI.

I was particularly interested to see the following items listed in their Hard Problems Working List:

What follows is a working list of hard problems we must solve or get right for AI to benefit society, framed in response to the following motivating question:

“It’s 2050, AI has turned out to be hugely beneficial to society and generally acknowledged as such. What happened? What are the most important and beneficial opportunities we realized, the hard problems we solved and the most difficult issues we got right to ensure this outcome, and that we should be working on now?”

...

2. Solved AI’s continually evolving safety and security, robustness, performance, output challenges and other shortcomings that may cause harm or erode public trust of AI systems, especially in safety-critical applications and uses where societal stakes and risk are high. Examples include bias and fairness, toxicity of outputs, misapplications, goal misspecification, intelligibility, and explainability.

3. Solved challenges of safety and control, human alignment and compatibility with increasingly powerful and capable AI and eventually AGI. Examples include race conditions and catastrophic risks, provably beneficial systems, human-machine cooperation, challenges of normativity.

...

5. Solved the economic challenges and opportunities resulting from AI and its related technologies. Examples include new modes of abundance, scarcity and resource use, economic inclusion, future of work, network effects and competition, and with a particular eye towards countries, organizations, communities, and people who are not leading the development of AI.

...

8. Solved AI-related risks, use and misuse, competition, cooperation, and coordination between countries, companies and other key actors, given the economic, geopolitical and national security stakes. Examples include cyber-security of AI systems, governance of autonomous weapons, avoiding AI development/deployment race conditions at the expense of safety, mechanisms for safety and control, protocols and verifiable AI treaties, and stably governing the emergence of AGI.