Reducing internet usage and limiting the data available to AI companies might seem like a feasible approach to regulating AI development. However, implementing such measures would likely face several obstacles. For example:
AI companies purchase internet access like any other user, which makes it difficult to target them for data reduction without affecting everyone else. One possible mechanism would be a regulatory framework limiting how AI companies collect, store, and use data; however, such restrictions could inadvertently affect other industries that rely on data processing and analytics.
A significant portion of the data used by AI companies comes from open resources such as Common Crawl and WebText2. These companies have typically already downloaded local copies of these datasets, so limiting internet usage would not retroactively affect their access to them.
If a country passed a law limiting the network data available to AI companies, those companies would likely relocate to countries with more lenient regulations. Such a policy would be ineffective on a global scale while harming the domestic economy and innovation of the country that imposed it.
In summary, while reducing the amount of data AI companies have to work with might appear feasible, practical implementation faces significant hurdles. A more effective approach to regulating AI development could involve establishing international standards and ethical guidelines, fostering transparency in AI research, and promoting cross-sector collaboration among stakeholders. This would help ensure the responsible and beneficial growth of AI technologies without hindering innovation and progress.
Firstly, it’s essential to remember that you can’t control the situation; you can only control your reaction to it. By focusing on the elements you can influence and accepting the uncertainty of the future, it becomes easier to manage the anxiety that may arise from contemplating potentially catastrophic outcomes. This mindset allows AGI safety researchers to maintain a sense of purpose and motivation in their work, as they strive to make a positive difference in the world.
Another way to find joy in this field is by embracing the creative aspects of exploring AI safety concerns. There are many great examples of fiction based on AI safety problems. For example:
“Runaround” by Isaac Asimov (1942)
“The Lifecycle of Software Objects” by Ted Chiang (2010)
“Cat Pictures Please” by Naomi Kritzer (2015)
The Matrix (1999) - Directed by the Wachowskis
The Terminator (1984) - Directed by James Cameron
2001: A Space Odyssey (1968) - Directed by Stanley Kubrick
Blade Runner (1982) - Directed by Ridley Scott
among many others.
Engaging in creative storytelling not only provides a sense of enjoyment; it can also help spread awareness of AI safety issues and inspire others to take action.
In summary, finding joy in AGI safety research involves accepting what you can and cannot control, and embracing the creative aspects of exploring potential AI safety concerns. By focusing on making a positive impact and engaging in imaginative storytelling, individuals in this field can maintain a sense of fulfillment in their work, even when faced with the possibility of a seemingly doomed future.