The current debate was triggered by an open letter initiated on 22nd March 2023 by The Future of Life Institute, an independent non-profit organization (https://futureoflife.org). The letter sought support for their call:
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
Within a month, the letter collected over 30,000 signatures, with heavyweights such as Elon Musk, Steve Wozniak, Yuval Noah Harari, Yoshua Bengio, Andrew Yang, Chris Larsen, Jaan Tallinn, and Craig Peters, among other influential figures, lending their support to the call.
- Concerns raised and reasons cited by the Future of Life Institute:
- Prominent AI researchers have identified a range of dangers that may arise from the present and future generations of advanced AI systems if they are left unchecked.
- AI systems are already capable of creating misinformation and authentic-looking fakes that degrade the shared factual foundations of society and inflame political tensions
- AI systems already show a tendency towards amplifying entrenched discrimination and biases, further marginalizing disadvantaged communities and diverse viewpoints
- The current frantic rate of development will worsen these problems significantly. As these types of systems become more sophisticated, they could destabilize labour markets and political institutions, and lead to the concentration of enormous power in the hands of a small number of unelected corporations
- Advanced AI systems could also threaten national security, e.g., by facilitating the inexpensive development of chemical, biological, and cyber weapons by non-state groups
- The systems could themselves pursue goals, either human- or self-assigned, in ways that place negligible value on human rights, human safety, or, in the most harrowing scenarios, human existence
- The Institute followed up with a Policy Brief (issued 12th April 2023, last revision 19th April 2023) providing policymakers with concrete recommendations for how governments can manage AI risks.
It believes implementing these recommendations, which largely reflect a broader consensus among AI policy experts, will establish a strong governance foundation for AI.
The policy recommendations:
- Mandate robust third-party auditing and certification
- Regulate access to computational power
- Establish capable AI agencies at the national level
- Establish liability for AI-caused harms
- Introduce measures to prevent and track AI model leaks
- Expand technical AI safety research funding
- Develop standards for identifying and managing AI-generated content and recommendations
- A petition from LAION for Securing our Digital Future:
Large-scale Artificial Intelligence Open Network (LAION), a non-profit organization based in Germany (https://openpetition.eu), launched a petition on 29th March 2023.
- It calls upon the global community, particularly the EU, USA, UK, Canada, and Australia for “support in its urgent mission to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open-source foundation models”.
- LAION’s Suggestion: This facility, analogous to the CERN (the European Organization for Nuclear Research) project in scale and impact, should house a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators (GPUs or ASICs), operated by experts from the machine learning and supercomputing research community and overseen by democratically elected institutions in the participating nations.
- Their Objective: This monumental initiative will secure our technological independence, empower global innovation, and ensure safety while safeguarding our democratic principles for generations to come.
By embracing this cooperative framework, we can simultaneously ensure progress and the responsible development of AI technology, safeguarding the well-being of our society and the integrity of democratic values.
- LAION’s opposing view to the call from Future of Life:
- The proposition of decelerating AI research as a means to ensure safety and progress presents a misguided approach that might be detrimental to both objectives
- It could create a breeding ground for obscure and potentially malicious corporate or state actors to make advancements in the dark while simultaneously curtailing the public research community’s ability to scrutinize the safety aspects of advanced AI systems thoroughly
- Rather than impeding the momentum of AI development and shifting its development into underground areas, a more judicious and efficacious approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment
- The commonality in their views and objectives:
Countries must act swiftly to secure the independence of academia and government institutions from the technological monopoly of large corporations such as Microsoft, OpenAI, and Google. Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few.
Both petitions have received their share of backlash.
- Some experts argue that regulating research based on possible future harm is a bad idea, and that regulating products might be the right way forward.
- It will remain immensely challenging for people not to fall for the hyped discourse at either extreme.