Key points:
• A seasoned lawyer, Steven A. Schwartz, used ChatGPT for legal research and ended up citing nonexistent court cases in a legal brief, illustrating the risks of misplaced trust in AI.
• High-profile tech leaders signed a declaration stating that “Mitigating the risk of extinction from AI should be a global priority,” reflecting growing concern about AI risks.
• Despite these risks, the tech industry continues to develop powerful AI systems, raising questions about their responsibility and the need for regulation.
Summary:
The article tells the story of lawyer Steven A. Schwartz, who used the AI program ChatGPT to assist in legal research for a case, only to find that the AI had fabricated the legal precedents cited in his brief. Despite his three decades of legal experience, Schwartz was unaware that AI could generate false content.
Meanwhile, many in the tech industry, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic, have signed a declaration emphasizing the existential threat posed by AI and calling for its mitigation to be treated as a global priority. They have also proposed regulatory interventions to manage the risks of increasingly powerful AI models.
Nevertheless, the author points out the paradox of continuing to build technology whose dangers its own creators acknowledge. The author suggests that tech industry leaders may be too captivated by the technical challenges to weigh the consequences of their work, or perhaps too beholden to the corporations they serve to halt the development of these powerful AI systems.