The Pope and US regulators warn about AI risks: Law Decoded

The head of the Catholic Church warned humanity of AI’s potential dangers and explained what needs to be done to control it.

Nowadays, everyone has an opinion on artificial intelligence (AI) and its potential risks. Even Pope Francis has weighed in, calling for an international treaty to regulate AI and ensure it is developed and used ethically. Otherwise, he warns, humanity risks sliding into a spiral of “technological dictatorship.” The threat of AI arises when developers’ “desire for profit or thirst for power” overrides the wish to exist freely and peacefully, he added.

Similar concerns were voiced by the Financial Stability Oversight Council (FSOC), which comprises top financial regulators and is chaired by United States Treasury Secretary Janet Yellen. In its annual report, the council emphasized that AI carries specific risks, such as cybersecurity and model risks, and suggested that companies and regulators deepen their expertise and capacity to monitor AI innovation and usage and identify emerging risks. According to the report, certain AI tools are highly technical and complex, making them difficult for institutions to explain or monitor effectively. Without that comprehensive understanding, the report warns, companies and regulators may overlook biased or inaccurate results.

Even judges in the United Kingdom are weighing the risks of using AI in their work. Four senior judges in the U.K. have issued judicial guidance on AI, covering its “responsible use” in courts and tribunals. The guidance points to potentially useful applications, mainly administrative tasks such as summarizing texts, writing presentations and composing emails. However, most of the guidance cautions judges against relying on false information produced by AI searches and summaries and urges them to be vigilant about anything false generated by AI in their name. The use of AI for legal research and analysis is particularly discouraged.

