This is how the Pope thinks about AI: Vatican issues guidelines
The Vatican has issued a kind of manual on the ethical use of AI, developed in partnership between Pope Francis and Santa Clara University’s Markkula Center for Applied Ethics. To this end, the two parties founded an organization called the Institute for Technology, Ethics, and Culture (ITEC), reports Gizmodo.
ITEC’s first project is that very handbook, entitled “Ethics in the Age of Disruptive Technologies: An Operational Roadmap”, which is intended to guide the tech industry through the many unanswered ethical questions surrounding AI, machine learning, encryption, tracking, and more.
According to Father Brendan McGuire, pastor of St. Simon Parish in Los Altos and advisor to ITEC, the initiative is the culmination of the church’s longstanding interest in the subject. “The Pope has always had a broad view of the world and of humanity, and he believes technology is a good thing. But as we develop these technologies, it’s time to start asking the deeper questions,” he says in an interview with Gizmodo.
According to him, Silicon Valley technology executives have been coming to him for years asking for help with ethical questions. While many advocates, academics, and observers focus their efforts on appeals to regulators, the ITEC handbook takes a different approach, according to the report.
Rather than waiting for regulation from governments, ITEC hopes to provide guidance to people in tech companies who are already grappling with the toughest questions surrounding AI.
“A consensus is emerging around things like accountability and transparency, with principles that align across organizations,” said Ann Skeet, senior director of leadership ethics at the Markkula Center and one of the authors of the handbook.
More generally, the book advocates infusing technology, and the companies that develop it, with values from the start, based on a set of principles, rather than fixing problems after the fact. One of the handbook’s most general principles for companies: ensure that their actions serve the common good of humanity and the environment.
According to the report, the handbook is organized around seven guidelines, including “respect for human dignity and human rights” and “promotion of transparency and explainability”. These seven guidelines are then broken down into 46 specific, actionable steps, each accompanied by definitions and examples.
The principle of human dignity addresses data protection issues, for example. The handbook calls for a commitment “not to collect more data than is necessary” and says that “collected data should be stored in a manner that optimizes privacy and confidentiality protections”.
Furthermore, companies should consider special protective measures for medical and financial data and focus on their responsibility towards users, not just on legal requirements.
As a practical aid, the handbook does not yet appear fully concrete and mature. But measures will be needed sooner or later: just months after OpenAI released ChatGPT, the company’s CEO, Sam Altman, had already met with US President Biden and testified before the US Congress on how AI should be regulated.
Father Brendan explains that the possible existential threats posed by AI are serious, but that AI’s near-term problems deserve just as much attention. “Big guard rails are absolutely necessary and countries and governments will implement them when the time comes,” he said. For him, at least, this book plays an important role in the meantime by guiding design and consumer adoption. The partners in the collaboration, he said, have tried to put companies in a position to meet the required standards well ahead of time.
Incidentally, in the interview Father Brendan was unable to say whether the Pope has already tried ChatGPT.