Progress Becomes Development When It Is Governed By Ethical Choices

by Davide Maniscalco

“If there is such a thing as ‘algorithm-ocracy,’ there must also be ‘algorithm-ethics.’”  Archbishop Vincenzo Paglia, President of the Pontifical Academy for Life, has no doubt:  “Technology must be at the service of humanity; if not, humanity faces ever-greater dangers.”  The Vatican is asking itself questions about the future of artificial intelligence.  We read that the European Commission will issue a general “White Paper” on the future of artificial intelligence next February 19, but in a more focused way the Catholic Church is insisting that society take responsibility for the basic “ethical questions” affecting technological progress.  Together with the cyber-giants Microsoft and IBM, the Church has taken up the challenge of uniting the already-active scientific, political and social communities in the task of building a new and fair “digital humanism” based on human values.

In a recent interview with Davide Maniscalco for OFCS.Report, Archbishop Paglia discussed the role of the Catholic Church in the landmark reality-shift that we are experiencing.  “Progress becomes development,” he says, “when progress is governed by ethical choices, and for this reason it is extremely important to clarify the meaning of ‘ethics.’”  Precision in arriving at that definition, he continues, leads to consistency in our use of the expressions “ethical dimension” and “anthropocentric approach.”  He emphasizes that emergent technologies must also be convergent and lead to a digital revolution that is the fruit of a “renAIssance,” a new humanism.  “We are preparing a ‘Call for Ethics’ that will result in an in-depth examination of the effects of new technologies, of the risks they present, of possible regulatory structures, and of academic responses.”  The presentation being prepared will address the challenges in the fields of ethics, government regulation and healthcare, and will be discussed, together with Microsoft and IBM, during the workshop entitled “RenAIssance: for a Humanistic Artificial Intelligence” to be held next February 28 at the Pontifical Academy for Life.

The Archbishop recently answered some questions.

How far can technology go and what is the limit beyond which evolution can no longer be considered progress?

Today’s situation is unprecedented.  We are living out the last phase of what can be called “the changing of an age.”  For example, in the last seventy years we have produced technology that can lead to total nuclear destruction, but once we realized the military possibilities of this technology, we eventually came up with agreements to correct the situation.  We are also dealing with developments in the economy, in particular as it is affected by unwise consumption of energy resources.  Yet another worldwide threat is the destruction of the climate and consequently, some fear, of humankind.  Society had been put on notice of the climate threat but had not responded.  Then, four years ago, government leaders met in Paris to create a defense.  Now, after Paris, our awareness has risen, even though our sensitivity is still quite weak.

Even more worrisome, today, with “emergent” and “convergent” technologies we can carry out isolated interventions on humanity itself, but we run the risk not only of destruction but also of “self-dehumanization,” so much so that there is even talk of “post-humanism” or “augmented man.”  Likewise, we see the possibility of creating a sort of new slavery based on a market economy where “big data” is the new “big oil.”  This situation calls on us to take a “leap forward” that is moral and political of course, but is in reality simply human.  Faced with the continuing development of technology, if we do not take decisive and adequate action, we risk a new implosion whose consequences we can only imagine.  There is at least a risk that inequalities resulting from economic and industrial activities could become more serious to the extent that those activities are controlled by algorithms rather than people.

What are the ethical concerns related to technological developments in medicine and science?

There are a number of them.  Beginning with the most obvious and simple: last year, at the Annual Meeting of the Pontifical Academy for Life, the Japanese scientist Hiroshi Ishiguro spoke of human cloning, and he argued that the current human condition will be the last one that is biological and organic.  I find this a chilling prospect.  It is true that technological development can offer significant possibilities in the care of the person, but as a man of supernatural faith I cannot fail to remind us all that technological progress, which does come from man, is possible only because man, who has a supernatural destiny, has received from God Himself the power, the ability and intelligence to be the author and master of technology.  Progress is the fruit of our labor, but what must we keep in mind?  We must be aware of our limitations.  In the Bible’s Book of Genesis, God told Adam and Eve that they could eat of everything that was in the Garden of Eden except the “forbidden fruit.”  Here was a Divine affirmation of our need to be aware of our limits.  If we are not, we become like the pagans’ Prometheus and think we are God’s equal.  It is one thing to know and care for life; it is another to know and create that life.  Not everything that can be done must be done.  We must face progressive technological development in a wise, intelligent way, knowing that technology is the servant of man, and not the other way around.

Thinkers have been giving us this admonition since the 1950s.  Heidegger himself, when he said that only a god can save us, was not speaking about our Christian God; he meant rather the god of reason, that only reason can save.  He meant: “Be careful, technology must serve man, not vice versa.”  As long as discoveries help us to know, to heal and to remain always “human,” we are on a path of development that I would call worthy.  It’s only when discoveries compromise and manipulate what is human that we must be on our guard.  From what I can see, the most important point is this:  technology is advancing so fast compared to our awareness of its ethical and anthropological implications that we risk falling behind, unable to pump the brakes on a car that is already on a dizzying downhill ride.  You have to be on board from the beginning; you can’t try to hop in when it might already be too late.  This is why the Pontifical Academy for Life takes on these issues.  It wants them to be studied in a wiser, more nuanced way.  There is one more point I want to emphasize.  When we speak of the person, we necessarily speak of the human family as well.  In a time when technology is global, when connections and commerce are global, we must remember that technology is to be at the service of all humanity.  Integral development embraces both individuals and the entire human family.  All progress, all development must be judged in this light.  If not, unjust discrimination weighs more heavily on everyone, and that leads to widespread slavery for the benefit of just a few.

The Holy See, in fact, wants to concentrate on the question of possible discrimination related to technological development.  Inequalities produced by economic development are serious, more so than can be countered by the comforting mantra that “a rising tide lifts all the boats.”  As it has turned out, some boats have indeed been lifted but others have almost sunk.  This divergence, which is one aspect of the crisis in contemporary society, can result in one group becoming almost slaves to the other.  Today, big data can make some boats always ride higher than others.  For example, with facial recognition there are no more rich and poor, only absolute tyrants and their powerless subjects.  The new gold, big data, can bring destruction, and it’s for this reason that I believe we need to redouble our ethical commitment.  Calling for things like wealth redistribution doesn’t work.  We need to grow our awareness, grow the rule of law, grow our involvement.  As far as the Catholic Church is concerned, we can no longer be content with giving advice from the outside.  We have to be on the inside to be able to understand the possibilities as well as the dangers of new technologies.  Here the problem is not only about the end use of various technologies.  We need to consider the various responsibilities involved in research, testing, production, distribution and personal use and, with urgency, the special uses of technological and recognition devices.

How much is this responsibility felt, and how worrisome is the risk of sacrificing the freedom of human self-determination?

I believe that the “invasion” of algorithms is in some ways unstoppable.  Some people even speak of an “algorithm-ocracy.”  Having said that, it is clear that if we want to save human dignity, democracy, the polis in the broadest sense, we must, to use a motto from the French Revolution, maintain the primacy of liberty, equality and fraternity.  We must be able to enrich the dignity of man and of the human family, and not let it be subjugated.  So, if there is an “algorithm-ocracy,” there must also be an “algo-ethics” to weave technology and human dignity together, producing a cloth with a warp of algorithms and a woof of ethics.  In this sense, liberty, equality and fraternity are three concepts that are both secular and Gospel-based, and can be understood by everyone, believers and non-believers alike.  As the Catholic Church, we should act as “sentinels” keeping watch over the formation of great masses of data, taking care that liberty is ensured, equality is fostered, and fraternity remains a goal.  Technology must advance, and not bury, each of these values.  This means that technology must be surrounded and permeated by all aspects of society: politics, economics, ethics, religion, labor, all institutions, every science.  Collegial interaction is essential.  Science and social realities must also be convergent.  A common path is absolutely necessary.

For my part, I am encouraged by the fact that, as an Academy, we embarked on our study of artificial intelligence not on our own initiative but because we were requested to do so by one of the largest multinationals.  The President of Microsoft, Brad Smith, came to us and said, “We’re engineers, and we’re ‘condemned’ to bring a new product to the market every week.  We know we can attain excellence, in medicine for example, and much of what we do is even amazing.  Really, though, we don’t want to just organize a conference with ‘the Vatican.’  We want to ask you to accompany us in what we do.”  In that request some might see ambiguity (perhaps just an endorsement?), but I know there is also such a thing as a “spiritual algorithm”: trust.  Artificial intelligence may be just mathematics, but trust is not mathematics.  Trust convinced me that, having total and unconditional freedom, we could accept his challenge, because the consequences of technological development are so important that we, as the Catholic Church, cannot stay on the sidelines.

The example of the nineteenth-century “Social Question” is appropriate.  At that time, as industrial development ended the peasant economic structure, the Catholic Church felt the responsibility of entering into the world of work with new Gospel wisdom, new thinking.  Now, in like manner, we can no longer stand on the outside just looking in.  If you are on the outside, you can’t understand; and if you finally do get in, you might be too late, and the paradigms you were expecting might have changed.  If you are not directly involved in a reality, understanding it is very difficult.  You’re in another world where you can speak, come up with rules, even use the jargon, but you will still be an outsider, not listened to, ineffectual and useless.  In my opinion, failing to get involved when we can is sinful.

What might have led Microsoft to request the involvement of the Academy?  An ethical dilemma, something that shocked them?

I believe that Microsoft management (and IBM’s as well) is different.  They have taken the initiative on moral questions and have been more directly concerned with humanistic issues.  They have done so because they are very aware that without clarity in these areas, their businesses are at risk.  Microsoft President Brad Smith gave me an example.  He said that on the question of facial recognition, which is a technology his competitors can develop as well, if they don’t all reach consensus on a regulatory structure, they will all fall into an abyss.  Conflicts arising out of cyber-technology are more dangerous and easier to foment than even those arising out of nuclear capabilities.  In addition, the need for legal and ethical rules increases in direct proportion to the growth in technological capabilities.  Both Microsoft and IBM have recognized the need to accept shared ethical, legal and educational frameworks.  Artificial intelligence questions are similar to climate questions, but in some ways might be more urgent because they affect individuals immediately rather than through the environment.  This is one reason why a stronger ethical as well as legal commitment is needed.  Without structure, individuals will have great difficulty managing the technical revolution that is taking place.  There is a felt need to work in unison or, failing that, at least to keep real control of key issues.  It is clear that international ethical and legal regulation of the use of algorithms needs to be adopted.  Otherwise, anarchy in the management of big data becomes a real threat, even to the survival of a corporation.  Since the challenges presented by technology and those presented by the climate seem similar, perhaps a single response to both could help:  combine humanism with a recognition of nature’s and technology’s risks, especially the risk of nuclear disaster.  The fear of destruction from one cause or the other may awaken us to a need for shared morality.

Is the world aware of all this?

Microsoft and IBM are, but not the whole world.  I am convinced that, as with the nuclear and climate threats, if we do not meet the challenges of artificial intelligence head on, we will not really understand them.  It is true that attention is increasing, but we need to take a combined ethical, legal and academic leap forward.  However, since technology grows very fast and responsibility less so, we must speed up on the responsibility side so that governments become more effective, not just in advancing technology but in achieving a deep understanding of the polis.  Incidentally, let me add that I think Europe has a special role to play in this situation because it has a humanistic patrimony that neither the Far East nor the New World has.  My use of the term humanism has not been accidental.

At a concrete level, Europe is considering a ban on the use of facial recognition technologies in the public or private sector for a period of time (three to five years), during which a methodology for assessing the impact of these technologies, and possible measures to mitigate their risks, can be identified and developed.  More broadly, we are waiting for the February 19 European Commission White Paper on the future of artificial intelligence.  In addition, the presence at the Academy’s upcoming Artificial Intelligence Workshop of the President of the European Parliament, David Sassoli, and of a representative of the UN Food and Agriculture Organization, as well as the interest of other governments through their embassies, demonstrates that we have struck a resonant chord.  I thank God for this progress.  The faster things move, the more we will encourage ethical and political commitment.

How important are individual institutions in all this?

Individual institutions are an important way to ensure that society does not think its problems can be solved by reliance on a “Savior.”  Thank God, there is only one Savior, and He has already saved us.  The problem to be dealt with is that the various individual institutions, and they are many, do not maintain effective relations among themselves and act in an uncoordinated manner.  If each one did its part cooperatively, the result would be greater awareness of the challenges that artificial intelligence presents, and a greater level of effective, coordinated activity.

As the Annual Meeting of the Pontifical Academy for Life next February 26-28 approaches, is the growth of the Academy’s ordinary, honorary, corresponding, and young researcher membership a sign of a new approach in the Catholic Church to the ethical challenges posed by technological progress?

These appointments are an implementation of the Pope’s strategy, that is, a broadening and deepening of our understanding of all life-related questions.  What is new is not the Academy’s purpose: Catholic doctrine, Gospel wisdom about the great gift of human life, continues to be the deep inspiration of our commitment.  It illuminates all aspects of human experience and is the foundation of the Culture of Life.  The good news of the Gospel about human life is offered to all as a source of inspiration and as a theme of cultural, political and social dialogue, but particularly to those who do not think exactly like us but who, like us, have life and human society at heart.

OFCS.Report