
AI Is a Tool, Superintelligence Is Fiction

This article was originally published at:


I recently read two very different articles about AI by Prof. John Lennox and Prof. Klaus Henning in the latest KCF magazine (1&2/2022). What I missed in these differing accounts was a synthesis between the two; their antithetical remarks left the reader somewhat disoriented. As director of the Institute for Faith and Science, I look for orientation in precisely such questions and would therefore like to contribute some theses to the discussion.

Five theses for correctly understanding the place and role of AI in our world.

Thesis 1: Christians should celebrate creativity and use technology

Technology is an expression of the creativity we have been given as bearers of God's image. Klaus Henning's article quite rightly encourages Christians to embrace new technological possibilities and to ask how they can use them in service of the kingdom of God. Just as the printing press once did, digitization and AI today offer a wide range of opportunities for this: on the one hand, new ways to spread the Gospel through new media; on the other, new ways to meet human needs, to alleviate suffering, and to support the weak.

Thesis 2: AI is and remains a tool, while superintelligence remains science fiction

However, Christians should not be swayed by false promises or fears. AI will never develop consciousness. AI can execute algorithms superbly, even flexible algorithms that adapt to a problem. But if you ask an AI why it performed a certain action, you will not get an answer (unless the programmer built one in).

AI has no reasons for its actions; it merely executes the rules programmed into it. This is a fundamental problem of AI, its so-called "theory-freedom": the rules are purely mechanical or electronic causes of the AI's behavior. That such behavior can sometimes look as if the AI were making conscious decisions is due not to any consciousness in the AI itself, but to the ingenuity of its programmer.
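
To make this tangible, here is a minimal sketch in Python (the thermostat rules are, of course, invented for illustration) of a rule-based "agent". From the outside it seems to decide and even to explain itself; inside there are only branches, and the "explanation" is a string the programmer attached to each branch.

```python
# Minimal, hypothetical sketch (rules invented for illustration):
# an "agent" that is nothing but condition-action rules. Its answer
# to "why?" is text the programmer attached to a branch, not a
# reason the machine has.

def thermostat_agent(temperature_c: float) -> dict:
    """A rule-based controller that may look deliberate from the
    outside, but only executes the conditions written below."""
    if temperature_c < 18.0:
        action, why = "heat_on", "Rule 1: temperature below 18 degrees."
    elif temperature_c > 24.0:
        action, why = "cooling_on", "Rule 2: temperature above 24 degrees."
    else:
        action, why = "idle", "Rule 3: temperature in the comfort band."
    # Delete the 'why' strings and the agent behaves identically -
    # it just can no longer "explain" itself.
    return {"action": action, "why": why}

print(thermostat_agent(15.5))
# {'action': 'heat_on', 'why': 'Rule 1: temperature below 18 degrees.'}
```

Scale the rules up by a few million, let some of them be learned from data rather than typed by hand, and the behavior becomes impressive; the structure of the situation does not change.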

God's Word presents human beings to us as living souls created in the image of God. This dignity, which to this day is the only reliable basis for personhood, will never belong to an AI, even if we construct it in our own image...


Thesis 3: AI is not a person and should not be given the legal status of a person 

AI enormously enhances the capabilities of the human using it, e.g. by rapidly evaluating huge amounts of data, but it does not assume responsibility, because it is not a person. A person has a will, a mind, and feelings. An AI can at best simulate all of this, if it is built cleverly enough. Like a zombie, it has no "inner life" of its own, no feelings. For it is a machine, not a living soul (or does man have the power to create living souls?).

It also has no will of its own, but only carries out the will of the one who programmed it, even if the latter has allowed for "flexible decisions" - or, more accurately, "different possible calculation results". AI only computes numerical values; it has no insight into content, into the semantics of a data set. Image recognition works completely differently for AI than for humans. Humans recognize a "coherent object with a meaning"; AI merely matches patterns and can easily be deceived, e.g. by a noise pattern barely perceptible to humans (cf. the "popular" CAPTCHA tests on the Internet).
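
A toy calculation can illustrate how fragile this pattern matching is. The sketch below (all numbers invented; a deliberately simple linear classifier stands in for a real image recognizer) shows that a per-pixel change far too small for a human eye to notice can flip the classifier's decision, because thousands of tiny nudges in the right direction add up.

```python
# Toy illustration (all values invented): a linear classifier as a
# stand-in for an image recognizer. Because it only matches patterns,
# thousands of imperceptibly small per-pixel changes can add up and
# flip its decision.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                        # number of "pixels" in the image
w = rng.normal(size=d)            # weights of the "trained" classifier
x = rng.normal(size=d)            # the input image, flattened

score = w @ x                     # e.g. score > 0 means "class A"

# Smallest uniform per-pixel nudge (against the current decision)
# that pushes the score just past zero:
eps = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(f"original score:    {w @ x: .2f}")
print(f"adversarial score: {w @ x_adv: .2f}")  # sign flipped: label flips
print(f"per-pixel change:  {eps:.5f}")         # tiny, "invisible" noise
```

Real image recognizers are far more complex than this linear stand-in, but the same additive effect underlies the well-documented "adversarial examples" that fool them.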

Since AI has no will of its own, it cannot legally be given the status of a person. Imagine a self-driving car running over a pedestrian who was behaving perfectly correctly on the road. In the court case, the car is found guilty because, after all, it is a legal entity in its own right. Instead of a jail sentence, the car simply receives a system update and drives on the road again after a week... Would this bring justice to the relatives of the deceased?

Thesis 4: AI opens up new means of power that are morally ambivalent

Yes, we can say that the digitization of marketplaces, for example, has fostered the global exchange of goods and services and vastly expanded the options available to consumers. But it has also rewritten the rules. The digital marketplace at Amazon, Alibaba, eBay, etc. is not a public space; the market itself is suddenly a company. And the company sets the rules. If this company decides that certain goods, e.g. books with certain religious content, may not be offered on its market, there is no court in which to contest this. Likewise, if the company identifies the best-selling products through data analysis, it can offer those products itself at a lower price and undercut all competitors. When a company becomes both the market and a player in that market, one can hardly expect fair market conditions.

In his article, John Lennox has already addressed the great dangers posed by AI-driven data collection and analysis in the hands of totalitarian governments and monopolistic companies. Privacy is a fundamental right of every person, and it is being increasingly eroded by the growing capabilities of AI. After all, anyone who does not consent to a provider's data protection terms cannot take part in the "game" and is left out. But ultimately this world and its markets are not a game; they are the place where God's justice should become concrete.

This becomes all the more problematic because, in a value-pluralistic world, no uniform moral understanding among global actors can be assumed, as the war in Ukraine shows us quite tangibly.


Thesis 5: AI is blind to its own mistakes and needs human supervision

The central problem of AI is that it has no moral concept of its own; it cannot know what it SHOULD do. So it is not surprising that chatbots become radicalized in their statements after falling into the corresponding social media bubbles, or that a loan-granting AI, trained on historical data sets, discriminates against women and people with a migration background precisely on the basis of that identity. The AI does not recognize this and cannot recognize it; that takes a human. How to teach morality to an AI at all is a fundamental problem, since AI lacks semantic, rational, and moral insight. Morality is more than just executing rules. Therefore, an "ethically sane" person (even if that is a legal person in the form of a corporation) must always be responsible for the consequences of AI.
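
The loan example can be made tangible with a small synthetic experiment. In the sketch below (all data invented; scikit-learn is assumed to be available), a model is trained on historical decisions into which a bias against one group was deliberately injected. The model reproduces the discrimination without complaint; nothing inside it flags the problem, and only a human audit of outcomes per group reveals it.

```python
# Synthetic illustration (all data invented): a model trained on biased
# historical loan decisions reproduces the bias. Nothing in the model
# flags this; a human has to audit the outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
income = rng.normal(50, 15, n)   # same income distribution for both groups
group = rng.integers(0, 2, n)    # 0 = majority, 1 = minority

# Historical decisions: one fair income rule for everyone, but the
# minority group was additionally denied 30% of the time (the bias).
fair = (income > 45).astype(int)
historical = np.where((group == 1) & (rng.random(n) < 0.3), 0, fair)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, historical)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The rates differ, although repayment ability is identical by construction.
```

The point is not this particular model: any learner fitted to biased decisions will, by design, imitate them.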

In this sense, enthusiasm for the new possibilities of AI should always be balanced against the fundamental, intrinsic limitations of machines and the moral ambivalence that such innovations bring. Humans have been given the mandate to rule over and subdue the world (Genesis 1:28), and to do so in the sense of the biblical shalom - and this includes AI.

Letter to the editor, KCF Magazine 1&2/2022, pp. 32-34:

Is AI the gamechanger of our way of life? (Klaus Henning)

By Dr. Alexander Fink,

Director of the Institute for Faith and Science

www.iguw.de 

info@iguw.de

Thumbnail and cover photo by Eric Butler on Unsplash

Posted Sep 27, 2022 in Going Deeper

