
The Artificial Intelligence Question:
A Dooyeweerdian Resolution

Computer = Human ?

The Artificial Intelligence (AI) question is whether computers can, in principle, be like human beings.

Introduction and Summary

Ever since it was posed, two camps have formed, saying "Yes" and "No". The debates are interminable, never reaching resolution, with each side seeming to talk past the other. This page briefly examines the nature of the debate - or rather debates, because there seem to be at least three ways in which the debate is fought. Each of the three seems to relate to one of Dooyeweerd's ground-motives. The debates may be summarised as:

Summary of Grounds for Debates on the AI Question

Humans                     | Computers                   | Ground-motive                          | Typical author | Aspect pair
Mind                       | Matter                      | Greek: Form-Matter                     | Many           | Analytic
Biological causality:      | Physical causality:         | Scholastic: Nature-Grace               | Searle         | Organic
understanding, semantics   | symbol manipulation, syntax |                                        |                |
Free                       | Determined                  | Humanistic: Nature-Freedom             | Alan Newell    | Psychic
Aspectual functioning (S)  | Aspectual functioning (O)   | Biblical (Creation, Fall, Redemption)  | Dooyeweerd     | All aspects

(Click links for explanations.)

Dooyeweerd's fourth ground-motive, that of Creation, Fall, Redemption, might bring resolution to the debate, because it allows a pluralistic idea of meaningfulness, which Dooyeweerd explored to generate a suite of aspects. Those are the aspects mentioned in the table.

The rest of this page discusses the AI question and explains how that table is arrived at.


On Comparing Computer to Human Being

The famous Turing Test suggests we answer the AI question by comparing the behaviour of computer and human. We are faced with a screen and we enter questions, to which responses are given. On the basis of the replies, we have to work out whether the responder is a computer or human. If we cannot, when the responder is a computer, it has passed the test.

However, the Turing Test restricts us to symbolic expressions, which privileges explicit knowledge over tacit, background knowledge, and excludes whole ranges of tacit knowledge from the comparison. Even if tacit knowledge is unearthed (for example, using Dooyeweerd's aspects to reveal hidden issues, e.g. with MAKE), the test leaves open what kinds of behaviour must be compared. More problematically, the test gives little insight into why (on what grounds) computers can, or can never, be like humans. Like all behaviourist tests, it is fundamentally limited.

It is the question of the grounds on which computers might be like humans that has occupied most of the debates. Three main grounds for debate may be detected.

In each, two properties, X and Y, are compared, which are exhibited fundamentally by humans and computers respectively. The AI question is to what extent and in what ways X = Y.

"Computers are matter; Humans are mind"

A common debate is about the substance of the computer and the human, in the Aristotelian sense: X = mind (human) and Y = matter (computer). Proponents of strong AI argue that mind can be reduced to matter, that it is 'only' a label for matter functioning in particular ways. Opponents deny this.

What is sometimes hailed as a middle ground is the notion of emergence: that out of complex functioning of matter emerge patterns of behaviour that could not be predicted from the laws of matter alone. "Could not be predicted" is problematic, however, because it conflates "could never be predicted in principle" with "we cannot predict it because it is too complex".

Whatever emergence means, a deeper problem remains: Only by already knowing beforehand what mind is, can we recognise the newly emerged behaviour patterns as mind. What is meant by 'mind' is 'smuggled in'. Ultimately, most notions of emergence involve smuggling in meaningfulness from outside, because emergence is presupposed rather than critically probed.

The debate based on matter and mind remains unresolved.

"Computers operate physically; Humans operate biologically"

Some Roman Catholic or other religious thinkers argue that human beings possess and are animated by some kind of 'divine spark', which computers lack. Debates over this make explicit appeal to dogma.

A similar reasoning may be found among secular thinkers like Searle [1990], though employing different terminology. His 'divine spark' is biological causality, which operates in humans, as opposed to physical causality, by which computers operate. Biological causality enables genuine understanding and the processing of information; physical causality enables mere manipulation of formal symbols.

But Searle offered no sound basis for saying that the computer manipulates formal symbols that would preclude saying the computer processes information. If we open up the (physical) case of a computer we find no symbols there being manipulated! On what grounds does Searle claim that symbols operate by physical causality? On what grounds does "information processing" require biological causality? Searle did not say. He held this as a dogma.

Searle's argument is part of his Chinese Room thought-experiment, with which he tries to demolish the strong AI case. The responses and Searle's rejoinder are reviewed in Boden [1990]. The debate over divine spark or mysteriously distinct causalities continues unresolved.

"Computers are determined; Humans are free"

In a third debate, X is non-determinacy and Y is determinacy; humans are free, but computers are determined. Some merely hold as a dogma that all freedom is illusory, others suggest that even physical behaviour is non-determined, and yet others resort to philosophical idealism, claiming that determinacy is illusory.

Newell [1982] tried a more sophisticated argument. Having argued for the irreducibility of the knowledge level to the symbol level of computers, Newell argued that the behaviour of a system (computer or human) that is determined at the symbol level can be indeterminate at the knowledge level. His argument was that if we define knowledge contained within statements as the total of whatever could be deduced from them (!) then this is infinite, so a knowledge level description of the agent and its behaviour will be 'radically incomplete' and must be augmented with some symbol level description (knowledge of its algorithms). This argument proved "rather hard to understand" [Newell 1993, 33]. Newell's definition of knowledge as infinite logical closure is so completely at variance with everyday experience of what knowledge is that its veracity must be questioned. Other problems with Newell's argument are found in Basden [2008].

Since Newell restricted his argument to the observer's ability to predict the agent's behaviour, and ignored the issue of whether behaviour can be non-determined as such, his argument does not really address the AI question. The debate around determinacy continues unresolved.

Dooyeweerdian Critique of the AI Debates

In all three debates, we have two characteristics, exhibited by humans and computers respectively, X and Y. Supporters of strong AI, wanting to argue that computers can be like humans, hold X can be reduced to Y or can emerge from Y by some means that requires no outside help. Opponents of strong AI, wanting to argue that computers never can be like humans, hold that X and Y are always incompatible. All three debates are ideological in nature, as Colburn [2000, 80-81] put it:

"If the idea that mental processing can be explained by computational models of symbol manipulation is an AI ideology, there seems to be a countervailing ideology, embodied by Searle and many others, that no matter what the behavior of a purely symbol manipulating system, it can never be said to understand, to have intentionality."

Dooyeweerd [1979] argued that debates like these, which refer to fundamental ideas like mind or freedom, are driven by ground-motives that influence what the thinking community believes the alternatives are. The three AI debates reflect the three dialectical ground-motives that have driven Western thought over the past 2,500 years -- those of form-matter, nature-grace and nature-freedom -- positioning computers and humans at their opposing poles. This alignment is shown in the table above, columns 1-4. The table also contains material referred to below.

Dooyeweerd argued that dialectical ground-motives constrain theoretical thought, at a deep level, because they treat the polar oppositions as inescapable 'truths', which form the grounds on which debates take place. So supporters of strong AI are faced with the challenge of arguing either that one pole is illusory, or that it can be reduced to the other -- or they just ignore one pole -- but they are unlikely to succeed because the terms of debate already presuppose the polar opposition they are trying to deny. Opponents of strong AI are equally challenged because, though they have the dualism on their side, it is not a truth but a presupposition, and everyday experience is not constrained by that presupposition. It throws up examples of the power of computers that embarrass them.

Dooyeweerd argues that these dialectical presuppositions are expressions of the Immanence Standpoint, which has misled philosophical thinking in the ways we try to understand the nature of things. If Dooyeweerd is correct, it is not surprising that this has led the AI debate over the past 50 years into blind alleys. This is why the debate remains unresolved. Here is the table again:

Summary of Grounds for Debates on the AI Question

Humans                     | Computers                   | Ground-motive                          | Typical author | Aspect pair
Mind                       | Matter                      | Greek: Form-Matter                     | Many           | Analytic
Biological causality:      | Physical causality:         | Scholastic: Nature-Grace               | Searle         | Organic
understanding, semantics   | symbol manipulation, syntax |                                        |                |
Free                       | Determined                  | Humanistic: Nature-Freedom             | Alan Newell    | Psychic
Aspectual functioning (S)  | Aspectual functioning (O)   | Biblical (Creation, Fall, Redemption)  | Dooyeweerd     | All aspects

A Dooyeweerdian Resolution of the AI Debates

The three AI debates are over whether X and Y equate to, emerge from or differ from each other. What is common to both sides in all three debates is that X and Y are treated as self-dependent or foundational, that on which all else depends and which needs no reference to anything other than itself. Whether it is substance, causality or behaviour, it is treated as 'being-in-itself'.

Supporters of Heidegger might immediately interject that the AI question might be answered by reference to 'being-in-the-world': the computer's users and developers as its 'world'. Though this contains important insight, it makes it difficult to understand possibilities, including the possibility of being like or surpassing humans. So it cannot truly address the AI question.

In a short section on the dynamic nature of the reality of things, Dooyeweerd [1955, III, 109] sums up these approaches. One seeks a foundation in "a supposed thing-reality", the other seeks it in "the whole temporal world", but to seek it in either "is to seek it in a 'fata morgana', a mirage". As an alternative to both, Dooyeweerd suggests grounding the being of the computer in meaningfulness.

X and Y may be treated as aspects in which computers function. Each debate may be seen as over a pair of aspects, as shown in column 5 of the Table. Since each aspect is irreducible to others, it can be tempting, when driven by a dualistic ground-motive, to set them in opposition to each other.

In reality, however, the aspects are not opposed but work together in mutual inter-dependency and analogy, and each allows us to say something valid about both computers and humans. Seeing computers in terms of aspects allows us to capitalize on multi-aspectual understanding. Instead of monistically seeking a single ground for supporting strong AI, or a dualistic ground for opposing it, Dooyeweerd provides a pluralistic ground for understanding AI philosophically. This understands computers as multi-aspectual beings, with multi-aspectual functioning as subject and object in relation to human functioning and aspectual law.

This provides a new basis for discussing both the difference and similarities between humans and computers. The similarity lies in that it is genuine functioning. The difference lies in whether the functioning is subject or object. Consider the following statements:

As genuine aspectual functioning both (1) and (2) are valid. As subject-functioning, (3) is valid but (4) is not valid, because only humans can function as subjects in the analytic aspect (which I am here taking 'think intelligently' to be). However, (5) is also valid, because it explicitly identifies that the subject-functioning is human functioning while the computer's activity is object-functioning.

Each aspect enables a different type of 'causality' (repercussion), which accounts for the fundamental difference between biotic and physical causalities. Though Searle holds the dogma that human and computer are fundamentally different, in Dooyeweerd, as we have seen, while difference is maintained and accounted for, in terms of subject-functioning, the similarity is also maintained and accounted for, in terms of aspectual functioning as a whole.

Chinese Room: Basden [2008] argues that all the responses to Searle's Chinese Room argument miss something obvious. What Searle believes is his 'killer' challenge -- where is the understanding of Chinese? -- has an obvious answer that all seem to overlook.

Benefits of a Dooyeweerdian Approach to the AI Question

Thus Dooyeweerd enables us to address the AI question, "Computer = Human?", not by discussing only "computer" and "human", but by discussing "=".

How might such a Dooyeweerdian approach to the AI question benefit us? If we ask what effects the current AI debates have, we might answer: they provide entertaining discussions in the media, they motivate and direct funding to some research into how to make computers more like humans or surpass them (which might divert funding from other things), or, when opponents are in the ascendency, they might foster some cynical complacency about what computers can do. Do we see much more than those? Is this not the kind of contribution that ideological debates afford?

The Dooyeweerdian view, by contrast, affords grounds for more precise deliberation, based on Being-as-meaning. Reference to aspects can bring clarity to debates by separating out what we mean by 'intelligence', 'understanding', etc. and by identifying what is meaningful in each ideological position and setting them in relation to each other, so that we no longer "talk past each other" [Colburn 2000]. Philosophically, Dooyeweerd's law-subject-object idea lets us understand the computer in relation to users and developers without obscuring it, and each aspect opens up a different area for debate about whether computers might equal or surpass human beings. In this way we can take the humans into account while also allowing us to consider what possibilities computers offer. Moreover, since each aspect contains innate normativity, Dooyeweerdian thought might facilitate the current move to discuss ethical issues of AI. In these ways, a Dooyeweerdian view can move the AI debate closer to the everyday experience of users and developers.

The Dooyeweerdian approach can provide specific motivation and stimulation for a range of AI thinking and research, which may be coupled with research into the development of ICT features, ICT use and ICT development.

Example: That computers have been demonstrated to equal or surpass humans in the games of Chess and, recently, Go may be explained, not by some general, amorphous 'intelligence', but by reference to the analytic and formative aspects, and to the fact that this is computers' object-functioning, which inherently involves human beings (developers and users). Such computers are able, in object-functioning, to make distinctions, process them and structure them, and hence 'learn' and 'plan'.

Example: That Google search can find items with similar meaning to the words we used demonstrates some capability in the lingual aspect, again object-functioning, which rests on the human subject-functioning of developers and users. In linking to a globality of meaningfulness, ICT as a connected network, rather than as individual computers, might surpass humans.

The possibility that computers might surpass us need not threaten human life, aspirations or responsibility, nor our mandate to open up the potential of the aspects, for two reasons. One is that equalling or surpassing is confined to specific aspects. Though the range of such aspects might widen in future, full functioning in each aspect involves not only dependency on earlier ones, but also functioning in later aspects. Full, rich functioning in an aspect involves functioning in all others. This is why computers might play Chess but not yet be fully creative in real life.


Basden A. (2008) Philosophical Frameworks for Understanding Information Systems. IGI Global, Hershey, PA, USA. ISBN: 978-1-59904-036-3.

Boden MA (ed.) (1990) The Philosophy of Artificial Intelligence, Oxford University Press.

Colburn TR, (2000) Philosophy and Computer Science M.E. Sharpe, Armonk, New York.

Dooyeweerd H. (1955), A New Critique of Theoretical Thought, Vol. I-IV, Paideia Press (1975 edition), Jordan Station, Ontario.

Dooyeweerd H, (1979), "Roots of Western culture; Pagan, Secular and Christian options", Wedge Publishing Company, Toronto, Canada.

Newell A. (1982) "The knowledge level" Artificial Intelligence 18:87-127.

Newell A., (1993), "Reflections on the Knowledge Level", Artificial Intelligence, 59:31-38.

Searle J. (1990) "Minds, Brains and Programs" pp. 67-88 in Boden MA (ed.) The Philosophy of Artificial Intelligence, Oxford University Press; first published 1980 in Behavioral and Brain Sciences 3:417-24.

This page is part of a collection of pages that links to various thinkers, within The Dooyeweerd Pages, which explain, explore and discuss Dooyeweerd's interesting philosophy. Email questions or comments would be welcome.

Written on the Amiga and Protext.

Copyright (c) at all dates below Andrew Basden. But you may use this material subject to conditions.

Created: 4 November 2016 Last updated: