An Important Mathematical Oversight

The original intention for this website was to encourage public awareness of an historical medical crime, one that has remained a tightly-kept British state secret now for more than five decades. The matter is of enormous public interest, not least because the motivation behind the crime itself was that of advancing scientific research into areas that would come to provide the seminal knowledge behind much of the technological progress of the last half-century. My investigation into the matter inspired a parallel enquiry into some of the fundamental principles that underpin that scientific and technological impulse.

There are therefore two principal concerns of this website, and if there is acknowledged to be a substantive connection between them, that has inevitably to do with late 20th Century developments in science and information technologies, and more broadly with the idea of a burgeoning technocracy – the suggestion of a growing alliance between corporate technology and state power – one that might be judged to have atrophied the powers conventionally assigned to liberal-democratic institutions. This link therefore serves as a segue to emphasise the equal importance, to my mind, of what is going on in the Xcetera section of the site, so that that section should not appear, from the point of view of the other, as some kind of afterthought.

Xcetera is concerned with a problem in mathematics and science to do with the way we think about numbers. As a subset of the category defined as integers, elements in the series of the natural numbers are generally held to represent quantities as their absolute, or ‘integral’, properties. It is argued that this conventional understanding of integers, which is the one widely held amongst mathematicians and scientists adopting mathematical principles, is the cause of a significant oversight with regard to changes in the relations of proportion between numerical values, i.e., when those values are transposed out of the decimal rational schema into alternative numerical radices such as binary, octal, or hexadecimal.

On the page The Limits of Rationality, it is argued that the relations of proportion between integers are dictated principally by their membership of the restricted group of characters (0-9) as defined by the decimal rational schema; and that corresponding ratios of proportion cannot be assumed to apply between otherwise numerically equal values when transposed into alternative numerical radices having either reduced (as in binary or octal, for instance) or extended (as in hexadecimal) member-ranges.

This is shown to be objectively the case by the results published in Radical Affinity and Variant Proportion in Natural Numbers, which show that for a series of exponential values in decimal, where the logarithmic ratios between those values are consistently equal to 1, the corresponding series of values when transposed into any radix from binary to nonary (base-9) results in logarithmic ratios having no consistent value at all, in each case producing a graph showing a series of variegated peaks and troughs displaying proportional inconsistency.
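
For readers who wish to experiment with this kind of comparison themselves, the following is a minimal sketch only, and it rests on a stated assumption of my own: that ‘transposition’ means re-writing each decimal value as a digit string in the target radix and then reading that string as a decimal numeral before taking its base-10 logarithm (the published method may differ in its details). The sketch prints the successive logarithmic ratios for each radix so that they can be compared with the constant unit ratios of the decimal series.

```python
import math

def to_radix(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

# A decimal exponential series: successive log10 ratios are all exactly 1.
series = [10 ** k for k in range(1, 7)]

for base in range(2, 10):
    # Assumption: read the base-b digit string as if it were a decimal numeral.
    transposed = [int(to_radix(v, base)) for v in series]
    logs = [math.log10(t) for t in transposed]
    ratios = [y - x for x, y in zip(logs, logs[1:])]
    print(f"base {base}: {[round(r, 3) for r in ratios]}")
```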

These findings have previously gone unacknowledged by mathematicians and information scientists alike, but their import is that, while the discrete values of individual integers transposed into alternative radices will be ostensibly equal across those radices, the ratios of proportion between those values will not be preserved, as these ratios must be determined uniquely according to the range of available digits within any respective radix (0-9 in decimal, 0-7 in octal, for instance); one consequence of which is the variable relative frequency (or ‘potentiality’) of specific individual digits when compared across radices. This observation has serious consequences in terms of its implications for the logical consistency of data produced within digital information systems, as the logic of those systems generally relies upon the seamless correspondence, not only of ‘integral’ values when transcribed between decimal and the aforementioned radices, but ultimately upon the relations of proportion between those values.
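
The point about the variable relative frequency of individual digits is straightforward to illustrate. The sketch below is my own illustration, not drawn from the published results: it counts how often the digit ‘1’ occurs among the representations of the integers 1 to 1000 in each radix from binary to decimal, and the relative frequency varies markedly with the radix.

```python
from collections import Counter

def to_radix(n: int, base: int) -> str:
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

for base in range(2, 11):
    counts = Counter()
    for n in range(1, 1001):
        counts.update(to_radix(n, base))
    total = sum(counts.values())
    print(f"base {base}: relative frequency of '1' = {counts['1'] / total:.3f}")
```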

Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message. The process is taken to be neutral, faithful, transparent. While the assessment of quantitative and qualitative differences at the level of the observable world necessarily entails assessments of proportion, the digital encoding of those assessments ultimately involves a reduction, at the level of machine code, to the form of a series of simple binary (or ‘logical’) distinctions between ‘1’ and ‘0’ – positive and negative. The process relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption. Logic is assumed to operate consistently without limits, as a sort of ‘ambient’ condition of information systems.
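
As a concrete illustration of the reduction described above, the sketch below encodes a short text into a flat stream of binary digits and reassembles it. The round trip is exact at the level of the code; the question raised here is whether that internal fidelity licenses the further assumption of logical consistency with what the code represents.

```python
text = "The sun warms my face."

# Reduce the message to a series of simple binary distinctions
# (UTF-8 encoding, eight bits per byte).
bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

# Re-assemble the original message from the binary stream.
decoded = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")

assert decoded == text  # lossless at the level of the code
```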

In the Xcetera section I am concerned to point out however that the logical relationship between ‘1’ and ‘0’ in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits limited to two members. It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that, despite its apparent simplicity, may well come as a surprise to many mathematicians and information scientists alike).

As the proportional relationships affecting quantitative expressions within binary are uniquely and restrictively determined, they cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal, or hexadecimal). By extension, the logical relationships within a binary system of codes, being subject to the same restrictive determinations, cannot be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but they will certainly not be logically consistent with the world of objects.

The issue of a failure of logical consistency is one that concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific ‘integral’ numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

So that’s some of what Xcetera is all about… If you think you’re ‘ard enough!


Is Artificial Intelligence a Fallacy?

…or is your dog more intelligent than your smartphone?

The quest for a working model of artificial intelligence began in earnest with Alan Turing’s cracking of the Enigma code during WW2. That project had enlisted an electromechanical apparatus – the Bombe – in facilitating laborious complex logical operations upon coded enemy communications in a fraction of the time it would take a human, or team of humans, to perform the same operations. But Turing’s experimental concept of “thinking machines”[1] went much further than this, in theory. Turing’s idea was that all human intellectual operations could ultimately be broken down into series of logical procedural steps, and the implication of this idea (in the 1940s) was that what prevented machines from making convincing approximations to human intellectual activity was not a limitation in principle, but only one of the means available. By projecting forward advancements in technological materials and apparatus, such limitations might be overcome, at least in theory.

Turing opens his 1950 paper Computing Machinery and Intelligence with the question: “Can machines think?”[2] The question implicitly demands working definitions of the concepts machine and thinking. With regard to the latter, Turing anticipates a degree of resistance from common consent that so customarily human an activity as thinking might be conceived as an attribute of inanimate objects, and, rather than engaging in absurdity, abandons any further effort over the difficulty of the definition. Instead of risking a possibly fruitless meditation on the nature of thinking, he devised an experimental scenario, now known as the Turing Test, in which he proposed that an imaginary computer (one which in Turing’s era was, and still remains, technologically unachievable) plays an “imitation game” against a human competitor, responding to a series of questions from a remote interrogator. If the computer succeeds in responding ‘naturalistically’ enough to the interrogator’s questions that the interrogator is unable reliably to identify the origin of the responses as either human or machine, then, in Turing’s terms, it has come to deserve the attribute of a “thinking machine”:

“May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”[3]

Turing’s “imitation game” (which we might just as well characterise as a deception game) was intended to explore the capacity of machines to think in terms of an analogue, or functional equivalent, of human intellectual processes. But it was designed from a premise that consciously eluded the question of what the delimiting characteristics of human thinking are. In the absence of the technological advancements necessary to the fulfilment of his proposition, there is no way to test the pragmatics of Turing’s bold idea. Moreover, Turing perceived no practical limit upon his projections for those advancements in the future design and operational capacity of the imagined machines, which tends to place the privileged scenario within the category of science fiction. Framed in such concordant terms, the proposition becomes a self-fulfilling prophecy.

In view of Turing’s open anticipation of an objection that he nevertheless simply disregarded, his impulse seems conceited. If instead we allow ourselves the trouble of a precautionary objection over the differences between the imagined machine process, which manages to deceive by the appearance of thought, and the authentic human process upon which it is modelled, we might elucidate, rather than elude, the enquiry over the question of what it is to think. Turing’s apparent abandonment of scientific rigour with respect to the pragmatics of his proposition suggests that his motivation for sidelining these objections was influenced by an overriding anthropomorphic impulse – which is to say that he equates the imaginary appearance of thinking in machine behaviour with its human analogue unproblematically, with as little concern for reality as the way in which Thomas the Tank Engine, for example, is depicted with the imaginary features of a human facial expression.

As a matter of common consent, the term thinking implies what we as individuals experience as embodied consciousness. This comprises a variety of faculties: reasoning, recollection, imagining, desiring, calculation, reflection, association, etc. There is an experience, common to us all, of what it is to be a ‘thinking being’, one which also appertains to all our experience, including what we witness as evidence of thought in others. We are therefore unable to achieve a clinical distance independent of our own processes of thought from which to consider thinking (or its corollary the mind) as a discrete phenomenon in isolation. The question requires a self-reflective approach; i.e., one which refers its conclusions back upon the limits of its observations. This is possibly the reason for Turing’s unwillingness to approach the difficult question of the meaning of what it is to think, as there is no outside of the mind from which one could approach a definition with any prospect of empirical certainty.

The mind receives sense data as ‘inputs’ in the form of physical, bodily processes. The expressions or ‘outputs’ of thought processes in speech and gesture are also physical motor processes. The processes of thought, as well as the appearance of thinking, are therefore necessarily structured somatically. In a human context this is what gives to thinking its authenticity. This is perhaps what Spinoza meant when he wrote that: “[T]he object of the idea constituting the human mind is the body”[4]. We do not (normally) have direct access to another’s intimate thought processes, but we receive the other’s motor outputs as our own sensory inputs, usually involving a degree of linguistic codification. However, we can anticipate reciprocally the cognitive processes that have given rise to these expressions, by reverse analysis and identification with our own familiar processes of cognition. This depends upon our implicit awareness of a fairly restrictive array of parameters defining cognition, but which ultimately make it possible for individuals to cooperate and interact as a species.

Independently of this shared capacity there would be no grounds for establishing the norms, morals, codes of behaviour and understanding, necessary to the formation of any form of social contract. In order for such an ethics to be meaningful, and binding, thinking as the basis for action must take the form of embodied consciousness as a limiting principle; i.e., as one that is universally comprehended through the shared physical (somatic) structure and parameters which give rise to, and set limits to, our ability to think. The transfer, anthropomorphically and in science-fictional terms, of an analogue of human thinking to inanimate objects suggests no conceivable executive role for such an intelligence in which we could ever have confidence in its ability to make an ethical decision. The question posed by Turing in his opening gambit: “Can machines think?” is therefore one for which there is no ethical legitimacy.

In the decades following Turing’s influential publication, there was a significant impetus towards the development of a model for artificial intelligence, and its academic discipline, Cognitive Science – encouraged by its association with technological advancements in solid-state electronics and semiconductors. The academic origins of this and of Turing’s own theoretical and technological programme arose out of the largely Anglo-American empiricist philosophical tradition of logical positivism, which had emerged prominently in the years following the end of WW1. Logical positivism, or in Bertrand Russell’s definition, logical atomism[5], was fundamental to the development of Information Science, and was an attempt to analyse language into its simplest logical and factually ‘positive’ components – a point at which analysis could go no further. Its premise was that everything of meaning in language should be reducible to what is directly given in sensory experience. Natural language is typically ambiguous because it treats as ‘things’ concepts not directly derivable from positive experience (i.e., from direct sense impressions): concepts like ‘honour’, ‘friendship’, ‘life’, etc.; in other words, concepts which represent qualities, relationships, and processes, and which are apprehended by the intuition rather than as sensible objects. In informational terms, the substantive use of such concepts in natural language is ‘metaphysical’ and hence meaningless. Logical positivism sought a method of making language speak without recourse to what it understood as metaphysical ‘pseudo-statements’. Such statements were incongruous with empirical reality – in informational terms, effectively ‘noise’ – they could not be rendered as informational certainties. The correspondence of language with reality was to be found at the level of its simplest irreducible elements – the ‘bare bones’ of subjects and predicates – in Russell’s terminology, its atoms. In this fundamentalism of logic applied to language, logical positivism prefigured a method for rendering natural language into machine-readable informational units.

The supposition by proponents of artificial intelligence was that the rubric of human intellectual operations could be functionally equated with this process of the empirical extraction of logical elements, together with the efficient processing, or computation, of the derived informational content. This construed an ideal form for intelligence in which rational, logical procedures would be conducted in their pure state, as isolable informational routines (algorithms), unimpeded by logical inconsistency. In this view, human intelligence sets the goal, since it is perceived to be, at least potentially, superlative. However, in its everyday operations, human intelligence is usually flawed or hampered in some way – it is typically prone to distraction, and hence to error, or at least impedance (for instance, in the need for error-checking revision). It may also be disrupted by emotive impulse or fatigue. It is tempting therefore to project that a refined form of machine intelligence might eradicate human unreliability in information processing, by the reduction of intelligence to its purely logical, machine-readable, constituents.

Within a philosophical framework which generally elevates human rationality above other organically occurring systems, forms of animal intelligence are generally disavowed – animal behaviour usually being construed in terms of the effects of instinctual impulses rather than rational thought processes properly speaking. Instinctual animal impulses could be perceived as the antithesis, or antagonist, of rational thought processes. Conversely, then, human animals might to some extent be understood as being affected by residual instinctual drives, which interfere with the straightforward operation of logical processes, so that the rational human animal becomes tainted, in practical terms, in comparison with its ideal machine analogue. As a counterpoint to the characteristic irrationalities and foibles of the human cognitive condition, imagine a race of Vulcans, including Mr Spock, whose sole existential purpose seems to be to remind us mere humans of just how supremely rational we might be if we weren’t so fatally quirky. It is a somewhat tormented perspective which projects a model of intelligence from within us, so as to establish a measure for intelligence against which we must characteristically always come up short. I believe it is partly from this kind of tormented perspective that the impulse to fashion a form of artificial intelligence achieved widespread scientific and commercial support.

Partly, that is, because there were also clear instrumental reasons in support of such a project; not least because, even in their embryonic forms, these ideas had contributed in no small way to the Allied victory over the Nazis in 1945. In the high-stakes game of international conflict, the requirement for the smooth, fast, and efficient performance of information processing tasks would henceforth prioritise the quest for machine intelligence.

Key factors in these developments were therefore the drive for military intelligence; the development of sophisticated weapons technology, including nuclear weapons; and the associated push towards the exploration of space; all situated against the backdrop of the Cold War stratagems that so dominated the geopolitical landscape of the 1960s, ‘70s, and ‘80s.

At no point in this process does there seem to have been much, if any, reflective analysis of what constitutes, or what might differentiate, what we commonly experience as human intelligence beyond the logical positivist reduction of cognitive processes to their functional logical components. It was a process driven ineluctably by both military and commercial imperatives, with a characteristic absence of concern either for short-term or for long-term human consequences.

With respect to those concerns, what is overlooked or sidelined in the prospective exchange of human intelligence for machine intelligence? What of the differences between perception and cognition; or between reason and intuition? How do we explain the faculties of association, or of judgement, or the intuitive categories of the understanding, in terms of a mere logical sequence of processing steps?

Take the following scenario as an example: when the sun emerges from behind a cloud, I feel the warmth of the sun on my face. The logical components of this scenario are: the sun shines; I experience warmth on my skin. But the relation of causality between these two events entailed by my cognition: the sun warms my face, is not inferable purely on the basis of their logical content – it requires an intuitive association to be provided by my understanding. A system that operates independently of intuitive understanding must in addition ask a series of questions in order to eliminate any possible alternative explanation for why my skin got warm (I experienced sudden embarrassment; someone fired up a barbecue in my vicinity; etc.), in order to ascertain with only relative certainty that it was due to the sun’s rays. A system operating on the basis of machine rationality is intuitively inert, and therefore its certainty over questions of causality such as the one described can only be relative to the degree to which it is efficiently pre-programmed to assess all of the possible conditions pertaining to any possible scenario in which these two effects may be conjoined; and also to take account of the fact that the sun’s emergence will not always, by necessity, have the effect of warming my skin – it depends on other factors contingent upon the unique situation. Such a pre-programming procedure would take a potentially infinite length of time, and still only ever achieve a degree of probabilistic (relative) certainty, in place of the absolute certainty entailed by my response: the sun warms my face, which was available to me in an instant.
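
To make the contrast concrete, here is a deliberately toy sketch of the eliminative procedure such a system would have to run. The candidate causes, weights, and checks are entirely hypothetical inventions of my own for illustration; the point is that, however many checks are added, the output remains a relative ranking rather than the instantaneous certainty of ‘the sun warms my face’.

```python
# Hypothetical candidate explanations for 'my skin feels warm',
# each with an assumed prior weight (illustrative values only).
candidates = {
    "sun emerged from cloud": 0.70,
    "barbecue lit nearby": 0.15,
    "sudden embarrassment": 0.10,
    "other, unmodelled cause": 0.05,
}

# Pre-programmed checks the system would need for this one scenario.
observations = {"sun visible": True, "smell of smoke": False, "social setting": False}

def score(cause: str) -> float:
    w = candidates[cause]
    if cause == "sun emerged from cloud" and observations["sun visible"]:
        w *= 2.0
    if cause == "barbecue lit nearby" and not observations["smell of smoke"]:
        w *= 0.1
    if cause == "sudden embarrassment" and not observations["social setting"]:
        w *= 0.1
    return w

scores = {c: score(c) for c in candidates}
total = sum(scores.values())
for cause, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {s / total:.2f}")  # only ever a relative (probabilistic) ranking
```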

Or consider the difference between intuitive judgements of universality and particularity. For instance, imagine a situation where someone expresses hope (in the universal), without expressing optimism (in the particular). A man might say: “I hope to see my children”, and by this statement may be indicating that he expects (with a particular intention) to see his children at some specific point in time, but which may be conditional on some other impending circumstance; or he may equally be implying that he hopes (in general) to see them perhaps against all possible odds (i.e., without any particular expectation). Someone having knowledge of the man’s circumstance, or who was engaged in a conversation with him, would likely know intuitively which particular inference was appropriate. We routinely exercise these kinds of intuitive judgements in almost every aspect of our lives, without necessarily being consciously aware that we are doing so. If we artificially extract the logical content of the statement “I hope to see my children” however, there is no effective means to distinguish which of the two alternatives is inferable. Such abstractions are typical of the way in which information systems treat data objects, and although one could argue it is merely the lack of background information that prevents a ready distinction, the utterance has no logical interpretation unless it is linked to a particular intention. The tendency in fact is for logical interpretations to treat all such statements as cases in the particular, at the expense of the universal.
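
The underdetermination can be shown schematically. In the sketch below (a hypothetical representation of my own; no actual information system is being quoted), the ‘extracted’ logical content of the utterance is identical under both readings, so nothing in the stored record can select between them.

```python
# The logical content an information system might extract from the utterance.
statement = {"agent": "man", "attitude": "hope", "event": "see(children)"}

# The two readings differ only in parameters the extraction never captured.
reading_particular = dict(statement, time="t0", expectation="specific")
reading_universal = dict(statement, time=None, expectation="none")

# Every extracted field is identical under both readings: the record alone
# cannot distinguish the particular from the universal sense.
assert all(
    statement[k] == reading_particular[k] == reading_universal[k]
    for k in statement
)
```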

It seems to me, therefore, that synthetic judgements of causality entailing absolute certainty, of the kind in the first example, and intuitive inferences of universality or particularity, of the kind in the second, while indispensable elements in most kinds of human intellectual activity, are beyond the scope of computational devices operating on the basis of machine rationality. Against the hyperbolic attribution of the term ‘artificial intelligence’, an accurate definition of the method of operation of such systems would require a more prosaic and realistic term, such as ‘parallel logical processing’; for in the seventy years that have passed since the birth of the concept of ‘thinking machines’, no such system has achieved anything approximating to the degree of sophistication inherent in the processes of human understanding with which we are all familiar. While admittedly a prosaic attribution such as the one suggested lacks all of the excitatory commercial potential commonly associated with ‘Artificial Intelligence’ (a potential which serves to blind us to its inherent failings), it allows for an essential distinction to be drawn between human and machine intelligence – one which emphasises the invaluable non-logical, non-linear characteristics that distinguish the former.

As a theoretical construct, artificial intelligence began as an attempt to model or approximate the faculties of the human intellect, while deprecating its essential defining characteristics. It is a sad consequence that, in the eyes of many, it also became imbued with the potential of surpassing human intellect, while at the same time failing to appreciate the fluidity and sophistication that distinguishes the latter. If we propose that a machine is capable of outsmarting people, yet one which in practice fails even to approximate the most common practices of human understanding, we risk bracketing out human critical faculties from the sentient business of our daily routines. The logicist premise of artificial intelligence is one that underestimates human critical faculties, yet within the terms of that premise cannot even match them, as those faculties must be considered to be ‘supralogical’. It remains therefore a fallacy rather than a reality; and one which, for so long as we remain ensnared by its false promises and charms, will inevitably ‘dumb us all down’.

August 2014
(revised: 29 May 2024)


Footnotes:

  1. Turing, Alan, Computing Machinery and Intelligence (October 1950), Mind LIX (236), p.436: http://somr.info/lib/Mind-1950-TURING-433-60.pdf
  2. Ibid., p.433.
  3. Ibid., p.435.
  4. Spinoza, B., Concerning the Nature and Origin of the Mind, in his Ethics (tr. Boyle, A.; ed. Parkinson, G.H.R.), J.M. Dent & Sons, 1989, p.48.
  5. Russell, Bertrand, The Philosophy of Logical Atomism (1918), Routledge, 2010. Modern positivism is a development from Enlightenment empiricism, which emerged as a principally British philosophical tendency in the two centuries prior to the Industrial Revolution, most notably in the philosophy of Francis Bacon, David Hume, and John Locke. For further discussion of positivism in relation to theories of mind, see the later sections of: Mind: Before & Beyond Computation.