Integers and Proportion

In Measure & Rule it was stated that science, as part of its general empirical approach, presents quantitative information as if it were neutrally derived, on the basis of a hypothetical 1:1 relationship between the units of measurement and the object measured. This is the basis of scientific objectivity – the units of measurement are understood as being qualitatively neutral. The mechanical understanding of natural phenomena inevitably involves attention to qualitative criteria, and to assessments and descriptions involving the use of verbal, sometimes figurative, language, which entails a degree of subjective interpretation. However, we can resolve the uncertainty implicit in qualitative description so long as the units by which we measure natural objects remain proportionally invariant. Proportional invariance in quantitative assessment is guaranteed where numbers are taken as indices of absolute value. Hence the importance of the concept of numbers as integers, where value appears as an intrinsic, self-contained property: having no bearing or influence upon, and likewise being uninfluenced by, other adjacent or proximate values. As I have argued elsewhere (see the Analysis section of Radical Affinity etc., and other pages in this section), proportional invariance is not only counter-intuitive; the principle is also undermined empirically: when the logarithmic differences between, for instance, sequential octal exponentials are represented graphically, those values do not produce a horizontal straight line (as they do within the decimal series), but instead a series of variegated peaks and troughs.
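The divergence described above can be sketched numerically. The following is a minimal illustration, on the assumption (one reading of the procedure in Radical Affinity etc.) that the comparison proceeds by writing successive powers of ten as octal digit strings, re-reading those strings as decimal numerals, and differencing their common logarithms; the function names and range are illustrative, not the author's own method.

```python
import math

def to_base(n, b):
    """Return the base-b digit string of a positive integer n."""
    digits = []
    while n:
        digits.append(str(n % b))
        n //= b
    return "".join(reversed(digits))

# Octal 'correspondents' of successive powers of ten: the base-8 digit
# strings of 10^1 .. 10^7, re-read as decimal numerals (an assumption
# about the procedure, made for illustration).
correspondents = [int(to_base(10 ** k, 8)) for k in range(1, 8)]
logs = [math.log10(v) for v in correspondents]
diffs = [round(b - a, 4) for a, b in zip(logs, logs[1:])]

# Within decimal, the same differences would all equal exactly 1;
# here the sequence varies — peaks and troughs, not a straight line.
print(diffs)
```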

While I do not profess to a complete or adequate understanding of the vagaries revealed in the series presented in Radical Affinity etc., an initial (and unavoidable) interpretation of these findings must be that, in the comparisons of identical numerical values across diverse number-radices, such discrete values, rather than behaving according to the definition of integers, with intrinsic self-contained properties (and by implication in relations proportionally consistent with other discrete named values), seem to exhibit proportional characteristics which derive solely and uniquely from the particular base (or radix) in which they are currently represented. These characteristics must therefore be determined extrinsically (in contradistinction to the definition of an integer), and the determining factor must be (as it is the principle of variance which defines the current working number-radix) the restrictive array of available digits through which values may be represented (0-9 in decimal, 0-7 in octal, for instance).

In the Analysis section of Radical Affinity etc., I outlined a 'technical' critique of proportional invariance (or 'rational proportionality', as it is frequently referred to there), on the basis of the varying relative frequency of individual digits with respect to the range of total available digits prescribed by the terms of the current working numerical radix. For instance, a '1' (or a '0') in binary exhibits a higher relative frequency than its corresponding instance in decimal, or in any other numerical radix. Similar relative comparisons can of course be made between other available digits in any combination of radices. The case of binary is exemplary as it opens up the possibility of interpreting the either/or relationship of '1/0' in terms of the 'true/false', or 'positive/negative' distinction; hence, a distinction based on qualitative properties (at their simplest), rather than strictly quantitative ones. If this tendency towards qualitative potential is extrapolated in principle from the binary example, according to the varying relative frequency of digits across a diversity of number radices (without necessarily appearing in such clear-cut terms as in the binary example), then we have an analytical basis for understanding the qualitative potential of 'integers' (if indeed we are content to continue with this designation, with its overtones of 'intrinsicness'), and a justification for a rejection of their conventional interpretation as bearers of intrinsic (absolute) value.
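The point about relative frequency can be made concrete with a simple tally. The sketch below (an illustration of mine, not drawn from Radical Affinity etc.) counts how often each digit occurs across the base-b representations of the integers 1 to 10,000: a '1' accounts for roughly half of all binary digits, but only about a tenth of all decimal digits.

```python
from collections import Counter

def digit_frequency(base, upper=10_000):
    """Relative frequency of each digit across the base-`base`
    representations of the integers 1..upper."""
    counts = Counter()
    for n in range(1, upper + 1):
        while n:
            counts[n % base] += 1
            n //= base
    total = sum(counts.values())
    return {d: counts[d] / total for d in sorted(counts)}

# The digit '1' dominates in binary but is one voice among ten in decimal.
print(digit_frequency(2)[1], digit_frequency(10)[1])
```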

Does the issue of relative frequency exhaust this critique? In Radical Affinity etc. I had also outlined an 'epistemological' critique of rational proportionality, which considered the nature of integers as concepts (rather than as phenomenal objects), and their rootedness in the requirement for the cognitive organisation of concrete categories, and their association thereby with customary expectations of scale with respect to those individual object-categories. The implication is that unless we consider integers as entirely abstract entities, and assume for them some kind of transcendental objectivity, they will always bear some qualitative dependency in relation to the categories they organise and measure. While the Platonic role of integers as abstract entities may satisfy the discourse of pure mathematics, it cannot be applied to the measurement and quantification of real-world objects without at the same time sacrificing the ability to maintain adequate categorical distinctions, based upon customary criteria of use and scale.

There is a further aspect to the 'technical' critique referred to above, but which concerns the issue of an integer's relative position, rather than its frequency. The conventional definition asserts integers as stable 'wholes' of fixed intrinsic value, so that the movement between each integer and the next is a series of rational 'jumps' in value, each equal to the unit '1'. This permits individual expressions of value to be considered as stable isolable wholes, whose values are unaffected by adjacent or proximate values. If, however, we consider a process of the simple sequential counting of objects, this implies a cognitive process (rather than simply a mechanical one): each counting instance entails an awareness of the preceding value (by memory), and the succeeding value (by projection); so that a counting instance of, say, '5' involves a conceptual oscillation between each of the adjacent values '4' and '6'. In this sense our instance of '5' loses its rational stability as a fixed index of value, and emerges as a dispositional effect of each of its adjacent values. In these terms it should not be discounted that it perhaps also makes an important difference, in terms of its dispositional properties, or its relative potential, whether a '5' is the maximal available integer (as in the case of base6), or whether it occupies a near-median position, as in the case of decimal. Furthermore, and independently of any exercise of consecutive counting, we might also wish to consider the dispositional effects of non-consecutive adjacent values within any sequence of digits.

Such an analysis may go some way towards a fuller understanding of the examples of variable proportion exhibited in Radical Affinity etc.; but it should already be clear that these features of numerical behaviour cannot be explained in terms of unbounded rational principles – only by acknowledging that rationality operates within strictly circumscribed limits: the ratios pertaining between integers within a decimal system, while internally consistent, are not proportionally consistent with those pertaining within alternative numerical radices. We are accustomed, for most quantitative purposes, to working within the decimal rational schema, but we misconstrue the conditions of proportionality operating restrictively within the decimal schema if we assume that those conditions operate universally across diverse numerical radices, or, as a correlate of this assumption, that rational proportionality is somehow given in the very act of measuring.

This should not perhaps be too surprising to us when we consider that numbers are primarily conceptual tools of the human understanding, deriving originally from the need to quantify and manage objects having at least notional concrete existence, and which therefore entail qualitative dependencies. Numbers do not independently occupy any physical dimensions – they are mental concepts, and are therefore bound by intuitive as well as rational processes. What makes the idea seem at first strange (irrational, illogical) is the legacy of nearly 400 years of modern scientific attempts to reduce Nature to a set of objective-mechanical principles, and the requirement of an absolute system of measurement in order to achieve this. The twin imperatives of measurement and calculation predisposed Enlightenment Science and its descendants to caricature the qualitative determinants of numbers as superstitious, fanciful, and archaic.

Logarithms – The Imperative of Calculation

Logarithms were first developed as late as the 17th century by John Napier (1614), and later by Leonhard Euler.1 The inspiration for this development was the desire to make complex numerical calculations simpler and more manageable, i.e., by reducing geometric progressions (number series based upon multiplication or division) to arithmetic progressions (those based upon addition or subtraction). In the absence of electronic aids to calculation, this made it possible to systematise hand-written complex calculations (and especially, for instance, financial calculations of compound interest) by reference to logarithmic tables. So, where one needs to perform complex calculations involving large integers, large differences between values of x may be expressed as much smaller differences in terms of Log(x).

The function relies on the principle that numerical differences can be reduced or 'crunched' according to a set of essential ratios. The most useful of these ratios are common logarithms (log10), binary logarithms (log2), and natural logarithms (loge) – e being the 'irrational' mathematical constant developed by Euler, defined as the "asymptotic quadrature of the hyperbola", and which is approximately equal to 2.71828 (loge(x) is the inverse of the exponential function e^x). The logarithmic roots of diverse number bases (logb) are understood to be perfectly derivable from 'common' (decimal) logarithms, according to the formula: logb(x) = log10(x)/log10(b).
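The change-of-base formula can be checked directly; the snippet below is a minimal illustration using Python's standard math library (the values chosen are arbitrary).

```python
import math

# The change-of-base identity: logb(x) = log10(x) / log10(b).
x, b = 100.0, 8
via_decimal = math.log10(x) / math.log10(b)
direct = math.log(x, b)  # Python's own base-b logarithm

# The two routes agree to floating-point precision.
print(via_decimal, direct)
```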

The function of logarithms is perhaps best illustrated through the idea of scaling invariance, and the application of this principle to the observation of self-similarity in the repetition of natural forms in ascending or descending scales, i.e., as fractals. As a fractal replicates, or cascades, its entire global structure throughout the distribution of its parts, logarithms may be used to express the proportional identities relating successive magnifications of the fractal – for example, in the ratios of scale between the segmented chambers of the Nautilus shell. Indeed, Nature may exhibit such regularities of proportion across scale in the physical properties of natural forms, both organic and inorganic; however, such features are characteristics of the structural and developmental properties of natural (physical) forms. On the other hand, the direct declension of common (decimal) logarithmic ratios of proportion, as transcendent properties of abstract numerical scales, across diverse number radices, is not a physical observation, but a metaphysical tendency (and one which the exercises in Radical Affinity etc. now show to be empirically invalid). That is, it is not so much a reflection of proportional invariance, but an enforcement of it. After all, an integer, understood as a self-contained entity, does not by itself occupy any physical dimensions. Hence it must be understood primarily as a conceptual item, formed out of the requirement to organise qualitative categories with respect to intuitive criteria of scale and use. Proportional invariance, therefore, asserts qualitative indifference a priori, in a way which is radically, and destructively, counter-intuitive.

The Historical Context

Napier's development of the logarithmic principle was the result of his spending most of the last twenty years of his life calculating:

N = 10^7(1 − 10^-7)^L

for values of N between 5 and 10 million, in order to reveal L as a ratio, which he called the 'artificial number', and later 'logarithm' (literally meaning 'proportional number'). So, the logarithmic principle, and the later development of Euler's constant which it engendered, entailed from the beginning the (exclusive) employment of the seventh power of ten, and its inverse: 10^-7 (Napier later went on to employ 10^-5 also). Of course, within the decimal rational schema, each successive power of 10 is proportionally consistent with every other exponential of 10, and the logarithmic difference of each exponential step is equal to 1. So Napier had no good reason to suspect that his selection of the 7th power would introduce any quantitative bias into the equation. However, when one examines the distributions of the (derived) logarithmic differences between sequential members of each exponential series in number radices from binary to nonary (base9), for values corresponding to the decimal 10, the 7th power is in every case found to be proportionally inconsistent with its neighbours. In fact, for at least 50% of the radical correspondents of 10^10 in bases 2-9, the logarithmic difference between the 6th and 7th powers forms a significant peak of higher elevation in the series, hence affirming the qualitative uniqueness of the 7th power in the context of (10^10)_b (see the subsection: Variation Factors, and subsequent Analysis, in Radical Affinity etc.).
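Napier's defining relation can be inverted algebraically to recover L for a given N. The function below is a hypothetical reconstruction for illustration only – modern floating-point arithmetic, not Napier's tabular method – with the test value N = 5,000,000 chosen from the middle of his stated range.

```python
import math

def napier_log(N):
    """Solve N = 10^7 * (1 - 10^-7)^L for L (illustrative only)."""
    return math.log(N / 10**7) / math.log(1 - 10**-7)

N = 5_000_000
L = napier_log(N)

# Round trip: substituting L back into Napier's relation recovers N.
print(L, 10**7 * (1 - 10**-7)**L)
```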

It should be noted that the historical development of Napier’s method occurred at a time of revolutionary changes in terms of mathematical and of philosophical understanding. In seeking to enhance the mechanical understanding of Nature and the Universe, Enlightenment Science sought an antidote to centuries of classical (Aristotelian) philosophical thinking, in which certain knowledge of natural forms and processes depended essentially upon the exercise of the intuition. In Classical understanding, knowledge derived from sense-data was incomplete without the addition of deductive reasoning involving intuitive, or universal, principles. Empirical science, from the 17th century onwards, sought to discard this dependency by establishing new paths to certainty based upon an atomistic approach to knowledge – the progressive measurement and tabulation of raw empirical data. This required a complete revision in methodology, involving a severing of the connection between knowledge and intuition. Reliable, mechanical knowledge of cause and effect in Nature should only now be acquired through the painstaking enumeration of every conceivable experimental instance of a known phenomenon, to arrive at certain knowledge of its principal causes by inductive elimination. Hence, the inductive methodology established by Francis Bacon sought to establish a tabula rasa for scientific knowledge through the method of observational parsimony, eschewing teleology and the acquired habits of syllogistic reasoning – in fact, an attempt to eschew any input from the mind save for its clinical ability to observe and record the data provided from direct sense impressions.2

In this context, a theory which acknowledged an intuitive dimension to the understanding of numerical quantity would have been regarded as archaic or superstitious – as counter-revolutionary, and an unnecessary hangover from the Dark Ages. Such a theory would have detracted from the primary instrumental function of numbers in facilitating progressive measurement and quantification, and the establishment of empirical certainty over the mechanics of true causes and effects.

In view of this enthusiastic revisionism in the development of early modern science, we can understand the urgency, and the temptation, of the reductions to scalar identities through which logarithms seem to facilitate the rational understanding of Nature, and of numbers. However, we should understand this not only as a revelation of new knowledge previously inhibited by the habits of thought instilled in philosophical doctrine (Bacon's "idols of the mind"3), but also as the forging, by force of will, of a new kind of indifference, through the subordination of qualitative relations and the dominance of quantitative ones. A consequence of this indifference is that it precipitates a radical shift – a fault-line – between the 'human' and the 'scientific'; between the mechanical-physical and the philosophical-ethical-spiritual; between the rational and the intuitive; and which will henceforth render these as incompatible discourses.

January 2013

  1. Napier, John, Mirifici Logarithmorum Canonis Descriptio ("Description of the Wonderful Rule of Logarithms") (1614), in: Ernest William Hobson, John Napier and the Invention of Logarithms, Cambridge University Press, 1914. [back]
  2. Bacon, F., Novum Organum, Or True Directions Concerning the Interpretation of Nature (1620), Constitution Society: (accessed 18/01/2015). [back]
  3. Ibid., Aphorisms [Book One], CXV. [back]
