An Important Mathematical Oversight
The original intention for this website was to encourage public awareness of an historical medical crime, one that has remained a tightly-kept British state secret now for more than five decades. The matter is of enormous public interest, not least because the motivation behind the crime itself was that of advancing scientific research into areas that would come to provide the seminal knowledge behind much of the technological progress of the last half-century. My investigation into the matter inspired a parallel enquiry into some of the fundamental principles that underpin that scientific and technological impulse.
There are therefore two principal concerns of this website, and if there is acknowledged to be a substantive connection between them, that has inevitably to do with late 20th Century developments in science and information technologies, and more broadly with the idea of a burgeoning technocracy – the suggestion of a growing alliance between corporate technology and state power – one that might be judged to have atrophied the powers conventionally assigned to liberal-democratic institutions. This link therefore serves as a segue to emphasise the equal importance, to my mind, of what is going on in the Xcetera section of the site, so that that section should not appear, from the point of view of the other, as a kind of afterthought.
Xcetera is concerned with a problem in mathematics and science to do with the way we think about numbers. As a subset of the category defined as integers, elements in the series of the natural numbers are generally held to represent quantities as their absolute, or ‘integral’, properties. It is argued that this conventional understanding of integers, which is the one widely held amongst mathematicians and scientists adopting mathematical principles, is the cause of a significant oversight with regard to changes in the relations of proportion between numerical values, i.e., when those values are transposed out of the decimal rational schema into alternative numerical radices such as those of binary, octal, and hexadecimal, etc.
On the page: The Limits of Rationality, it is argued that the relations of proportion between integers are dictated principally by their membership of the restricted group of characters (0-9) as defined by the decimal rational schema; and that corresponding ratios of proportion cannot be assumed to apply between otherwise numerically equal values when transposed into alternative numerical radices having either reduced (as in binary or octal, for instance) or extended (as in hexadecimal) member-ranges.
This is shown to be objectively the case by the results published at: Radical Affinity and Variant Proportion in Natural Numbers, which show that for a series of exponential values in decimal, where the logarithmic ratios between those values are consistently equal to 1, the corresponding series of values when transposed into any radix from binary to nonary (base-9) results in logarithmic ratios having no consistent value at all, in each case producing a graph showing a series of variegated peaks and troughs displaying proportional inconsistency.
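One plausible reading of the procedure described above can be sketched in a few lines of Python. To be clear about what is assumed here: the `to_radix` helper and the step of reading each transposed digit string at face value are my own reconstruction, and the published paper should be consulted for the exact method used there.

```python
import math

def to_radix(n, base):
    """Return the digit string of n written in the given base (2-9)."""
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

# Decimal exponential series 10^1 .. 10^5: successive log10 differences
# between these values are all exactly 1.
powers = [10 ** k for k in range(1, 6)]

for base in range(2, 10):  # binary through nonary
    # Transpose each value into the radix, then read the resulting
    # digit string at face value and take successive log10 differences.
    transposed = [int(to_radix(p, base)) for p in powers]
    ratios = [round(math.log10(b) - math.log10(a), 4)
              for a, b in zip(transposed, transposed[1:])]
    print(base, ratios)
```

Under this reading, the decimal series yields a constant difference of 1, while each of the other radices yields a sequence of unequal differences – the "variegated peaks and troughs" referred to above.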
These findings have gone previously unacknowledged by mathematicians and information scientists alike, but their import is that, while the discrete values of individual integers transposed into alternative radices will be ostensibly equal across those radices, the ratios of proportion between those values will not be preserved, as these ratios must be determined uniquely according to the range of available digits within any respective radix (0-9 in decimal, 0-7 in octal, for instance); one consequence of which is the variable relative frequency (or ‘potentiality’) of specific individual digits when compared across radices. This observation has serious implications for the logical consistency of data produced within digital information systems, as the logic of those systems generally relies upon the seamless correspondence, not only of ‘integral’ values when transcribed between decimal and the aforementioned radices, but ultimately upon the relations of proportion between those values.
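The point about the variable relative frequency of individual digits across radices can be illustrated with a small count. The choice of sample range (the numerals 1 to 1000) and of digit (‘7’) is mine and purely illustrative:

```python
from collections import Counter

DIGITS = "0123456789abcdef"

def to_radix(n, base):
    """Return the digit string of n written in the given base (2-16)."""
    out = []
    while n:
        out.append(DIGITS[n % base])
        n //= base
    return "".join(reversed(out)) or "0"

# Relative frequency of the digit '7' over the numerals 1..1000,
# written out in octal, decimal, and hexadecimal.
for base in (8, 10, 16):
    stream = "".join(to_radix(n, base) for n in range(1, 1001))
    counts = Counter(stream)
    print(base, round(counts["7"] / len(stream), 4))
```

The digit ‘7’ occupies a different share of the digit stream in each radix, simply because the pool of available digits differs (eight members in octal, sixteen in hexadecimal).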
Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message. The process is taken to be neutral, faithful, transparent. While the assessment of quantitative and qualitative differences at the level of the observable world necessarily entails assessments of proportion, the digital encoding of those assessments ultimately involves a reduction, at the level of machine code, to the form of a series of simple binary (or ‘logical’) distinctions between ‘1’ and ‘0’ – positive and negative. The process relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption. Logic is assumed to operate consistently without limits, as a sort of ‘ambient’ condition of information systems.
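The assumption described here – that a representation can be broken down to binary distinctions and reassembled without corruption – can at least be illustrated at the level of character encoding by a minimal round-trip sketch (the helper names are mine; this illustrates the assumption rather than passing a verdict on it):

```python
def to_bits(message: str) -> str:
    """Encode a text message as a stream of '0'/'1' characters."""
    data = message.encode("utf-8")
    return "".join(format(byte, "08b") for byte in data)

def from_bits(bits: str) -> str:
    """Reassemble the original text from the bit stream."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

msg = "quantitative and qualitative"
assert from_bits(to_bits(msg)) == msg  # round-trip is lossless at this level
```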
In the Xcetera section I am concerned to point out however that the logical relationship between ‘1’ and ‘0’ in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits limited to two members. It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that, despite its apparent simplicity, may well come as a surprise to many mathematicians and information scientists alike).
As the proportional relationships affecting quantitative expressions within binary are uniquely and restrictively determined, they cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal or hexadecimal). By extension, the logical relationships within a binary system of codes, being subject to the same restrictive determinations, cannot be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – such relationships may be applied to those representations, and the results may appear to be internally consistent, but they will certainly not be logically consistent with the world of objects.
The issue of a failure of logical consistency is one that concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific ‘integral’ numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.
So that’s some of what Xcetera is all about – if you think you’re ‘ard enough!
PDF DOWNLOADS

Download my 176-page report: Special Operations in Medical Research [pdf – 1.93MB]

Download my Open Letter to the British Prime Minister & Health Secretary [pdf – 365KB]

The Limits of Rationality (An important mathematical oversight) [pdf – 863KB]

Radical Affinity and Variant Proportion in Natural Numbers [pdf – 2.53MB]

Mind: Before & Beyond Computation [pdf – 643KB]

Dawkins' Theory of Memetics – A Biological Assault on the Cultural [pdf – 508KB]

Randomness, Non-Randomness, & Structural Selectivity [pdf – 616KB]
Concealed Devices
If it is accepted that the MRI scan evidence shown on the title page of this section (and in my 2nd MRI scan) reveals the presence of objects in my neck that are non-biological in origin, and have therefore been placed there surreptitiously, it follows that the reasons and methods of this undertaking were covert (for suggestions of the possible medical and technological imperatives behind this enterprise, see the page: Technological Imperatives). It also follows that the objects implanted were intended to remain in place permanently, and to remain undiscovered during my lifetime. It would therefore have been an essential prerequisite for the objects to remain concealed, and not to be detected coincidentally at some later date through standard x-ray procedures.
The first medical MRI scanner was developed in 1972, and MRI scanning did not become routinely available as a medical procedure until many years later. At the time of my operation, in 1967, medical imaging techniques were limited to x-ray procedures. It seems reasonable to conclude that, in the planning and design of this research proposal, some method may have been employed in order to prevent detection by standard x-ray procedures.
It is possible, I understand, to make an object invisible, or ‘pseudo-transparent’, to x-ray detection by employing methods of optical diffraction on a minute scale. A functional property of x-rays is that they travel in straight lines; that is, they consist of linear sequences of sine waves of extremely short wavelength, on the order of inter-atomic distances. When a solid object intercepts the x-ray beam, x-rays are absorbed in proportion to the density of the object’s material structure and prevented from reaching the photosensitive plate, and what we see is a negative image of the object (i.e., the denser an object, the brighter its x-ray image). Objects of lesser density will have a higher degree of transparency and will appear less distinct.
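The relation between density and absorption described above corresponds to the standard exponential attenuation (Beer–Lambert) law, in which transmitted intensity falls off exponentially with the product of the material's linear attenuation coefficient and the thickness traversed. A minimal sketch follows; the coefficients used are illustrative placeholders, not measured values:

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of incident x-ray intensity transmitted through a material.

    mu_per_cm is the linear attenuation coefficient, which increases
    with material density; thickness_cm is the path length traversed.
    """
    return math.exp(-mu_per_cm * thickness_cm)

# Denser materials transmit less, and so cast a brighter negative image.
for label, mu in [("soft tissue", 0.2), ("bone", 0.5), ("dense implant", 2.0)]:
    print(label, round(transmitted_fraction(mu, 1.0), 3))
```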
Certain forms of polyimide resins, chemically similar in origin to ‘Teflon’ (developed by Du Pont in the 1960s), when combined with certain ‘fillers’ of a microscopic cryptocrystalline structure, may be employed to influence the x-ray beam by enhanced diffraction of the sine waves, continually bending and scattering the x-rays at the sub-atomic level. This can create an effect of pseudo-transparency to x-ray vision, as objects appear as continually changing or ‘scintillating’, and therefore impossible to ‘fix’ radiographically. The cumulative effect of myriad x-ray diffractions at the sub-atomic level can be employed to ‘cloak’ an object from detection by, in effect, ‘bending’ the x-rays around the object and hence artificially diffusing its actual density. The effect is somewhat analogous to that of an extremely out-of-focus object in front of a camera lens – if an out-of-focus object is moved increasingly closer to the lens, without adjusting focus, the object’s penumbra may become so broad and diffuse that the object effectively disappears from view (in the case of x-rays, the property of focus is irrelevant, but the widening of the penumbra is achieved through enhanced diffraction).
“There are various kinds of interaction between X-rays and matter. X-rays are absorbed in dependency on the density of the material. At an interface of two materials they are slightly refracted. Due to their small wavelength, which is in the order of the inter-atomic distances, they are diffracted at a crystal lattice. By defects, they are diffusely scattered. X-rays can be used to stimulate fluorescence.”1
By employing such techniques of enhanced diffraction, hard, solid objects, conventionally detectable by x-rays, can be made to behave radiographically as if they were soft tissue, such as is only effectively revealed by an MRI scan. Any solid object, metallic or otherwise, could be coated in a layer of polyimide, incorporating crystalline filler, to assist in concealing it. Polyimide is noted for its chemical stability and strength, has uses in semiconductor manufacture (x-ray lithography), and is employed in the manufacture of antennae used in NASA spacecraft. Polyimide, or fluoro-polymer resins, are also employed in the manufacture of surgical implants, due to their inherent bio-compatibility.
In a recent patent application entitled Matte Finish Polyimide Films and Methods Relating Thereto, E. I. Du Pont De Nemours & Co. describe some of the functional purposes of polyimide ‘coverlays’ as follows:
“Broadly speaking, coverlays are known as barrier films for protecting electronic materials, e.g., for protecting flexible printed circuit boards, electronic components, leadframes of integrated circuit packages and the like. A need exists however, for coverlays to be increasingly thin and low in cost, while not only having acceptable electrical properties (e.g., dielectric strength), but also having acceptable structural and optical properties to provide security against unwanted visual inspection and tampering of the electronic components protected by the coverlay.” (my emphasis)2
Although this quotation offers only a general technical description of some of the properties of polyimide films, the references to its “structural and optical properties” in offering “security against unwanted visual inspection” confirm in principle the possibility of employing this material for the purposes I have suggested above – i.e., to facilitate the concealment of a surgical implantation which, if its disclosure were not prevented, would reveal a medical and ethical atrocity.
September 2018
Footnotes:
1. From: High-resolution radioscopy and tomography for light materials and devices – Lukas Helfen, Tilo Baumbach (Fraunhofer Institut für Zerstörungsfreie Prüfverfahren (IZFP), EADQ Dresden); John Banhart, Heiko Stanzick (Fraunhofer Institut für Fertigungstechnik und Angewandte Materialforschung (IFAM), Bremen); Peter Cloetens, Wolfgang Ludwig, José Baruchel (European Synchrotron Radiation Facility (ESRF), Grenoble): http://www.ndt.net/article/wcndt00/papers/idn823/idn823.htm
2. United States Patent Application Publication, US2011/0177321 A1, July 2011: http://www.somr.info/lib/US20110177321.pdf