Abstract
Re-reading the abstract of the CTMU (4.1 in Major Papers) this week was a little eye-opening - I caught things a little deeper than I had beforehand. I’m not really sure how much I agree that a well-formed theory of reality is necessary for the goal of doing science on a practical level, any more than an understanding of engineering is necessary to drive a car, or even fix one. That being said, I see the point that an understanding of Reality is a foundation for science; that the sciences are embedded in an understanding of Reality, whether implicit or explicit, and that that understanding shapes what science is capable of.
Right away Langan characterizes science as observational or perceptual, and says that insofar as it is, it requires an underlying theory of Reality for which perception is the model. Is science observational or perceptual in nature? Yeah, pretty much. It might not be common to phrase it that way, but the typical MO is to do an experiment to test a hypothesis - watch the world do something and pay attention. There are other aspects of scientific method, as any elementary school student may be taught, like the actual making of the hypothesis - the guessing part - but the experiment, data collection, and analysis parts play a larger role, and characterizing these as perceptual in nature seems completely reasonable to me. More than that, science is done by scientists, all of whom (hopefully) are perceptual beyond mere data collection. Every measurement of science ultimately comes down to some experience registering in the consciousness (or perhaps proto-consciousness) of an observer, which would be perception. That characterization seems to fit ordinary life as well (and if memory serves, would be satisfactory to Wheeler). Then we say that whatever the Theory of Reality is, perception is the model of it - a model being a mapping between theory and universe. This seems to suggest that what you’re doing right now, as you read this, is connecting the Theory of Reality with the Universe of Reality. Whatever that means. But a theory for which your perception is the bridge between it and the universe is necessary for the (ultimate, theoretical) grounding of the act of science - and quite possibly any relation to the universe at all. Pretty weighty idea, but cool enough.
Langan goes on to say “information is the abstract currency of perception”. I don’t remember for sure, but I think Raymond Yeung in his “A First Course in Information Theory” mentions that a bit in information theory, a unit of information equivalent to a reduction in uncertainty by one half, and a bit in computer science, an object that can take one of two possible values, are subtly different. Langan seems to be using the two fairly close together, but sticking with the definition from information theory and applying it to the sort of bit computer science uses, or other such objects - effectively picking the resource for quantification rather than the quantified object as the grounding. Where it gets a little ambiguous, whether by error or subtlety or something else, is extending information to self-processing information. This seems to favor the quantified object over the resource of quantification - the thing whose uncertainty is reduced rather than the thing that’s reducing the uncertainty. Perhaps he’s aiming at merging the two together with self-processing information. In Information and Coding theory the existence of a channel with a sender and a receiver is assumed. The sender encodes messages in something the channel can transmit, like a voltage representing a 1 or 0, and then sends it through the channel to be decoded - possibly with noise involved. From the receiver’s perspective, then, it receives a voltage which conveys an amount of information - in this case one bit. The receiver also needed to “pre-accept” the voltage by having an appropriate sensory apparatus to detect voltage levels. For the receiver, then, the abstract information - the “abstract currency of perception” - reduced the uncertainty in the state of the sensory apparatus from every value it could’ve taken to the value it did take. That’s all well and good, Bro, but like, y’know, the information and the informed, quantifier and quantified, are still pretty separate and shit.
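To make that bit-versus-bit distinction concrete - here’s a minimal sketch in Python, using only the standard Shannon definition of information content (nothing here is taken from Yeung’s book specifically; the names are mine):

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content, in bits, of observing an
    event that had probability p: -log2(p)."""
    return -math.log2(p)

# A computer-science bit: an object that takes one of two values.
cs_bit = 0

# An information-theory bit: the uncertainty resolved by observing
# one of two equally likely outcomes - a halving of the possibilities.
it_bit = surprisal(0.5)      # exactly 1.0 bit

# The two only coincide when the outcomes are equiprobable. A biased
# source still stores a two-valued object, but observing the likely
# value resolves less than one bit of uncertainty:
print(it_bit, surprisal(0.9))
```

The receiver in the channel picture is doing this implicitly: before the voltage arrives its apparatus could be in either of two equally likely states, afterward it’s in exactly one, so one bit of uncertainty got resolved.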
So, like, whatever man. Why are we talking about self-processing information, or putting the quantifying resource and the quantified object together, if all this can be explained by a sender who sends a specific value that just happens to correspond to some value of an abstract measure we made up - useful, but not necessarily real? A couple things come to mind. First, information reductionism seems preferable to material reductionism. It seems like the way of the future, seems more useful, and strictly speaking it seems simpler, because if you have material you have information but perhaps not the other way around, making information more general. Also, information reductionism doesn’t seem to have the same issues explaining origins as material reductionism - no material, no nuffin; no information, infinite uncertainty and possibility. (Yes, my punctuation is atrocious, I’ll get there some day.) Second, it’s different when the system is self-contained and must characterize all differences between sender and receiver on its own - be they spatial, temporal, material, syntactic, whatever. In this case it seems that keeping the quantifying and the quantified separate just doesn’t work anymore. Can a meta-substance that reduces its own uncertainty scale from ontological to cybernetic? That is, can it scale from self-description to description between two separate entities (even though they may be distinctions made within the same self)?
Langan says perception is the model of reality theory and associates this extension with a “limiting form of model theory identifying mental and physical reality”. Even without getting quite to the extension, associating model theory with information theory is an interesting mix. In model theory there’s theory, universe, and the mapping between them. Generally it’s used to show the consistency of some mathematical system on the assumption that some other system or theory is consistent. For example, Euclidean Geometry provides a model for Non-Euclidean Geometry, so if Euclidean Geometry is consistent, so is Non-Euclidean Geometry. In Information and Coding theory a code is really just a mapping from one alphabet into another alphabet. For example, digital cameras encode images into 1s and 0s that a computer can read. Later on in the CTMU Langan suggests information theory is in line for two extensions, the same two extensions logic takes in going from Propositional Logic -> Predicate Logic -> Model Theory. Looking at Information Theory and Model Theory side by side, this does seem like a natural extension (I’m guessing in addition to the extension from information -> infocognition).
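As a toy illustration of “a code is just a mapping from one alphabet into another” - the symbol table below is arbitrary (my own made-up example, not anything from the CTMU), chosen prefix-free so decoding is unambiguous:

```python
# A toy code: a map from a source alphabet {a, b, c} into strings over {0, 1}.
# No codeword is a prefix of another, so the bitstream decodes unambiguously.
code = {"a": "0", "b": "10", "c": "11"}

def encode(message: str) -> str:
    return "".join(code[s] for s in message)

def decode(bits: str) -> str:
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # prefix-free: first match is the symbol
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

msg = "abcab"
assert decode(encode(msg)) == msg   # round trip through the channel alphabet
print(encode(msg))                  # → 01011010
```

The model-theory analogy is that the mapping itself carries the weight: the bitstring only “means” anything relative to the code that connects the two alphabets, just as a theory only touches a universe through its interpretation map.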