When Kevin Kelly interviewed me about The Information for Wired, he asked me to define the word, and I was unprepared. I did some hemming and hawing (which he mercifully omitted). I see it continues to trouble him. Others have asked me the same question, and I continue to hem and haw. You might think I would have it figured out by now.
The problem of definition runs as a minor thread throughout my book. The very idea that a word has a definition is surprisingly new—barely 400 years old. You might think it is obvious, but it is not. People managed to use words for millennia without worrying too much. John Locke felt it necessary to explain in his Essay Concerning Human Understanding:
Definition being nothing but making another understand by Words, what Idea the term defined stands for.
In the very first English dictionary, Robert Cawdrey’s Table Alphabeticall in 1604, we see that defining words is not so easy. I quote a few of my favorite Cawdrey definitions (in their entirety):
crocodile, [kind of] beast.
vapor, moisture, ayre, hote breath, or reaking.
theologie, divinitie, the science of living blessedly for ever.
The word information isn’t in Cawdrey’s dictionary. Our authority, the Oxford English Dictionary, now requires 9,400 words for its entry—a multitude of definitions—as I discussed here.
Words are not meant to be pinned to the mat like butterflies. Also in The Information I explore the ancient dream of a perfect language, a dream of Gottfried Leibniz, of the Esperantists, of logicians like George Boole and Bertrand Russell. One imagines God’s own dictionary, described by the novelist Dexter Palmer this way: “one-to-one correspondences between the words and their definitions, so that when God sends directives to his angels, they are completely free from ambiguity.”
That dictionary does not exist. Our language is a thing of infinite possibility. We learn to live with ambiguity and with choice.
So I can give information a nice, short, epigrammatic definition. It’s suitable for tweeting. But it’s not complete. And it’s not final.
April 11, 2011
Dear Mr. Gleick:
Thanks for the fine book. (And I am reading all of it!) I couldn’t get my e-book autographed when you were in Portland, Oregon. Maybe someone should issue passports for e-books (book-ports?) so we can get a stamp or a signature when we travel through your books. My P.S. is my take on your talk.
Best,
Louise Andrews
P.S. “March 18, 2011
Saw one of my favorite authors speak at Powell’s Sunday. James Gleick. Science writer. His 1987 book Chaos one of my all time favorites. New book: The Information: A History, a Theory, a Flood. I got the Kindle version, and went to the crowded Powell’s. After introducing and reading some of the book, he was asked some pretty incredible questions by the crowd. First guy a total Portland weirdo (and I say that with affection): “Have you seen the Futurama movie about (something about people having an “info port” on their bodies)?”
Gleick seemed happy to answer, but saddened that this yes-no question had the answer “no” from him. Later on, a white-haired gentleman asked if you really had to read the whole book, or could you just skim around and learn enough. Gleick was rightly, mildly indignant. Answer: You must read it all; until writers no longer write books, he said. Until everything can be said in 128 characters. He slammed the book on that topic.
In another part of the session one guy asked to ask a follow up question. Oh, a follow up question, said Gleick, and we all recognized the meme of a presidential press conference. When he took what he said was one last question, the woman added, after he answered, “And a follow-up question?” We all grinned at her end run around the end of the talk.
When I left Powell’s, putting my slim volume of a thousand books away into my purse, it felt sad, like when your favorite movie ends and you don’t want that world to come to an end. But I’ve moved beyond the physical world of “books and people” places like Powell’s, via eBooks, into a new fantasy, or a new reality.” –From the journal of Louise Andrews
A lot of the confusion comes from different contexts and the implied message destination.
Shannon placed great importance on how much the message’s intended “destination” already knows (or could probabilistically guess). As you quote Gleick, “Information is surprise.”
In normal human conversation, we take the “destination” to be another person, and people know some things already. If I already know the Packers won the Super Bowl, you telling me this again doesn’t convey any information.
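Shannon made this precise: the self-information of an event is -log2(p) bits, where p is the probability the destination assigns to the event beforehand. A minimal sketch in Python (the function name is my own; certainty yields zero bits, a fair coin flip one bit):

```python
import math

def surprise_bits(p):
    """Shannon self-information, in bits, of an event the receiver
    assigns probability p before hearing the message."""
    return -math.log2(p)

# An eight-way upset carries three bits; a fair coin flip, one.
print(surprise_bits(0.5))    # 1.0
print(surprise_bits(1 / 8))  # 3.0
```

So “the Packers won” carries no information for someone who already assigns it probability 1, and the rarer the outcome seemed in advance, the more bits the news is worth.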
But if you take the context to be, e.g. a hard disk, you can’t say that the disk already “knows” anything. Every bit it records is, for *the disk*, a complete surprise. So if a 100GB disk is filled with the digits of Pi, this is a lot of information in the context of the hardware, but very little information for a mathematician who knows the algorithm. (And here is the bridge to “compression as knowledge,” which is a fascinating subject in itself!)
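That bridge can be made concrete. For the mathematician, the “compressed” form of 100GB of Pi digits is just a short program. Here is a sketch using Gibbons’ unbounded spigot algorithm (one published way to stream decimal digits of Pi; a few dozen bytes of code standing in for an unbounded datastream):

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields the decimal digits
    of Pi one at a time, forever. The program is tiny, so for anyone
    who knows it, a disk full of Pi digits carries almost no information."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is settled; emit it and rescale
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # not enough precision yet; fold in the next series term
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(''.join(str(d) for d in islice(pi_digits(), 10)))  # 3141592653
```

The disk, lacking this program, must treat every one of those digits as a fresh surprise; the mathematician holds the whole stream in a dozen lines.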
At the other end of the spectrum is something I joked about with friends when we studied Shannon: For an Omniscient God there can be no information: nothing can be a surprise when you know everything already.
[ Shannon’s paper is surprisingly readable, and is available online here:
http://cm.bell-labs.com/cm/ms/… ]
Wandering Stan, you say that:
I understand the point you are making (which is quite good). But in another sense this is not entirely true. The other person has communicated that a) they know the Packers won, b) they believe you might be interested in this information, and c) you can confirm from another source that indeed they did win. Your hard drive or other recording media cannot make this distinction without some type of processing.
It’s important to stop thinking of what we are describing as the state of a system; rather, we spend our lives constantly refining what we believe to be our knowledge of the state of a system.
James, thank you for pulling together so many great concepts which I have been pondering for years. If only I had read your book in high school, before I studied Computer Science, where I had to wait years to get a muddy picture of it all. Your work has inspired me to start a web series modeled on James Burke’s Connections but dealing with concepts instead of inventions. Each story follows one problem through time and explores how humans have tried to solve it over and over again.
I’m thinking of doing an episode inspired by your latest book, but wonder what would be the best way to phrase the “problem” behind The Information….
If you’re interested my program is featured here: http://alturl.com/84udb
Your biggest fan,
Brit
I am law-trained and taught law for many years. Science and math do not come easily, so I have been grateful to your books for keeping me alive to my time and its history. In the past I played with an idea about boundaries, distinguishing lines/cleavages [Vacuum] from rivers [Live]. In your observation that everything we care about lies between empty strings and random strings, I was pleased to find the mother of all Live Boundaries. Thanks for everything, alx
“The Information” adds the fascinating layer of historical development to our tech-laden lives circa 2011. I too feel that Shannon veered oddly in dismissing the issue of meaning, and it may be years or decades before science can explain the transmission of meaning from one human to another.
The concept that most helps me address meaning is “face value data”, which is the starting point in a representation process. If B is data, and A is some relatively more primitive data, and B represents A, then the meaning of B is A (that which it represents).
But, what is the meaning of A? The process of tracing meaning will reach a dead end exactly where the representation process first began. The data at the start is “face value”. This data just appears out of nowhere, with no explanation. The stream of primitive states arriving at the retina is an example of face-value data. The individual states are not imbued with any coded meaning impressed on them by an outside agent. Rather, the individual states carry no meaning — whatever can be gleaned from the datastream is by virtue of spatiotemporal patterns that unfold. It was fantastic to see this idea broached in regards to Ray Solomonoff and inductive inference.
In this model, we have the beginnings of a good theory of meaning, in that each individual continually crunches its incoming datastream into a growing symbol system, where symbols represent patterns received so far in the input stream. The term “data phenomena” is a little more scientific than “patterns”, and gets to the essential relationship: mental symbols represent separable phenomena digested from the past sensory stream. So, the meaning of any one of these mental symbols is the pattern it represents. The symbols can be stacked on top of each other, but tracing back downward for meaning, you always arrive back at a flow of primitive states…raw sensory data.
There are several implications of this theory that take us in a very different direction from where Shannon and Co. dropped us off:
1) Purposive communication between humans is reconceptualized as a process of embedding patterns in the “physical” media which form the entranceway to sight, sound and touch. There is no external codebook required to join this network and make sense from it….because the medium of exchange is face value (fully decoded into raw patterns).
2) Infants first learn primary environmental patterns, and once established, can begin associating with them language utterances, themselves patterns. Language is a set of overlaid patterns experienced in correlation with primary (nonverbal) patterns, that in time leads to an effective system for evoking memory (or imaginative recombination) of primary experiences…a system for indexing experience. The semanticist S.I. Hayakawa said it best: “The meanings of spoken phrases are the situations in which they have previously been heard”.
3) The value or meaning of the individual’s current input stream is subjective, based on extant pattern experience (and current goals). The internal symbols used to record meaning are unique to each individual, and inaccessible (directly) to other individuals. So, the idea of swallowing a “knowledge pill” extracted from someone else’s head is highly infeasible, as would be a Spock-like mind meld. (That is not to say that cloning of the entire individual is impossible…with robots it happens every day.)
See what reading this book did to me? It has dredged up the person I was 30 years ago.
James, I think it’s a monumental achievement to have brought our rich informatic heritage together in one place. My only regret is that the book lacks a good ending, but this is just a consequence of writing about a fast-moving target (the Information Science field) that is increasingly swamped with short-term, lucrative, incremental opportunities. I look forward to JG chronicling the next 5 or so information revolutions during our lifetimes, among them a strong theory of meaning. Bravo.