A mind is like a parachute
It might save your life,
but you have to know how to use it first.

Saturday, June 30, 2012

How do you count soup?

If I handed you a pot of soup and asked you to count out 27 for me, you might ask me, "27 what, spoonfuls?"

"No," I'd answer, "27 soup."

"27 SOUP?"

"Yes, please."

"I can't do that," you'd say.  "It makes no sense."

"Okay, I see. Then give me 27 flavor units."

And at this point, you'd probably shove the pot of soup back at me and storm out. You're so easily annoyed.

But do you see the problem here?  I have a huge pot of soup. I know it is soup because I can see and taste it.  It is held in a pot, but the pot is not the soup.  It is just the vessel for the soup.  The good stuff is the soup sloshing around inside.  And I know it is there, and I know that it has flavor.  It has lots of different flavors all mixed together, actually.  And I want to count the soup flavor.

So now let's look at another problem I have.  I want to calculate how many thoughts my brain can hold and how many thoughts it can have in a second.  I know that there is such a thing as thought.  It is here in my brain.  It comes in many flavors.  They are all sloshing together and I want to count them.

Well clearly that's just going to be a dead end.  You can't count thought any more than you can count soup.  You could put some arbitrary measurement in place and count spoonfuls of soup or completed thought tasks (like pronouncing the word that appears on a screen), but you can't simply separate thought into its component parts any more than you can assign a soup a "flavor unit".

Here's an interesting problem.  Try not to think of the number 4.  Take a minute and focus on not thinking of the number 4.  I'm not saying think of something besides the number 4, I mean specifically instruct yourself to stop thinking of the number 4.  Every time the number comes into your mind, bat it away and focus on how you're not supposed to be thinking about it.

If you're anything like me, this rather confusing process starts out a little slow but gets faster.  You may be surprised by how few times you actually get to dismiss the number in a span of a few seconds.  Maybe you could do it once or twice a second, or even three to four times a second if you're really into the flow.  But time ticks by amazingly fast as you focus on not thinking of the number 4.  This suggests some interesting things.

In the first place it suggests that there is a huge disconnect between conscious puzzle-solving thought and the speed of synaptic firing in the brain.  Neurons are widely reported to be capable of firing 100 times a second, though in practice the figure seems closer to 30 times a second.  And we know from dreaming and being scared and exercising and lots of other activities that our minds are capable of making decisions and adjustments at incredible speeds under certain circumstances.  But for good ole run-of-the-mill puzzle-level concentration, thought does not come close to approximating 100 times a second.

To get a sense of the maximum horsepower of the brain we could calculate the theoretical maximum number of synapses that could occur in a second.  Sources vary on a number of metrics, such as how many neurons we have, how many synaptic connections these cells have, and how fast exactly a neuron can fire, but one estimate I read was 100 billion neurons each with 7000 connections on average.  The number of neurons seems pretty uncontroversial, but I've seen as few as 1000 connections per neuron.  On the other hand I've seen suggestions that the fire rate of a synapse could be 200 per second.

So let's just pick a number on the low side and come up with a figure that tells us, at least in some small way, something about the raw firepower of the brain.  A common convention seems to be to assume 100 trillion (100 million million) synaptic connections in the brain.  At a fire rate of 30 times a second, you could produce 3,000 trillion (3 x 10^15) synaptic firings a second if your brain were ever in the unlikely state of "firing on all cylinders".  To put this impossibly large number into context we could try a couple of images.

Here's a fun one.  Imagine that every person on the planet who is not already completely bald shaved his or her head and then took each hair and cut it into four pieces.  Each person places the hair into a shoe box which they hold in front of them.  Then one day, at the stroke of midnight GMT, every person on the planet throws the entire contents of the box into the air.  In that one instant, each piece of hair floating in the air all over the globe would represent one synaptic firing in the maxed-out brain we calculated above.  About 400,000 pieces of hair for each of over 7 billion people.

Another one...  There are about 500 billion grains of sand in a beach volleyball court -- 8 meters wide by 16 meters long and half a meter deep.

If each grain of sand represented one synaptic firing from our calculation above, it would take 6,000 volleyball courts' worth of sand to reach the number of firings in one second in the brain.
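For anyone who wants to check the arithmetic behind these images, here is a quick back-of-envelope sketch in Python.  The inputs are this post's assumptions (100 trillion connections firing 30 times a second, roughly 100,000 hairs per head cut into quarters, 500 billion grains per court), not measured values:

```python
# All figures are the post's assumptions, not measurements.
connections = 100 * 10**12        # 100 trillion synaptic connections
rate_hz = 30                      # deliberately low firing rate, per second
max_firings = connections * rate_hz
print(f"{max_firings:.1e}")       # 3.0e+15 firings per second

# Image 1: ~100,000 hairs per head cut into 4 pieces, 7 billion people
hair_pieces = 400_000 * 7 * 10**9
print(f"{hair_pieces:.1e}")       # 2.8e+15 -- the same order of magnitude

# Image 2: volleyball courts of sand at 500 billion grains each
courts = max_firings // (500 * 10**9)
print(courts)                     # 6000
```

The hair image lands a touch under the target (2.8 versus 3.0 quadrillion), which is close enough for an illustration of scale.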

So we get that it's a big number.  But it is also a meaningless number in a lot of ways.  It gives us a sense of just how many synapses the brain is capable of, but it vastly overstates how many times synapses in a normal brain will fire under normal circumstances.  It is a little like seeing the speedometer labelled all the way up to 200.  It is not ever going to get there.

And of course another important consideration for our purposes is that the vast majority of synaptic firings in the brain are not devoted to the kind of thinking we are focused on.  We are not so much concerned with the "total wealth" of the brain as we are its "disposable income".  In other words, all the activity in the brain that pays the rent and keeps the lights on (keeps the heart beating and the lungs breathing, and monitors and controls all the basic body functions including response to sudden danger), is activity that is not really available for conceptual organization of data.

But for fun we can assign some tiny percentage of total brain power and assume a very low load (slow firing rate) just to see how many synapses we may associate with a thought puzzle.  Now this is all completely bogus in any literal sense, but it can be instructive nevertheless.  We have no set description of how many synapses might be required to make one symbol or token or thought, much less how many discrete thoughts it may take to solve a puzzle.  We simply don't even know how the electrochemical process in the brain translates into conceptual understanding.  But we know that it does.  This pot has soup in it, even if we don't know how it's made.

For the sake of argument, let's just assume the "4 puzzle" takes .0001% of our brain's total neurons (1 in 1 million) and that the firing rate is a leisurely 10 per second.  Furthermore, of the 1,000 possible connections each neuron can make, we will use just 10%.  We established before that it takes us maybe half a second to "not think about 4".  That would imply 50,000,000 synaptic firings.  50 million synapses to perform the puzzle.
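As a sanity check on that 50 million figure, here is the same multiplication written out.  To be clear, these are the purely illustrative assumptions above, not real neuroscience:

```python
# Illustrative assumptions from the post -- not real neuroscience.
total_neurons = 100 * 10**9            # 100 billion neurons
neurons_used = total_neurons // 10**6  # 1 in 1 million -> 100,000 neurons
connections_used = 1000 // 10          # 10% of 1,000 connections = 100
rate_hz = 10                           # leisurely firing rate
duration_s = 0.5                       # half a second to "not think of 4"

firings = neurons_used * connections_used * rate_hz * duration_s
print(f"{firings:,.0f}")               # 50,000,000
```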

What's the point of making blind guesses to put numbers to something when we don't even understand the underlying mechanism?  Because it shows us just how vast the brain's capacity is.  Even using scaled-down assumptions about how much brain power would be required to focus our thought for half a second, we end up with a very large number.  We could obviously be off by several orders of magnitude.  But if our estimate is too low, that's not a problem, since what we're trying to do is get a sense for just how complex the synaptic process may be.  On the other hand, it is possible we guessed too high.  But even if we were off by a factor of ten in two of our assumptions (for example, only 1 in 10 million neurons is involved and they only make 1% of possible connections), we still end up with 500,000 synaptic firings to get the single puzzle completed one time.  Ultimately it makes little difference how well we have guessed, since the process doesn't happen as literally as we are assuming anyway.  What we have shown is that no matter what the mysterious process actually is, the brain is capable of generating a great many synaptic firings for each task.
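The downscaled version works out the same way.  Ten times fewer neurons and a tenth of the connections still comes out to half a million firings (same caveat: these numbers are illustrative guesses, not measurements):

```python
# The deliberately low-ball version of the same guess.
neurons_used = (100 * 10**9) // (10 * 10**6)  # 1 in 10 million -> 10,000
connections_used = 1000 // 100                # 1% of 1,000 connections = 10
firings = neurons_used * connections_used * 10 * 0.5  # 10 Hz, half a second
print(f"{firings:,.0f}")                      # 500,000
```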

This concept will be more meaningful when we pull it over to the hive-mind analogy and compare our results for the brain with our results for the planet if each human were a little neuron.

And that's where we're going next as we finally get to tie some of these concepts together.

No Neanderthal Mime Can Run That Fast

Okay, take a look at this.  If we look at the history of communication in human culture over the last 100,000 years, we see an obvious trend towards higher message speed and greater area of effect.  (If we call the "area of effect" the "force" of the idea, we could borrow from physics and note that speed x force = power.  This means we could talk about the "power" of a message.  That's something worth getting back to, but I don't want to go all physicky right now.)

On some day 100,000 years ago there was some important information that needed to go around like "don't drink the water, it is bad".  I'm almost certain that at some place on the globe 100,000 years ago someone somewhere was warning someone else not to drink the water.  But even if that is not true, let's assume it is for now.  Small band of nomads putting down stakes and trying to make a go of it in farming.  Little House on the Prairie, Neanderthal Edition.  Only something pollutes the water.  Someone gets sick.  The clan genius determines it is the water.  Now if this were Neanderthal science fiction, the nerdy Neanderthal would run around warning everyone and no one would listen and the whole clan would die.  No one ever listens to the scientists in science fiction.  Probably has something to do with how the nerds who write these books feel about the way society treats them.  Yeah, off topic, I know.  Look, it's my blog.  But you're right.  Neanderthal science fiction has nothing to do with communication and thinking.

And you know what, while we're on the topic of being off topic, I may as well point out that Neanderthals were probably hunter-gatherers and likely didn't do much farming, northern ice age snow people that they were (although they did cook vegetables).  So the notion that Ma and Pa Neanderthal settled down to do some farming is ridiculous.  The sentence should read: "Little House on the Prairie, Homo Sapiens Edition."  But we're all Homo Sapiens.  So that doesn't really convey the long-time-ago feeling I was going for.  100,000 years ago there were both Homo Sapiens and Neanderthals moving about, but only the Homo Sapiens were farming.  So, no, it's not accurate.  But this isn't a science report, it's a long (and terribly off topic) exposition on the changing nature of communication.  And it only gets longer when I stop to get technical, so I'm not gonna do that.  Cuz you know what?  There was no Neanderthal science fiction either!

So the water is bad and nerdy genius Neanderthal does what?  Jumps on his blog and warns everyone? Takes out a classified ad?  Activates the emergency broadcast system? No.  He probably just tells someone.  Neanderthal language was primitive, but even in the unlikely event that it couldn't express information as basic as "water bad" he could pantomime it.

The point is he could communicate this important information, but not very efficiently.  Maybe there would be a group mime meeting at night at the fire and they could all start off with the warning about the water before practicing being blown away by imaginary wind or trampled by invisible Mastodons.  Yeah, mime was different back then.

But more than likely there were basic words for good and bad and for water and food, so "water bad" seems like a pretty easy message to sell even 100,000 years ago.

Fast forward (please) to a time when written language is in widespread use.  The written word itself goes back to about 3000 BC.  But we're going to skip over all that and jump to a time well after the printing press was invented and into a period when newspapers and pamphlets were being produced.  Why the huge jump?  Well, this particular example relies on ways to get messages out to ordinary citizens, and so even while we could assume that someone somewhere could post a sign at the bad watering hole saying, well, "bad watering hole", we could get into some pretty crazy distractions about whether the average person could read such a sign or whether they may rely on symbols to convey what was bad or taboo, and frankly it is just not that important.  What is clear is that communicating the basic safety message was time consuming yet essential.  Whatever method they may have used, it would have been local and relied on a mix of verbal and symbolic information.

So that is why we are jumping to 1770 in the good ole U.S. of... err, the colonies.

Now if we assume that some well or stream had been determined to be unhealthy we could actually imagine a public notice might be posted or at the very least a mention of the issue could take place in the local newspaper.

Now, you might think you know where this whole thing is going, but you may be a little surprised at the end.

It is clear what I am describing is that over the history of communication, messages become more efficient.  The speed of information improves as does its reach.  Where the Neanderthals had to rely on word of mouth spread one-to-one or one-to-many if they were at a fireside mime meeting, the colonists could rely on both word of mouth and the one-to-many channel of the newspaper or public bulletin.

So of course we need to take a second to look at the time involved in spreading the message via newspaper.  Within a day or two at minimum, and sometimes several days longer, the information could be reported to the editor, included in the paper, and distributed to the readership.  Considering that many hundreds of people could all be warned about the same thing in this time, this is quite an improvement over Neanderthal Marcel Marceau, fake drinking, doing the international sign for choking, hopping from one foot to the other, and falling down and playing dead.

In just over 100,000 years we made some decent progress in getting the word out.

But then something happened.  It turns out that getting the word out more quickly -- that is, increasing the speed of communication -- has a drastic impact on the rate of science progress.  When scientists around the world can share ideas more quickly, the technology which is a result of that science improves more rapidly as well (to say nothing of the role of faster communication in improving education).  Technological progress has a tendency to beget more progress for a number of reasons. Not the least of these is Recursion.  But for our purposes it is interesting to look at how technological progress produced faster and more effective ways to communicate.  From the printing press, to the telegraph, then telephone, radio, television, internet.  Sure, we all know the story by now.

But recently we've entered a new phase of this progress, and this is one that is not generally appreciated in its entirety.  The telephone provided for very fast one-to-one communication, and the overseas telephone call was a social breakthrough of the 20th century, joining people separated by huge distances in a very personal way that was orders of magnitudes better than the best one-to-one alternatives that preceded it, the telegraph and the posted letter.   And radio and television took the basic model of the newspaper's one-to-many form and blew the doors off of it.  Now thousands, even millions of people could be informed from one common source in real time.

In the early days of the Persian Gulf War in January 1991, Gen. Norman Schwarzkopf told a reporter in a press conference, "We're getting our information from CNN just like everyone else."  Whether that was completely accurate is beside the point.  What his comment makes clear is that communication had come a long way since our colonial newspaper warning people not to drink from Miller's Brook.  In fact, in just 225 years, realtime combat information had gone from a guy riding on horseback yelling, "The British are coming!" to millions of people all over the globe getting live reports beamed via satellite from a battlefield thousands of miles away.

100,000 years to go from some Neanderthal grunting around the fire to a one-to-many model that informed hundreds or even thousands of readers at once with a message transmission time of just a couple days.  Then in less than 250 years, we have a one-to-many mode that informs millions of people at one time with a message transmission lag measured in minutes.  How can you get faster than that?

The answer when we return.

Seriously, "Hive-mind"?

Did I actually say "Hive-mind" a while back?  What is this, Star Trek?  We are not bees, or Borg; we are individual human beings.  But in order to make the best progress thinking about interrelationships among complex systems, sometimes it helps to oversimplify.  As long as when we get to the end we remember that our conclusions were based on a simplified model, we can get as fast and loose as we like with the technicalities.

Analogies are important to cognition.  Douglas Hofstadter said the analogy is "the motor of the car of thought."  (Yes, he's very clever like that, using an analogy to describe analogies.)

Analogies themselves are worthy of much more discussion, but for now it is sufficient to say that they can sometimes clear a path in our thought process that may otherwise be a harder slog.  From an analytical perspective, analogies are blueprints, laying out the relative relationships of key components to scale.  An analogy is not the thing it refers to any more than a blueprint is the building it describes.  You can't live in a blueprint, but you can spot important relationships in components of the building, and you can work forward from the blueprint until you have the real thing.  And, very importantly, you can fold it up and take it with you.  Analogies are the travel-size versions of complex relationships.  Easy to carry but still essentially the real deal.

So when I say "Hive-mind" I am simply using a shorthand for the collection of all of our human experiences and the impact these experiences have on our collective activity.  We need not function with a collective higher purpose or enjoy some sort of explicit interconnectedness of thought.

Come and See My Perception Collection

In his book, I Am a Strange Loop, Mr. H. (Hofstadter is persistently difficult for me to spell for some reason) describes the human mind as a collection of recursive patterns which are collected and form bigger concepts which are themselves recursive, etc., until one arrives at a cognitive level where consciousness and, further up, higher intelligence resides.  This is exactly my kind of exploration because it is not terribly concerned about the actual architecture of the grey matter in our skulls (the hardware of our computer, if you will) but simply with how the mind forms "symbols", and symbols join to become "tokens", and tokens join to become "thoughts", and thoughts join to become "concepts", etc.  In other words, he is more concerned with developing some sort of working model of the software that inhabits our brain -- the programs we run in order to think.  It is an obvious and somewhat dangerous analogy to compare our brains to computers, for several reasons I hope I will get to later, but it is a pretty good starting point for thinking about thinking.  Especially the recursive nature of software and of the human brain.

Recursion can get complicated, but in a nutshell it is taking a process and performing it on itself and then repeating that process for an arbitrary amount of time.  The classic example is the Morton Salt girl.   The picture above appears on the blue package of salt.  It shows the girl holding a blue package of salt. On that package is presumably a picture of a girl holding a blue package of salt with a picture on it of a girl....  and so on.  Recursion always ends with, "and so on."

A more mechanical example: point a mirror at another mirror and you will see a hall that seems to twist into infinity.  That would be "endless recursion".  Endless recursion is nice in theory, but in reality, as Darwin's mathematical friends once explained to him (when he suggested the earth may be infinitely old), "infinity is very strong medicine and not just a big number." [c.f. Coming of Age in the Milky Way]  No, for practical reasons, our recursions will all be quite finite, thank you very much.
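Since recursion is easier to see than to describe, here is a tiny sketch of the Morton Salt girl in Python.  The function name and the depth cutoff are my own invention; real recursion in software works the same way, with a base case standing in for "and so on":

```python
# A finite version of the Morton Salt girl: each "picture" contains
# a smaller picture of itself, until a base case ends the recursion.
def draw_package(depth):
    if depth == 0:
        return "..."   # recursion always ends with "and so on"
    return f"[girl holding {draw_package(depth - 1)}]"

print(draw_package(3))
# [girl holding [girl holding [girl holding ...]]]
```

The `depth` parameter is what keeps this recursion "quite finite" -- remove the base case and the function would call itself until the interpreter gives up.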

This is a Two Way Street My Friend

But the interesting thing about Mr. H's strange loop theory is that he stopped at the level of the individual.  This is probably because his chief interest lies in how individual minds function.  But the analogy he sets forth, and the mechanisms involved, map very well from small collections of human beings through larger groups, to nations and finally the global "hive-mind".  In other words, the symbols and tokens that he sees arising out of recursive processes within the mind repeat through collections of human beings as well.  So while we could easily describe a human mind as being a collection of perceptions, we could just as easily describe a society in the same terms.  This has profound implications for the thought process.  Because each individual thought has an impact on the collective associations of thoughts in a community in very much the same way each concept in the brain has an impact on the collection of concepts.  And it is a simultaneously churning, top-down and bottom-up process in the mind.  The overall complex thoughts impact the way concepts and symbols are formed and organized, and these in turn impact the higher-order thoughts which are produced.

But wouldn't you know, the same thing applies on the community level as well.  The mores and norms of our culture impact our thoughts while our thoughts simultaneously impact these mores and norms.

But here's where it starts to get really good.  The method of communication in the brain is both a physical and conceptual one.  The neurons actually carry electrical impulses around the brain but it is the mind which shuffles and organizes the symbols they represent.  The mind can organize itself very quickly and modify its activity in an instant based on the information it receives.  This is because electro-chemical reactions in the brain take place very quickly.  And the number of possible signals is immense.   From wikipedia:

The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons

Now any one synaptic firing is not going to be noticed enough to have any impact on the human level, but with so many combinations taking place at such great speed, it is very clear that the complexity of conceptual organization could be very great and could be modified very swiftly.
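Incidentally, multiplying out the Wikipedia figures gives an even larger total than the 100 trillion convention used earlier.  This is a rough check only, since both inputs are estimates:

```python
# Wikipedia's quoted estimates, multiplied out.
neurons = 10**11                   # one hundred billion neurons
connections_each = 7000            # average synaptic connections per neuron
total_connections = neurons * connections_each
print(f"{total_connections:.0e}")  # 7e+14 -- 700 trillion connections
```

So the 100 trillion figure used in the earlier back-of-envelope math sits comfortably at the low end of the published estimates.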

Of course we don't know in any objective sense whether our brains function quickly or slowly.  Logic dictates that animals' brains simply allow them the capacity to respond to threats efficiently.  Any mental configuration which did not allow for changes in motor function and risk assessment quickly enough to avoid danger would never survive.  It does not matter what the "net speed" of our thought is, only that it is fast enough for our environment.  In an intergalactic context, our thinking may be glacially slow, but it is fast enough for our local competitors and that is all that counts.  To use a simple basketball analogy, if the local YMCA basketball league champs never have to face the Boston Celtics on the court, it does not matter that they pass the ball like old men.  All they need to do is pass the ball fast enough to compete against the other old men playing on the teams they meet in the gym.

All of this brings up some interesting questions that I want to speculate on.
-- How fast can we process data?
-- How fast can we think?
-- How many neural firings does it take to make a "thought"?

We can't necessarily nail down these answers, but we can do some speculating and find that it leads to some interesting implications for the "Hive-mind" concept.  That will be for another time.

Context is Everything -- Essential Dry Goods

Now I know what you may be thinking.  "I thought timing was everything."  Yes, it is.  Timing is everything.  But timing is just a specialized case of context, so good on you.  Now, if you don't mind I'll continue.

What is context?  Context is the environment surrounding a fact or event (any "signal").  It is the time.  It is the place.  It is the collection of surrounding information (signals).  Hell, it's even the collection of the surrounding contexts, if you want to get technical.

So why does this matter?  Because if ANYTHING matters, it only matters BECAUSE OF CONTEXT.  There is no meaning without context.  (Okay technically there may be no meaning at all, but if there is any perception of meaning it is because of the perceived context.)

Well this is sure a laugh riot so far.

Ah, but that's a good example.  If this is boring, it is not because it is boring per se, but only because it is boring in relation to anything else you might do that would be more fun (like sorting pennies by date, maybe).  The act of reading this blog can only be evaluated in context -- that is, in relation to the time of day and your location and how hungry you are and what other choices you have about filling your time.

So, why is that "everything"?

Because when we think, the very act of thinking constantly registers the context of what is happening.  Our advanced ability to put things "into context" is probably the most important feature that separates us from other thinking beings.

What's that again?

The experience of collecting data is not thinking, it is simply perception.  It is the PROCESSING of this data that is thinking.  And processing data requires us to create context in order to organize this data (all we see and hear and feel and touch and smell and taste, etc.... "etc.?"  Yes, there are more senses, and we'll get to that later).  So we think about what we perceive and we produce context in order to organize data into what we call "reality".  Context is, in a nutshell, that "reality".  It is the interrelationship of all the data we receive.  And forming context for the data we are receiving?  Well, that's what thinking is.

Putting ourselves as Human Beings into Context

So did I just say a moment ago that animals can't put things into context?  Well if I did, I did not mean to.  In fact by definition if I called them "thinking animals" then I meant to imply that they take perception and put it into context.  What I meant to suggest is if there were a way of actually measuring true intelligence, it would likely have a direct relationship with the brain's ability to create and store a mental construct around perception that is called "context".  The more complicated the context, the higher the intelligence required to produce it.

But it's not necessary to get into a theoretical discussion about how much context animals can or can not create.  We have a perfectly adequate example of unsophisticated context producing brains that we can draw upon -- babies.

In early childhood development, when a baby can not see something (say a ball), it does not exist.  Very soon, though, the child develops enough context around an object to put an imaginary place holder in their mind for where the ball OUGHT to be, even if it is no longer visible.  As long as the ball turns out to be where the imaginary ball placeholder was assigned, there is no longer any mystery to the idea that something need not be visible to be real.  Now of course babies, and even adults, can forget.  Our context models are subject to decay over time, so a baby will not necessarily be surprised to come back the next day and discover the ball is NOT where it had been assumed to be.  In fact there may be no memory of having created the imaginary ball placeholder to begin with.  (Why any of this is relevant will become clear once we have established sufficient, well, context.)

Still later in life, toddlers can experience the same kind of separation anxiety with their mother that they might have experienced with the ball.  Well, obviously not the same kind of anxiety, because for one thing the idea of "mother" by the age of two has developed a great deal of context.  Mother is not a ball.  She is too many things to list.  In just over 700 days, the human brain has created rich and interrelated context around a great many physical objects and family members.  These contextual components (the sum of which is the child's "reality") can include emotional associations as well as associations reserved for specific times of day or when other specific sensory data are observed (the smell of hot cereal or the warm breath of the dog, for example).

Yet even with all of these sophisticated contextual constructs, the toddler can respond to the departure of mother with urgency and desperation.  There is no effective countermanding context (at least not at first) of the fact that mother always leaves in the morning and comes back later in the day.  The important context is the immediate context, and that is that she is leaving.

One could go on at length about all of the sophisticated thinking that must have taken place to order the universe of the toddler.  That the child is learning how to speak, how to say "no" (and defining her  boundaries in the process), how to anticipate regular occurrences (context in relation to time), how to count, etc., is truly a remarkable accomplishment of learning for one who has had so little experience with thinking.  It seems ridiculous to try to portray this tiny genius as stupid.

And yet in an adult context, the intellect of a toddler doesn't even rank high enough to be called stupid. That is how vast the eventual context, the "reality construct" of the adult human mind becomes.

Human Identity

But the most incredible component of context is the construct we assemble that places our selves in relation to the rest of the world.

Cogito ergo sum.  "I think, therefore I am" would in the current context (there's that word again) become:

"I put my perceptions into context.  Part of that context is "me", therefore I exist."

And it is interesting to note that both of these conceptualizations suffer from the same unsatisfactory shortfall.  They both suggest that logically if this thing we call thinking is taking place that someone must be present to do the thinking.  But both are silent about the reality of anything that is being perceived.  If you believe you exist, or have context, or think, then you must exist to have this belief.  But whether anything you perceive actually exists is not proved.

But wait, there's more!

If you accept your existence today, we will throw in a bonus gift.  Because there is still much more to come about how the way we think ABOUT context actually creates its own context.

And if we allow ourselves the shortcut of assuming other people in the world are real, we get to explore how what we think affects the thinking of those around us.

Ground rules and basic foundational concepts are truly the "dry goods" of the Concept Mini-mart.  But we have to get past them to get to the candy aisle.

Friday, June 29, 2012

3 Important things about conspiracy theories

I love me a good conspiracy theory.  I probably spend more time than I should watching YouTube videos about the Illuminati, the Masons, JFK, UFOs, and 9/11.  One could easily mistake me for a nut job from not too far off.  (It's a common mistake, but I'm actually a legume.)

But before you jump ship and move on to some lovely scrapbooking blog, please give me a moment to explain.

Conspiracy theories are important.  They are especially important if you're interested (as I am) in how we think about the world around us and how we organize the tiny set of experiences and data we have as individuals into a (usually coherent) concept of reality.

So there are three things we need to accept about conspiracy theories right off the bat:

1) They offer an explanation of something.  Usually that something is a dreadful event (the assassination of JFK or 9/11) or an unknown, potentially unknowable, concept (like the existence of life elsewhere in the universe or how our global society functions as it does).

You don't often hear a conspiracy theorist utter the phrase "I don't think we can know that" or "That's impossible to tell."  The reason is that the purpose of a conspiracy theory is to provide answers.  Our minds are predisposed to seek meaning.  Blaming someone for believing a conspiracy theory is a little like blaming a dog for humping your leg.  They may not have all the facts straight, but they are acting on natural impulse.  Seeking to order the world around us (forming context) is the most basic function of thought, and successful conspiracy theorists tend to have very orderly views of the world.  So in some sense they are to be envied, not frowned upon.

2) They could be true.  Now this one is hard for a lot of folks to swallow, or may I even say comprehend, but it is crucial for a critical thinker to accept it.  And since I will probably get around to arguing sooner or later that critical thinking is the only fully functioning or effective thinking there is, I may as well come out and say that if you cannot accept that a given conspiracy theory may be true, you are not capable of effective thought.  Don't be ashamed.  There's a lot of that going around.  And there is a cure.  But more on that later.  For now we will assume you do not suffer from the disease of a closed mind.

Now a critical thinker is an animal that has grown pretty comfortable with the idea that some things are either unknowable or not yet knowable, or, for extra credit, the notion that we cannot even know some of what we don't know -- Donald Rumsfeld's famous "unknown unknowns".  So it can be hard at first for a critical thinker to accept a neat package of answers to some very difficult questions.  But my greatest frustration with otherwise powerful minds is when they think that they "know" something.  Men like Douglas Hofstadter, the godfather of thinking about thinking, have performed very embarrassing stunts of closed-mindedness (as for example when he explains his falling out with other "skeptical inquirers" over his refusal to debunk certain claims of the paranormal on the grounds that they violated common sense).

I will certainly be revisiting Mr. H many times in the future, thinker of thinking that he is, so I don't need to litigate this case right now.  Suffice it to say that an honest intellect will never stop accepting that what he may view as common sense may in fact make no sense at all.  Scientific history is full of reasonably intelligent folks making perfectly excusable mistakes about the nature of reality because they could only see the world through the frame they had built around it.  The Godfather should know this more than anyone, though, and as a result, his failure to accept anything as at least "possible" (or at least possible enough to be disproven scientifically) stings a bit.  When a mind as flexible as his shows the outer edge of its range, it fills me with the same sadness I feel for the aging basketball player who runs down the court on a fast break and gets his dunk stopped by the rim when he fails to jump high enough.  (That is what basketball players call "getting blocked by Father Time.")  We hate to see our heroes confront their limits and fail.

So what does all this open-mindedness have to do with a system that by definition is a tight little package of answers?  Well, it is the double negative problem.  If we are unwilling to accept a system of tightly wrapped solutions on the face of it, we are merely accepting our own tightly wrapped notion that we can tell what is true and what is false without exploring the underlying facts.  If you're going to allow your own preconceptions to rule how you think, you may as well make it easy on yourself and stock up on the microwavable answer packs that the conspiracy theorists sell.  They are in aisle seven of the Concept Mini-mart, under "Ready Made Answers" (the freezer aisle).

No, as thinkers, we need to do a bit better than that.  I don't recommend overindulging in conspiracy theories, but enjoying a quick one every now and then won't hurt anything.  Some of them are pretty delicious.  The internet has changed everything.  This is not your father's JFK conspiracy.

2b) Part of every conspiracy theory is almost always true.  I honestly don't know if the qualifier "almost always" is even necessary, but what the heck.  We are not trying to be evangelical about what is true, after all, and there may well be a conspiracy theory in the wild that has not one iota of truth in it.  (But that concept would lead me to believe the theory itself would be unintelligible, because it would not even be obvious what true events it was seeking to explain.)

And as open-minded people really into this whole thinking thing, we need to recognize that partial truth is sometimes as good as we can get when we're dealing with perception.  So I for one do not mind sifting through a long list of 9/11 links looking for a few kernels of objective fact that spawned the rest of the explanation.  And lest I make it sound like I don't really take these theories seriously, I have to admit that on more than one occasion I have accepted that I cannot explain why what the theory posits must be false.  In many cases it is obvious that at least something in the "official explanation" is false.  Even if that doesn't have to mean the conspiracy theory is true, it does mean that it is no more false than the "official" version.

Now, the logical conclusions drawn by some of these theories (and yes, sometimes these conclusions are perfectly logical if you accept the underlying premises involved) can be worlds apart from my daily reality.  But that is part of the point.  If I can't entertain what life would be like if these theories were true, or if I were someone who believed them to be true, then I am not doing a very good job thinking about how perception shapes reality, am I?

3) Conspiracy theories are perfect petri dishes of perception.  If a doctor wants to find out if you have strep throat, she will do a culture.  In the olden days that involved sticking a swab in the back of your mouth, rubbing the material onto a bright pink culture dish, and waiting 24 hours to see what grew.

The petri dish is the reality TV of the germ world.  It is not a natural environment but some sort of idealized natural environment where we can see what would happen if these crazy germs were left in a house together.  Well, conspiracy theories provide, for our own observation, the intellectual equivalent of a petri dish.  Confusing complexities are washed clean from our environment.  All we are left with is an idealized world that runs on clear intent with predictable consequences.  As such we get to lock various ideas in together and see how they get along.

To a conspiracy theorist, everything that has happened (pertaining to the subject of the theory) is the result of intent.  If Princess Diana died, it was because someone wanted her dead.  If Lee Harvey Oswald died, it was obviously to cover up the answers he could have given us as to what "really" happened.  There are no unfortunate circumstances leading to unforeseen consequences.  To the conspiracy theorist, no hole in the blocking line was the result of a missed assignment -- it was because the quarterback had to be tackled on that play in order for the greater plan to unfold.

If that is not a model of perception shaping reality, I don't know what is.  And that is why I will spend a great deal of time -- quite possibly some of it on this blog -- staring at the patterns in these petri dishes.  If in the process I start sounding like a nut, remember that a peanut is a legume.  If you want to call it a nut just because it sure seems like one, that's fine.  But you're still technically wrong and I'd appreciate it if you remembered that.

So what about all that thinking?

This blog originally claimed to be set up as a place to think about thinking, complete with all the pompous pointlessness that endeavor implies.  And yet it looks like there has been no thinking at all going on.  What gives?

Well I had to rework a lot of my original approach to things, and I ended up organizing some ideas offline and taking a new look at the whole damn concept.  Because it's not really thinking I want to think about -- it is reality.  And how our perception of it shapes the way we THINK about it.

Oh, yeah, well that's more original.  Look, every simpleton with a year of college and a blog wants to call himself a deep thinker on the ways of life.  I get that.  But this is different.  And in order to explain why it's different, I will need to take some time.  

Because I have had this nagging concept floating around in my brain for three or four years now.  It collects and disperses like a kind of persistent fog.  At times it seems like I am on the verge of piecing the whole thing together in some sort of cosmic revelation, and at other times it seems like a disjointed jumble of thoughts no more coherent than the conversations of people in a crowded movie theater before the lights go down.

In the past couple of years I have been introduced (well, not personally, because he's dead) to Marshall McLuhan and his awesome reflections on how media influence our perception.  This discovery has been serendipitous because I have been trying to piece together how this new thing we call social media reshapes every aspect of our lives.  Now don't get me wrong, "social media" is just a crappy shallow fad on par with disco music.  But it is also a profound reorganization of how we communicate.  And how we think.

What does Twitter or Facebook have to do with thinking?  Well, if you accept that how we think and what we think about are greatly influenced by the things we see and hear, then a great deal.  In ways few people really appreciate, the ability to tweet a message to an unlimited number of people across the globe in a matter of seconds is profoundly re-organizing the way our Hive-mind behaves.  And our Hive-mind is merely the collection of all of our little minds.

And so the personal act of thinking -- even thinking about thinking -- is now more than ever before influenced by the technology that allows us to express information of the most critical importance and (more often) chatter of the most inane kind.  

The very idea of how we think about the world (and our place in it) is increasingly influenced by the new tools of social media.

Now when Marshall McLuhan said "The medium is the message" he was not simply saying, for example, that the message the television is best at imparting was a self-referential one (i.e. "watch more television").  Rather he was pointing out that the very media we use to communicate affect what we communicate about.

Let's look at another technology to make that point clearer.  If the "medium" were the lightbulb and we were to say "The medium is the message," we would not just mean that "The lightbulb conveys the message of light."  No, we would also have to consider the impact that light has on our lives.  All of the changes in our society caused by the convenience of electric lighting are part and parcel of the message that the light brings to us -- the information that a lightbulb conveys.  So the first lightbulb did not simply proclaim, "Let there be artificial light."  Instead, it screamed, "I am about to change you and the society around you in ways you have not even begun to contemplate, because you have not yet lived with electric light."  Yeah, Lightbulb is wordy like that.  But in his proclamation comes the proof that "the medium is the message."  The thing that carries the information is loaded with communication we have not yet even learned to hear.

Something as innocent as a tweet, a 140-character message in a bottle (granted, a bottle that goes everywhere all at once), is announcing to us, "I will bring down governments, reshape your economy + transform the very way U think about the world around U #TwitterTransformation".

Yeah, Twitter's even more pompous than Lightbulb.  But he has a point.  And that's worth thinking about.

You can't argue with a sock puppet

This is going to come up later, so I want to get it down right now.

This Wired article suggests that the military is quite keen to get in on the social media scene -- not just to detect when "rumors" get started in battle zones (or maybe when an uncomfortable truth or two leaks out) but also to respond to these "memes" in real time.  Now I don't like the word "meme," and I will talk at length at some point about why I think merely "signal" or perhaps "social signal" is a better word, but "meme" is the going concept these days, and since this is just a concept mini-mart, I have to stock what sells.  [For anyone who doesn't know, a meme is "an idea, behavior, or style that spreads from person to person within a culture."]

So the goal is to be able to see what's out there in the social media space and get out there and respond to it (or spin it) as necessary.  This can include the creation of "sock puppets" to spread the word.  A "sock puppet," as the name implies, is a fake account controlled by another person (or a piece of software) that acts as a mouthpiece for someone else.  A computerized "sock puppet" response system would use a bunch of artificial accounts (which appear to be genuine) to "shout down" any unwanted viewpoint or message.

To give a crude example, let's say that a village school was hit with a drone instead of the intended target (for whatever reason, whether bad intel or targeting error or what have you).  Many kids died.  If there is a social media footprint in such an area, the reaction could be swift and severe.  The local net could blow up with all kinds of outrage.  "8 children dead. 20 wounded" could be one tweet, with a link to photos of the damage.  "Does this girl look like a terrorist?" could be another, with a picture of a sweet-looking (and very dead) four-year-old.

News travels fast on the web.  But if there were a way to respond, "robo-spin" could automatically step in and produce just enough ambiguity to prevent this from being too big an issue.

Jabavut (sock puppet 1): I was near the school and I saw 3 men leaving the bldg just before the explosion.
PeaceMoon7 (sock puppet 2): Really?  Did you see what they looked like?
Jabavut (sock puppet 1): They were dressed as police officers, but my guess is they were Taliban.
Shpun11 (sock puppet 3 -- Shpun is the Pashto word for "Shepherd" lending an air of authenticity to the fake identity):   They have done this many times.  Disguises to Police
SunWarrior750 (sock puppet 4):  We need to find out exactly what happened.  This is awful.
HopeMission16 (sock puppet 5): Jabavut, I just missed you today,  I am so glad you are safe!

This conversation could be created instantly and plugged into the emerging social media response.  It provides a false account to produce ambiguity, then an inquiry that validates the original observation and prompts greater "detail" for the false account.  A "local commentary" is added, complete with broken English.  Then a demand for "the Truth" prompts the casual reader to reflect on the many possible things which might have taken place and gives them all a sudden equivalency they would not otherwise have.  Finally, the last sock puppet merely feigns interest in the well-being of another sock puppet, leaving the casual observer to believe that these two people knew one another and that there was at least one other person who could vouch for the legitimacy of Jabavut's "observation".
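To see how mechanical this really is, here is a minimal sketch in Python of how a "robo-spin" system might stitch fill-in-the-blank templates into a scripted exchange like the one above.  Everything here -- the role names, the templates, the function -- is invented for illustration; no real system's code is public, and this is just a toy demonstrating the template-and-roles idea, not an actual implementation.

```python
# Hypothetical sketch: assemble a canned sock-puppet conversation from
# templates. All names and templates are invented for illustration.
import random

# Each role in the scripted exchange maps to a fill-in-the-blank line.
# Unused placeholders in a given template are simply ignored by str.format.
TEMPLATES = [
    ("witness",   "I was near the {place} and I saw {n} men leaving just before the explosion."),
    ("prompter",  "Really?  Did you see what they looked like?"),
    ("witness",   "They were dressed as {disguise}, but my guess is they were {villain}."),
    ("local",     "They have done this many times. Disguises to {disguise}."),  # deliberate broken English
    ("concerned", "We need to find out exactly what happened. This is awful."),
    ("friend",    "I am so glad you are safe!"),
]

# A pool of fake accounts, one per role (names taken from the example above).
ACCOUNTS = {
    "witness": "Jabavut", "prompter": "PeaceMoon7", "local": "Shpun11",
    "concerned": "SunWarrior750", "friend": "HopeMission16",
}

def spin_conversation(place, disguise, villain):
    """Fill the templates and return a ready-to-post scripted exchange
    as a list of (account, message) pairs."""
    thread = []
    for role, template in TEMPLATES:
        line = template.format(place=place, n=random.randint(2, 4),
                               disguise=disguise, villain=villain)
        thread.append((ACCOUNTS[role], line))
    return thread

for account, message in spin_conversation("school", "police officers", "Taliban"):
    print(f"{account}: {message}")
```

The unsettling point is how little machinery is needed: a handful of templates and account names is enough to generate endless variations of "ambiguity on demand."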

Scared yet?

Well, the issue gets more complex than my example would lead you to believe.  Suppose the "enemy" were engaged in precisely the same thing, and the school had in fact been blown up by locals who wanted it blamed on an American drone attack.  How would the truth compete with a social media attack designed to spread misinformation very quickly?  The sock puppet is like any other weapon -- it can be used to defend as well as to attack.

And anyway, as the Wired article points out:

Darpa’s announcement talks about using SMISC [in] “the environment in which [the military] operates” and where it “conducts operations.” That strongly implies it’s intended for use in sensing and messaging to foreign social media. It better, lest it run afoul of the law. The Smith-Mundt Act makes pointing propaganda campaigns at domestic audiences illegal.

Phew, that's a relief.  Because without that law from 1948, domestic audiences (we the people) could be subject to the same kind of misinformation propaganda that the Pentagon might use in a battle zone.  Only it appears that, through a quiet little amendment in the defense authorization bill, that protection against propaganda may be about to die.

Now is the time to think about being scared.