Since I found a job I’ve had less time to think, and since I’ve had less time to think I’ve not come up with any quasiphilosophical bullshit, and since quasiphilosophical bullshit is like half of this blog, there’s been a sharp decline in the number of posts lately.
(The other half is me whining about not being able to write. I’m a complex man.)
Nevertheless I got to thinking about something I think about a lot: What is a mind?
Way back when, I toyed with the idea of using paper as the storage medium in a teleportation scenario. This was meant to highlight how ridiculous teleportation seems to me.
On the other hand, I have no real difficulty accepting intelligence in non-biological media. When you accept that matter is matter, no matter (ahaha) whether it’s part of an organism or not, there are few arguments against intelligence and/or consciousness appearing in substances other than carbon, hydrogen and whatever else we humans are made of (biology fo lyfe yo). It is not unreasonable to imagine life based on silicon, even if it might be somewhat far-fetched.
So the specific elements involved are unimportant. Then what is? The interactions, the structures and the continuous change of those structures. I fully accept that a simulation of a human mind would, in all things important, be a human mind. Which is why I’m so unnerved by research that aims to create such a simulation.
But if we accept that a computer juggling electrons can contain a consciousness, then we also have to accept those electrons as being just as incidental as the organic matter of the earlier paragraph.
That’s when I started thinking about the difference engine and about steampunk computers. If we can replicate the function of a brain in an electronic computer, then why not by mechanical means? I mean, that’s what the brain does: knocking particles against each other and whatnot. Remember, the medium is unimportant; only the structure matters.
Such a simulation would be very slow, but it would be a conscious mind. Gearboxes instead of neurons, chains and wires instead of synapses, but still a mind. Just because the machine has no electronics doesn’t make it any less capable of containing an AI than a regular old supercomputer.
This in itself was fascinating, but I got to thinking that perhaps we could take it one step further, which is where the teleportation thing comes back. In that example I imagined it all very steampunk, with ocean liners and typed sheets of paper and so on.
Now imagine that we have a huge office complex. In this complex we have an immense number of clerks working. As in all large offices, we would have divisions. Let’s imagine the divisions are connected via an internal mailing system. Not all divisions would be connected to all others, but only to those they need to deal with regularly.
Let’s further imagine that every division is in fact reenacting the function of a neuron in the brain, that the mailing system stands in for the synapses. Each individual clerk does their job according to the rules and regulations they’ve been instructed in.
This would be a mind and a consciousness.
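The office above can be sketched as a toy message-passing network. This is only a rough illustration of the analogy, not a serious neuron model: the class names, thresholds and wiring are all invented. Each division accumulates incoming memos and, past a threshold, fires a memo of its own to the divisions it is connected to via the internal mail.

```python
from collections import deque

class Division:
    """A crude neuron: memos accumulate, and past a threshold the
    division 'fires' a memo to every connected division."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.total = 0.0
        self.recipients = []  # divisions reachable by internal mail

    def receive(self, value):
        self.total += value
        if self.total >= self.threshold:
            self.total = 0.0  # fire and reset
            return True
        return False

def deliver_all(mailroom, log):
    """Drain the mailroom queue, forwarding memos as divisions fire."""
    while mailroom:
        division, value = mailroom.popleft()
        if division.receive(value):
            log.append(division.name)
            for r in division.recipients:
                mailroom.append((r, 1.0))

# Wire up three divisions: A feeds B, B feeds C.
a = Division("A", threshold=1.0)
b = Division("B", threshold=2.0)
c = Division("C", threshold=1.0)
a.recipients = [b]
b.recipients = [c]

log = []
mail = deque([(a, 1.0), (a, 1.0)])  # two memos arrive at A
deliver_all(mail, log)
print(log)  # A fires twice, B needs both memos before firing, then C
```

The point of the sketch is that no individual clerk (or line of code) knows anything about the pattern being computed; the behaviour lives entirely in the wiring and the thresholds.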
I’ve thought about this before: the emergence of a consciousness from components, each following a simpler set of rules. For example, I’ve sometimes wondered what would happen when humanity reaches numbers of hundreds of billions, spread across the stars. I considered the neuron and what it is. In the most generic terms, it’s a unit that inputs, processes and outputs data sent to it by other, similar units.
How is that different from people taking in information, digesting it and passing it on to others? The main difference is complexity and scale, in both time and distance.
So with enough people communicating, could there arise a consciousness that we, as mere components, would not even sense? A mind that would think across millennia or more, while its human parts died and multiplied, replacing themselves.
The analogy breaks apart somewhat; the cells of our bodies aren’t close to being sentient, so we can’t really expect this supermind to look upon us as we do single cells. On the other hand we should probably be careful about predicting the behaviour of something of a higher complexity than ourselves.
The most interesting question is of course: Has this already happened? I mean, it’s not a given that a supermind built from separate sentients would need the same number of components as a mind built from non-sentient parts.
Is this what a culture is? Is this what a crowd is? Is breaking up a crowd murder of a supermind? Or is it just putting it to sleep?
How many minds is each of us part of?