How does consciousness emerge from neurons?
#1
Posted 29 August 2014 - 01:59 AM
I know the brain uses biochemical and electrical activity in its neuronal structures, but how can conscious experience emerge from that? I just can't imagine where exactly thought and perception might arise from plain, fundamental electrochemical interactions. Perhaps binary computational processing is a suitable analogy.
#2
Posted 29 August 2014 - 02:47 AM
Consciousness arising out of simple electrochemical interactions is kind of like the amazing things that your computer can do with a large number of transistors that have two states, 'on' and 'off', or one and zero. The key here is "large number". You have an extremely large number of neurons, and neurons can have multiple connections. Neurons are also more sophisticated than transistors. This super large number of neurons is arranged in sophisticated networks that you have spent a lifetime building, pruning, and training. So it's really not simple electrical impulses that are the source of consciousness, but insanely large numbers of impulses running through this sophisticated network. Look up Henry Markram.
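To make the "huge number of simple units" point a bit more concrete, here is a toy sketch in Python (a Hopfield-style associative memory; the network size, stored pattern, and amount of corruption are arbitrary choices for illustration, and nothing here is claimed to model consciousness): each unit only thresholds a weighted sum, yet the network as a whole recalls a stored pattern from a degraded cue.

import numpy as np

# Toy Hopfield-style network: each "neuron" is a binary unit (+1/-1) that
# just sums its weighted inputs and thresholds them, yet the network as a
# whole recalls a stored pattern from a noisy cue.
rng = np.random.default_rng(0)
n = 100                                   # number of binary units
pattern = rng.choice([-1, 1], size=n)     # pattern to store

# Hebbian weights: w_ij proportional to the correlation of units i and j
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)

# Start from a corrupted version of the pattern (20% of units flipped)
state = pattern.copy()
flip = rng.choice(n, size=20, replace=False)
state[flip] *= -1

# Asynchronous updates: each unit applies a simple threshold rule
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", (state == pattern).mean())  # should print 1.0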
#3
Posted 29 August 2014 - 04:22 AM
Quoting #2: "Consciousness arising out of simple electrochemical interactions is kind of like the amazing things that your computer can do with a large number of transistors that have two states, 'on' and 'off', or one and zero. [...] Look up Henry Markram."
Can't resist the urge to chime in when consciousness is the subject.
The problem with the transistor/integrated circuit analogy is that nothing your computer does causes an unknown non-local property to emerge. Well, as far as we know that is. We only know of the existence of non-local non-detectable-by-senses consciousness because we directly experience it. It is difficult to say whether there are any other similar undetected properties or phenomena out there that we don't observe or know about because we do not directly experience them ourselves, that is, they are not 'Us'!
It's the same with water, fractals, shorelines - all of the emergent phenomena in nature: none of them result in unexplained, undetectable properties. 'Water' can be observed directly, as can the large-scale results of fractals. But let enough neurons integrate and, supposedly, consciousness emerges.
I feel the OP's angst over this and don't feel there is anything close to an adequate explanation at this point.
Edited by Brafarality, 29 August 2014 - 04:23 AM.
#4
Posted 29 August 2014 - 03:48 PM
It doesn't matter how many elements you have in your circuits or neurons; that only explains information processing, not consciousness.
#5
Posted 29 August 2014 - 07:58 PM
Quoting #2: "Consciousness arising out of simple electrochemical interactions is kind of like the amazing things that your computer can do with a large number of transistors that have two states, 'on' and 'off', or one and zero. [...]"
Quoting #3: "The problem with the transistor/integrated circuit analogy is that nothing your computer does causes an unknown non-local property to emerge. [...] I feel the OP's angst over this and don't feel there is anything close to an adequate explanation at this point."
Nature's examples of emergent properties are signposts that teach us about the concept of emergence, but they are far, far too simple to result in a non-local property. Our computers are getting closer, but they are still orders of magnitude too simple. In the future that might not be the case. I suppose we will have arguments like "it's not really conscious, it's just pretending to be" until our knowledge of physics takes a major leap and we start to understand the nature of consciousness much better.
#6
Posted 29 August 2014 - 10:00 PM
The polarisation of two photons could be said to be emergent and non-local until measured. Now that's at a much more basic level. Far too simple?
#7
Posted 30 August 2014 - 03:01 AM
Present natural science has no theory or model of consciousness, i.e., there is no equation or natural law in which some quantity C (consciousness) arises from, or is related to, other matter-energy quantities. This major hole points to a fundamental incompleteness in present natural science. It may well be that we are presently on entirely the wrong track in the way we explain and model natural phenomena.
#8
Posted 03 September 2014 - 07:34 PM
I think the problem is that people have a poor definition of consciousness and mystify it. Strict definitions and strict measurement are required for any science to work.
Consciousness is the faculty that integrates sensory information. It integrates the stream of sensory information into percepts and, in humans, percepts then into concepts. Seen this way, consciousness is not confusing. I think people misattribute self-awareness to consciousness, or consider them the same thing. Self-awareness of your own consciousness is the mind noticing itself.
Now, how the brain works to integrate sensory information is for science to solve. As far as I know, the brain is mainly a highly parallel pattern-recognition engine, and the unit of pattern recognition is not the neuron but about 100 neurons arranged in layered columns. Each cortical column is one pattern recognizer. It is much more complex than that, but that's the gist, and we still do not fully understand how it all works.
But, I hear you say, how does consciousness arise from that? That's an invalid question. Consciousness is that. That is the integration of sensory information. Self-awareness is when it recognizes itself: it processes itself and says, oh, there I am. Of course, it cannot simulate itself with itself. That's probably why we get the sensation of our consciousness being 'infinite limitless space'.
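To illustrate the "layered pattern recognizers" idea, here is a deliberately tiny Python sketch (the templates, thresholds, and input are made up; this is not a model of an actual cortical column): low-level units each match a small template in the input, and a higher-level unit recognizes a composite pattern from their outputs.

# A tiny sketch of "layered pattern recognizers": low-level units each
# match a small template in the input, and a higher-level unit recognizes
# a composite pattern from their outputs.

def recognizer(template, threshold):
    """Return a unit that fires (True) when its input matches the template
    in at least `threshold` positions."""
    def unit(inputs):
        matches = sum(a == b for a, b in zip(inputs, template))
        return matches >= threshold
    return unit

# Level 1: three recognizers, each watching a 4-bit slice of the input
edge_a = recognizer([1, 1, 0, 0], threshold=3)
edge_b = recognizer([0, 0, 1, 1], threshold=3)
edge_c = recognizer([1, 0, 1, 0], threshold=3)

# Level 2: a recognizer over the *outputs* of level 1
shape = recognizer([True, True, False], threshold=2)

signal = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0]   # slightly noisy input
level1 = [edge_a(signal[0:4]), edge_b(signal[4:8]), edge_c(signal[8:12])]
print("level-1 outputs:", level1)
print("level-2 recognized composite pattern:", shape(level1))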
Edited by Esoparagon, 03 September 2014 - 07:36 PM.
#9
Posted 10 December 2014 - 01:32 AM
Where consciousness comes into the physical world is called an individual mind. It is actually the inverse of what people like Dennett think.
The presence of a seat and steering wheel in a car does not create a driver, a flower does not create a bee, and a faucet does not create water.
You may react to this as being airhead New Age crap, but that is your emotional reaction.
#10
Posted 10 December 2014 - 02:28 AM
Quoting #9: "It is actually the inverse of what people like Dennett think."
This might just be my emotional reaction, but isn't this kind of like saying that astrophysics is actually the inverse of what people like Hawking think? Appeal to authority? Maybe, but Dennett is a pretty smart dude, dude.
#11
Posted 10 December 2014 - 02:49 AM
I can't, or won't, go into detail, but the answer may be 'sims' (sp?) - this may solve the quantum theory problem. Think in terms of one accidental Earth somewhere else which has, say, an x number of years' head start on our civilization, and that number could be large. This starts to account for a funky range of stuff, from chance to the historical arguments of predestination vs free will. Cause and effect come into question.
Personally, I just want to see what happens.
#12
Posted 10 December 2014 - 11:41 PM
Quoting #9: "It is actually the inverse of what people like Dennett think."
Quoting #10: "This might just be my emotional reaction, but isn't this kind of like saying that astrophysics is actually the inverse of what people like Hawking think? Appeal to authority? Maybe, but Dennett is a pretty smart dude, dude."
Interesting how you rebut your own argument before you even state it.
Plenty of very smart people came up with Epicycles (speaking of astrophysics).
PS: I would be happy to decide the issue with Dennett by an IQ comparison, but I don't think he would go for that, since his whole claim to fame is tied up in his theory.
#13
Posted 23 December 2014 - 07:15 AM
The subject of intelligence and consciousness is still in a state of scientific confusion. Electromagnetism and aerodynamics were in a similar state of affairs 150 years ago. It will probably be several decades until science has a better grasp on these subjects; at that point we will have interesting disciplines like cognitive engineering, neuronanosurgery, and so on.
#14
Posted 27 December 2014 - 05:43 PM
#15
Posted 27 December 2014 - 05:52 PM
Quoting #8: "[...] Consciousness is that. That is the integration of sensory information. Self-awareness is when it recognizes itself. [...] That's probably why we get the sensation of our consciousness being 'infinite limitless space'."
Conscious experience is intimately associated with the integration of sensory information, but it is not the same thing as the integration of sensory information. Here is an analogy: with a distribution of electrical charges there is an associated electric field, yet the electric field is not the same thing as the distribution of electric charges. Electric charge and electric field are two distinct phenomena, although intimately associated with each other.
Perhaps there is a need to distinguish conscious experience from consciousness in the functional sense. A phenomenal zombie logically could be fully conscious in the functional sense and yet there could be absolutely no phenomenon of conscious experience associated with its functional consciousness. David Chalmers provides extensive arguments for the nonphysical nature of conscious experience and its distinctiveness from associated information processing.
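For reference, the standard electrostatic relation makes the analogy precise: the field is fully determined by the charge distribution, yet it is a distinct kind of quantity (this is just textbook electrostatics, nothing specific to consciousness):

\[
  \mathbf{E}(\mathbf{r}) \;=\; \frac{1}{4\pi\varepsilon_0}
  \int \rho(\mathbf{r}')\,
  \frac{\mathbf{r}-\mathbf{r}'}{\lvert \mathbf{r}-\mathbf{r}' \rvert^{3}}
  \, d^{3}r'
\]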
#16
Posted 05 February 2015 - 01:33 PM
Quoting #8: "I think the problem is that people have a poor definition of consciousness and mystify it. [...] Consciousness is the faculty that integrates sensory information. [...] Each cortical column is one pattern recognizer. [...] Consciousness is that. That is the integration of sensory information. [...]"
Perhaps you are right and it really is that obvious, staring us in the face, but we lack the common sense or the appropriate awareness to apprehend it clearly.
I do wish it were something more complex and profound: something like our neuro-activity being inextricably involved with the alternate dimensions string theorists propose, producing consciousness as a byproduct, or perhaps consciousness is even seated there and we are all at present totally unaware of it. Yes, you can call me a dreamer, but without a doubt it is enjoyable to imagine such things.
#17
Posted 05 February 2015 - 04:11 PM
You absolutely do not need string theory or quantum mechanics to explain the human mind. This will be testable in a couple of decades. I would recommend Permutation City by Greg Egan for those who ponder these questions. It is quite staggering and it certainly convinced me of the purely materialistic basis of our mind, even though it is fiction.
http://en.wikipedia....ermutation_City
#18
Posted 05 February 2015 - 05:30 PM
Quoting #8: "But, I hear you say, how does consciousness arise from that? That's an invalid question. Consciousness is that. That is the integration of sensory information."
I think you are exactly right.
People used to think there was a mysterious "life force" that animated living things. Now we understand quite well that life is pretty much just a shorthand for what cells do. There is no need to explain a separate life force. The question of what the life force is has become meaningless.
Similarly, consciousness should be just whatever data processing the brain does. That this corresponds to subjective feelings (qualia) in a person is often posed as a mystery, but what else should these brain states refer to if not subjective experiences of the organism? Objective Platonic truths? Subjective experiences of your dog instead? It only seems mysterious because we don't yet understand in detail how the brain processes information.
Edited by nowayout, 05 February 2015 - 05:38 PM.
#19
Posted 05 February 2015 - 05:36 PM
Quoting #17: "I would recommend Permutation City by Greg Egan for those who ponder these questions. It is quite staggering and it certainly convinced me of the purely materialistic basis of our mind, even though it is fiction."
A great book by a great author. It is a pity he has recently been wasting (IMO) his talent on boring fake-history-of-science-in-different-primitive-societies type stories that I find exasperatingly unreadable. All his older stuff is fantastic.
#20
Posted 05 February 2015 - 05:55 PM
Quoting #15: "Perhaps there is a need to distinguish conscious experience from consciousness in the functional sense. A phenomenal zombie logically could be fully conscious in the functional sense and yet there could be absolutely no phenomenon of conscious experience associated with its functional consciousness."
How do you reckon? The "zombie" would need to run some emulation of a brain. To accurately do so, that emulation needs to be equivalent to a brain, and therefore its states will represent the same experiences. To argue that they are different is like saying that a file stored on hard disc is different from the identical file stored on CD. What you are saying is also sometimes called the Chinese Room fallacy, which has been nicely debunked among others by Douglas Hofstadter (see his Gödel, Escher, Bach book, for example - written for a popular audience but nevertheless a very nice book).
We already have an existence proof that conscious machines are possible, by the way. You are one, presumably.
Edited by nowayout, 05 February 2015 - 05:59 PM.
#21
Posted 05 February 2015 - 06:55 PM
Quoting #17: "I would recommend Permutation City by Greg Egan [...]"
Quoting #19: "A great book by a great author. It is a pity he has recently been wasting (IMO) his talent on [...] stories that I find exasperatingly unreadable. All his older stuff is fantastic."
He had some kind of peak in the mid 90's. I agree about that.
#22
Posted 08 February 2015 - 03:14 PM
Quoting #18: "Similarly, consciousness should be just whatever data processing the brain does. That this corresponds to subjective feelings (qualia) in a person is often posed as a mystery, but what else should these brain states refer to if not subjective experiences of the organism? [...] It only seems mysterious because we don't yet understand in detail how the brain processes information."
I think the mystery is the question of why subjective experience (qualia) exists in the first place. Data processing need not include experience. A thermostat processes "sensory" data, yet it does not experience such data. Simply put, the question is: why doesn't all of this go on "in the dark"?
Another interesting conundrum is the relationship between consciousness and evolution. The leading position in neuroscience and philosophy is that consciousness is an "epiphenomenon" of neural processes. An epiphenomenon is "a secondary effect or byproduct that arises from but does not causally influence a process" (Oxford Dictionary). So, as an epiphenomenon, consciousness exerts no causal influence. Yet at the same time we know that evolution has selected consciousness and preserved it through countless generations, even though, on this account, consciousness is at best unneeded and at worst useless. As I've been taught, evolution eventually discards any trait that does not influence the survival capacity of the organism in a favorable way. So why would evolution preserve an a-causal property?
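To make the thermostat example concrete, here is a minimal control loop in Python (the setpoint, hysteresis, and readings are invented): it processes "sensory" data in a perfectly well-defined way, and nothing in that description requires any accompanying experience.

# Minimal thermostat: read a temperature, compare it to a setpoint,
# switch the heater on or off. It processes "sensory" input, but there
# is no reason to ascribe any experience to it.
SETPOINT = 21.0      # degrees C (arbitrary)
HYSTERESIS = 0.5     # dead band to avoid rapid switching near the setpoint

def thermostat_step(temperature, heater_on):
    """Return the new heater state given the current reading."""
    if temperature < SETPOINT - HYSTERESIS:
        return True          # too cold: switch heater on
    if temperature > SETPOINT + HYSTERESIS:
        return False         # too warm: switch heater off
    return heater_on         # within the dead band: keep current state

heater = False
for reading in [19.8, 20.2, 21.7, 21.4, 20.4]:   # made-up sensor readings
    heater = thermostat_step(reading, heater)
    print(f"temp={reading:.1f}C heater={'on' if heater else 'off'}")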
Edited by Soma, 08 February 2015 - 03:29 PM.
#23
Posted 11 February 2015 - 01:10 AM
You absolutely do not need string theory or quantum mechanics to explain the human mind. This will be testable in a couple of decades. I would recommend Permutation City by Greg Egan for those who ponder these questions. It is quite staggering and it certainly convinced me of the purely materialistic basis of our mind even though it is ficiton.
I would strongly suggest to you that science has shown there is something at the QM level, and well beyond it, associated with the information substrate that the brain/mind is connected to.
#24
Posted 11 February 2015 - 03:02 AM
It is amusing that geeks are desperate to believe that fires are caused by firemen. After all, we always see them together!
#25
Posted 11 February 2015 - 02:46 PM
Quoting #22: "[...] As I've been taught, evolution eventually discards any trait that does not influence the survival capacity of the organism in a favorable way. So why would evolution preserve an a-causal property?"
There are two problems with this. First, evolution does not only keep favorable traits. Evolution can keep neutral traits, and it can preserve even negative traits that are too difficult to optimize without major redesign (for example, the vertebrate retinal blood vessels that are on the wrong side).
Second, I think this evolutionary argument is essentially beside the point, because I think it is an error to consider consciousness a trait separate from the processing the brain has to do for survival. Most likely consciousness just IS the processing the brain does. We can assume that the brain has to have some internal model of the environment encoded in the brain state - why do you think that internal brain state is different from the organism's conscious experience of the environment? Occam's razor, and I, would argue that the trait of consciousness IS that brain state.
#26
Posted 12 February 2015 - 02:32 AM
Quoting #20: "The 'zombie' would need to run some emulation of a brain. To accurately do so, that emulation needs to be equivalent to a brain, and therefore its states will represent the same experiences. [...] What you are saying is also sometimes called the Chinese Room fallacy [...]"
David Chalmers agrees that John Searle's Chinese room argument fails, but this failure is not relevant to the phenomenal zombie argument for conscious experience being a very real but nonphysical phenomenon. In Chalmers' view, the Chinese room would actually have conscious experience associated with its information processes. Chalmers' phenomenal zombie arguments relate to LOGICAL possibility rather than to NATURAL possibility.
A useful example concerning conscious experience is seen in the ethics of boiling a lobster alive. The lobster does not sit quietly through the ordeal, but struggles desperately to escape intense pain. The ethical question is whether the lobster's actions are simply a mechanical reaction to the way it is being stimulated, or whether there is an intense conscious experience associated with its pain. The phenomenon of conscious experience must not be conflated with the information processes of consciousness with which conscious experience is intimately associated.
Concerning a file stored on a hard disk and an identical file stored on a CD: the digital information stored on them is truly identical, but this does not make the hard disk identical to the CD. There is more to the hard disk and the CD than just the digital information stored on them. In the case of conscious experience, David Chalmers believes that two systems running identical information processes will actually have identical conscious experiences, according to his principle of organisational invariance. Despite this, Chalmers argues that the phenomenon of conscious experience is not the same thing as the information processes with which it is intimately associated. There is more to reality than just information processing.
I would go further than Chalmers and argue that even two minds with identical information processing could have conscious experiences that are not identical. I would go even further to argue that two minds that are physically identical could have a fundamentally different phenomenon of conscious experience, due to a nonphysical difference that is not accessible to physical investigation. This is a matter that is crucial to the issue of immortality. Although David Chalmers and I do not agree on this point, we both agree that conscious experience is a nonphysical phenomenon above and beyond the information processes with which it is intimately associated.
#27
Posted 12 February 2015 - 05:41 AM
Quoting #23: "I would strongly suggest to you that science has shown there is something at the QM level, and well beyond it, associated with the information substrate that the brain/mind is connected to."
And what would that be? We live in a quantum world, but our brain is by no means running a quantum computer.
I think "quantum" is just a fashionable word people like to throw around.
I recommend the Whole Brain Emulation Roadmap report; it's free and goes into some detail on the issue of QM and neurons.
http://www.fhi.ox.ac...dmap-report.pdf
Quantum computation
While practically all neuroscientists subscribe to the dogma that neural activity is a phenomenon that occurs on a classical scale, there have been proposals (mainly from physicists) that quantum effects play an important role in the function of the brain (Penrose, 1989; Hameroff, 1987). So far there is no evidence for quantum effects in the brain beyond quantum chemistry, and no evidence that such effects play an important role for intelligence or consciousness (Litt, Eliasmith et al., 2006). There is no lack of possible computational primitives in neurobiology nor any phenomena that appear unexplainable in terms of classical computations (Koch and Hepp, 2006). Quantitative estimates for decoherence times for ions during action potentials and microtubules suggest that they decohere on a timescale of 10^-20 to 10^-13 s, about ten orders of magnitude faster than the normal neural activity timescales. Hence quantum effects are unlikely to persist long enough to affect processing (Tegmark, 2000). This, however, has not deterred supporters of quantum consciousness, who argue that there may be mechanisms protecting quantum superpositions over significant periods (Rosa and Faber, 2004; Hagan, Hameroff et al., 2002).
If these quantum-mind hypotheses were true, brain emulation would be significantly more complex, but not impossible given the right (quantum) computer. In (Hameroff, 1987) mind emulation is considered based on quantum cellular automata, which in turn are based on the microtubule network that the author suggests underlies consciousness.
Assuming 7.1 microtubules per square μm and 768.9 μm in average length (Cash, Aliev et al., 2003) and that 1/30 of brain volume is neurons (although given that microtubule networks occur in all cells, glia - and any other cell type! - may count too) gives 10^16 microtubules. If each stores just a single quantum bit this would correspond to a 10^16 qubit system, requiring a physically intractable 2^(10^16) bit classical computer to emulate. If only the microtubules inside a cell act as a quantum computing network, the emulation would have to include 10^11 connected 130,000 qubit quantum computers. Another calculation, assuming merely classical computation in microtubules, suggests 10^19 bytes per brain operating at 10^28 FLOPS (Tuszynski, 2006). One problem with these calculations is that they impute such a profoundly large computational capacity at a subneural level that a macroscopic brain seems unnecessary (especially since neurons are metabolically costly).
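As a quick sanity check on two of the figures quoted above (a small Python calculation; the millisecond figure for typical neural activity is my own assumption, the other numbers are the report's):

import math

# 1. Decoherence vs. neural timescales: even the slowest quoted decoherence
#    time (1e-13 s) is about ten orders of magnitude shorter than
#    millisecond-scale neural activity (1e-3 s, an assumed typical value).
decoherence_s = 1e-13
neural_s = 1e-3
print("orders of magnitude gap:", math.log10(neural_s / decoherence_s))  # ≈ 10

# 2. Why a 2^(10^16)-bit classical emulation is "physically intractable":
#    the base-10 logarithm of 2^(10^16) is about 3e15, i.e. the bit count
#    would have roughly 3e15 decimal digits, dwarfing the ~1e80 atoms
#    estimated in the observable universe.
qubits = 1e16
print("log10(2**qubits) ≈", qubits * math.log10(2))  # ≈ 3.0e15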
#28
Posted 12 February 2015 - 06:59 PM
Quoting #25: "Second, I think this evolutionary argument is essentially beside the point, because I think it is an error to consider consciousness a trait separate from the processing the brain has to do for survival."
If processing equals consciousness, then any degree of processing should correlate with a commensurate degree of consciousness, but we don't know this to be the case. On the other hand, if consciousness is emergent and manifests at a critical threshold of processing complexity, then it would be considered a product: what was not produced at lower levels of processing is produced at higher levels. There must be some order of processing involved, as my brain processes innumerable bits of physiological data while in deep sleep, yet no consciousness is being produced.
The question then becomes: is this product active or passive? Is it influential, or entirely a-causal? Consciousness-as-epiphenomenon holds that consciousness is an uninfluential byproduct of processing. Remember, an epiphenomenon is "a secondary effect or byproduct that arises from but does not causally influence a process" (Oxford Dictionary). A good metaphor for an epiphenomenon is the noise an engine makes. The noise is intrinsic to the running of the engine, yet totally unnecessary; the engine could function just as well without it. So, if consciousness is truly an epiphenomenon, our organism would function just fine without it, like an engine without the accompanying sound.
Is this the case?
Edited by Soma, 12 February 2015 - 07:17 PM.
#29
Posted 12 February 2015 - 07:45 PM
When you get to this type of topic, most of it rests on semantics.
So a lot depends on how you define "consciousness".
In Europe and the US, it is usually considered to be our thoughts ("I think therefore I am"), while in Asia, it is the underlying awareness that can be perceived in between thoughts. The former can clearly "emerge from neurons" while the latter cannot. (And before you say "sensory input" - I'll point out that that awareness still exists in a Sensory Deprivation Tank.)
#30
Posted 13 February 2015 - 02:32 PM
Quoting #26: "[...] I would go even further to argue that two minds that are physically identical could have a fundamentally different phenomenon of conscious experience, due to a nonphysical difference that is not accessible to physical investigation. [...] we both agree that conscious experience is a nonphysical phenomenon above and beyond the information processes with which it is intimately associated."
This seems to be an untestable hypothesis, meaning it is not amenable to science, but a matter of faith. In other words, it is religion.
Does Chalmers discuss any ways his hypothesis could be tested?
It also seems to be an unnecessary hypothesis. What is Chalmers' argument for the necessity of it?
Edited by nowayout, 13 February 2015 - 02:52 PM.