In this thread, Bruce tries to explain the architecture of the AGI that Novamente is trying to develop as a prototype. This architecture is based on a black-box representation of human behaviour, i.e. a more-or-less functional representation of our behaviour that is not necessarily a model of our brain. The video attached to that thread, of Dr. Ben Goertzel's presentation, suggests the same, if I remember and understand it correctly.
Reading the introduction of this paper posted by Zoolander, I came up with the following dilemma concerning the uploading or downloading of minds. Maybe (or even probably) this is not an original thought; I haven't yet had time to read much about AGIs and the ways mind uploading and downloading could be implemented. It is also quite similar to the observation Lazarus already made in this thread. But bear with me, please.

As far as I understand, the human or animal mind does not function like a computer, at least not like the von Neumann machines we use today. The essential point in this context is that our minds do not work like a general-purpose processor with general-purpose memory, into which a piece of software can be loaded that can in principle carry out any function whose inputs and outputs fall within the scope of the available I/O. As far as I understand, in the human mind there is no separation between "software" and "hardware". The software is the hardware and vice versa: it is the specific configuration of synapses, grown according to genetic factors and experience. So if we were able to model and implement the functions of the different building blocks, i.e. the synaptic network components, and "wire" instances of these components into a network equal to that of a particular mind, the "upload" would already be carried out. It could be as "simple" as that: simulate the different synaptic functions, copy and connect them according to the network of a particular brain, and voilà. Of course, this is an utterly oversimplified simplification.
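To make the idea concrete, here is a toy sketch in Python, entirely my own illustration and not anything from the Novamente architecture: every synapse is one instance of a single generic component, and "uploading" amounts to instantiating and wiring those components according to a measured connection map. The `Synapse`/`Neuron` classes, the threshold rule, and the `connection_map` format are all hypothetical; real synapses are vastly more complicated.

```python
# Toy illustration: a "brain" as nothing but wired-up instances of one
# generic synaptic component. All names and numbers are hypothetical.

class Synapse:
    """One generic building block: scales and passes on a signal."""
    def __init__(self, weight):
        self.weight = weight

    def transmit(self, signal):
        return self.weight * signal

class Neuron:
    """Sums its synaptic inputs and fires when a threshold is crossed."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.inputs = []          # list of (Synapse, source Neuron)
        self.output = 0.0

    def step(self):
        total = sum(s.transmit(src.output) for s, src in self.inputs)
        self.output = 1.0 if total >= self.threshold else 0.0

def upload(connection_map):
    """'Upload': instantiate one Synapse per measured connection.

    connection_map: {(src_index, dst_index): weight}, as if scanned
    from a particular brain (a hypothetical data format).
    """
    n = 1 + max(max(pair) for pair in connection_map)
    neurons = [Neuron() for _ in range(n)]
    for (src, dst), weight in connection_map.items():
        neurons[dst].inputs.append((Synapse(weight), neurons[src]))
    return neurons

# A tiny "scanned" network: neuron 0 drives 1 and 2; both drive 3.
net = upload({(0, 1): 1.0, (0, 2): 0.6, (1, 3): 0.7, (2, 3): 0.5})
net[0].output = 1.0               # stimulate the input neuron
for neuron in net[1:]:            # propagate activity one step
    neuron.step()

# Neuron 2 stays below threshold, so neuron 3 gets only 0.7 and
# does not fire either.
print([n.output for n in net])    # → [1.0, 1.0, 0.0, 0.0]
```

The point of the sketch is that all the "behaviour" lives in the connection map, not in any behavioural rules: change the wiring data and the same generic components produce a different mind.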
The advantage of such an approach, as far as I understand, is that you only have to rebuild the synaptic network, using computer models of the basic synaptic functions, to recreate all aspects of the original brain. This could be almost a one-to-one copy, provided we had sufficient insight into all the possible synaptic functions and their connections. And provided there are not too many variations in synaptic function. And provided we could implement a model of each of these synaptic functions, i.e. simulate every one of them.
This seems to me a far more manageable approach than building a model of our behaviour, with the countless parameters that would have to be measured and copied into that behavioural model, a process prone to misinterpretation and plain errors. E.g. I consider myself (and, for that matter, every other individual) quite unique. With a functional behavioural model, how can we be sure it could ever capture the behavioural parameters of each individual? If we could recreate the synaptic network of a brain in a (software) model, we would automagically recreate all of its behavioural aspects.
???
Edited by brainbox, 24 September 2006 - 09:50 PM.