Mildly off-topic for this forum

edited November 2015 in Everything else
I recently got into an argument with someone on the subject of strong AI - he claims it's impossible and always will be, while I claim that human-level intelligence could be achieved through brain scanning and simulation at the very least. I made the argument here: http://pastebin.com/sM2PiK7u

Of course I'd also argue that implementing AI using a simulation of a brain is horrendously inefficient, but it should still be possible given enough time and resources.

The guy I was arguing with outright refused to point out the flaws in my argument (and I can think of a few myself - none that can't be defended, but there are criticisms that could be made), so I'm wondering if anyone here has found an actually compelling argument against the possibility of strong AI.

Comments

  • Strong AI is completely possible. Anyone who says it isn't is overvaluing themselves. Sentience and intelligence are not some fundamental mysteries that can only be explained by weak fill-in ideas like souls; they are just gestalt systems created through a complex learning network and a societal structure designed to nurture an intelligence into sentience.

    Humans are not 'special' or 'unique', we're just a product of a learning network placed inside a structure designed to cultivate it.
  • edited November 2015
    I'm certain Strong AI is possible as well. As far as I can tell, system power is the biggest limiter for AI complexity today. 
    However, the idea of a soul actually makes quite a bit of sense, @ElectricFeel. It certainly is a gestalt system, but that makes it no less significant. If the body is a biological computer, with DNA and other chemical reactions serving as the storage devices for the duration of the individual's lifespan, what are books, punch tapes, cassettes, floppy disks, optical disks, flash drives, and quantum memory? They're a form of memory used to preserve knowledge and data. In some cases, the data outlives our physical bodies.
    From what I gather, the "soul", as described by religions and philosophers throughout history, serves a similar function. In a way, a book, or even the memories and stories people retain about us after our death, gives us a kind of immortality. Each is a piece of the information that once was that person's existence. Thus, as long as the story that was an individual's life remains to be remembered/observed, we still "live on" in a way. And if an artificial intelligence ever reaches the complexity and development that people recognize and interact with it as another individual, the AI can be said to have "gained a soul" or "become sentient". Whichever suits your taste.

    It's all about the observer, and how that observer records the data they take in about the AI. Written literature and images can be considered the earliest forms of artificial life. They contain "signalling methods" in the form of the content they convey. Their self-sustaining process is the role they play in the preservation of information. Because they were a feasible and (at one time) efficient method for developing civilizations to store information, they survived and evolved into a "species" better capable of storing information, driven by the governing principles of their universe (humans are those governing principles).
  • Sure, if you define a soul as a desire for legacy and the afterlife as the propagation of your impact on the world, that can have meaning.
    A relevant quote on the topic: "They say you die twice: once when you take your final breath, and again when someone says your name for the last time."

    The layering of religion on top of the concept is not something I enjoy, and I disagree with the rhetoric various churches place on the idea.
  • I've always considered my physical brain to be my soul if you define soul as that which makes you you.
  • As already said, strong AI is possible since it's just a matter of gradually adding more complexity to current day AI.

    I don't believe your 'soul'/consciousness is synonymous with your brain. A subjective experience is clearly not the same as a physical object. The brain causes consciousness; consciousness is a feature of the brain. If your brain ends, so does your consciousness. But the two are not the same. I'm probably just arguing semantics here, though.

    Going slightly off-topic: I don't believe a strong AI will by default have some kind of sentience/consciousness/soul/mind (i.e. subjective experience), which is what functionalism claims it would. Just because we can't explain consciousness doesn't mean we can take one attribute of the organ that causes our consciousness (i.e. the information processing ability of the CNS) and equate that to consciousness. Following that logic we'd have to assume any physical object/collection of objects that somehow consistently responds to its environment in a specific way (i.e. processes information) has consciousness. Things get really silly beyond this point. https://en.wikipedia.org/wiki/China_brain#The_thought_experiment

    You can even replace the people of China with any random object, e.g. pots and pans, spoons, pencils, etc.

    inb4 it's a small step from spoons to the stuff our brain is made of (which is a pretty valid argument I suppose). But we KNOW for sure brains lead to consciousness. Attributing consciousness to other hardware without knowing how/why consciousness works is just confusing imo, even IF functionalism is true.
  • edited November 2015
    The above declaration is logically flawed. Saying that a strong AI is possible just by adding more and more complexity to a current AI is like saying that you can make a flying ferret by continuing to glue more and more feathers to it. I'm being a little facetious, but the point is that the statement belies a certain "this thing must happen because it must happen" type of logic. Complexity and consciousness are not synonyms.

    I'm not saying it's not possible, just that this hand-waving about possibility due to imperative is exactly the type of thing that transhumanists do that people tend to look a bit sideways at. Or that we do when people start assuming that if you just cram enough hardware into a biological system it will work and, tada, cyborg future.

    Now, as to the question at hand, 7 is a little weird. Your premise implies that the connectome mentioned in 6 can be used as a functional model. The fact that you start off with 'if' means you know that the premise is shaky. Feathers are used for flying. If enough feathers are glued on a ferret.... I'm not just debate-club picking at points here; it's kind of a leap.

    Your premise 8 is significantly faulty. The claim that human cognition (or we could say biological cognition) is purely a matter of the brain alone, without outside input, is just not true. "I think therefore I am" is all well and good until you don't have the means to define "I" without input. The body is part of the self. Mutable to be sure, but not really independent. Without senses (and the biological means to sense) there is no decent way to delineate the self from the other. And without a way to express (output) you literally cannot express the concept. Take the brain out of the box and all you have is an inert lump of matter. It doesn't work. Now, say you hooked it up to a life support system, and some inputs, and you figured out how to sync that. And then some outputs, and you could make them human-understandable. At this point, all you have done is substitute inorganic parts for organic parts. However, those parts are crucial in allowing the brain to function, and therefore important in cognition.

    Premise 9 is based on 7 and 8, so it falls with them.

    At this point, your conclusions are based on pure speculation. Worse still, because of this, you are saying that the biggest issue is more hardware (i.e. more brain bits). Premise 8 basically takes the "I am a brain creature driving a meat suit" idea to the extreme, and means you're hoping that if you just make enough brain models then they're going to go out and drive a... computer suit or whatever. It's really simplistic. Especially if you are talking about modeling a human mind inside of a computer. You can make analogous inputs and outputs, but then you also need to model an environment for these things to take input from. How do you make the internet, on a code level, make sense to your fingertips? It's not just a big question that may take years; it's a question of understanding how the brain works. Which we don't, really. So how do you model a response to something when you don't know how it works? Especially in a new environment for which you have no previous input models?

    Oddly, none of this disproves the chance of strong AI. All it does is point out that a hypothesis with speculative premises that are not provable is a bad hypothesis.

    In terms of creating a completely alien intelligence that has no connection to our aims, ideals, or motives, well, sure, that seems possible. The thing is, if you design a system to do the thing that you want it to do, then how can you honestly say that it is independently intelligent? If a core premise of the design is that it is built safely, or for you, or for your goals, then is it independently intelligent or just a complex tool? Even by programming it to understand human language, you are creating a framework which boxes in its development so that it works for you or with you....
    EDIT: the concept was put forth originally in the book Ventus and is called Thalience. It's currently used by some people in the AI community when talking about self-organizing networks; it has to do with being capable of defining one's own categories about the world, as opposed to having them imposed on you, and is seen as a crucial concept for true AI.

    With all this talk of consciousness, I suggest y'all check out this cool article about the passive frame theory of consciousness. Kind of relates to this conversation. Paper cited at the bottom.
  • I've been waiting for someone to give actual critique of my argument, so thank you.

    "Saying that a strong ai is possible by just adding more and more complexity to a current ai, is like saying that you can make a flying ferret by continuing to glue more and more feathers to it"

    That's not what I'm arguing at all - my basic point is that it should be possible to simulate a brain by building an accurate enough mathematical model of biological neurons and then using a computer to run lots of these models and link them together in the same way as a human brain. This would (and does) work for simulating a computer, for example - build a model of the transistors, link the virtual transistors together, and you get an emulator that's 100% accurate (as opposed to the more efficient but less accurate methods often used in emulators).

    Of course you're correct that to get any actual functioning you need sensory input and motor output, and for this to be useful you'd probably need a simulated or robotic body. Perhaps I should have stated this in my premises - I believed it to be implied.

    "Now, as to the question at hand, 7 is a little weird. Your premise implies that a connectome mentioned in 6 can be used as a functional model. The fact that you start of with if means you know that the premise is shaky"

    It's fairly well accepted in the neuroscience community that the informational content of biological brains is represented by dendrite structure (in other words, by the connectome). I suppose I should also mention synaptic weights here.

    "Premise 8 basically takes the "I am a brain creature driving a meat suit" to the extreme and means that you're hoping that if you just make enough brain models and then they're going to go out and drive a ... computer suit or whatever.  It's really simplistic."

    It does indeed take that to the logical conclusion - because I do not believe that human intelligence arises out of anything other than the physical brain, which is constructed out of neurons. Human brains do of course require inputs and outputs to USE that intelligence, but I've addressed this above.

    To use the transistor analogy, I propose that you could simulate a computer without understanding the full design so long as you know how to simulate transistors and other components and have a full circuit layout. If you understand how neurons work and have enough computing power, it should also be possible to simulate any system built out of neurons - and just as the transistor example requires simulating the actual I/O, so does the brain example. (There's a rough sketch of what I mean at the end of this post.)

    As for consciousness, I consider it irrelevant to the question of intelligence - they're two related but different concepts, and I do not claim that we can create consciousness in computers.
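
    To make the neuron-model idea concrete, here's a rough Python sketch of the kind of thing I mean: a handful of leaky integrate-and-fire point neurons driven by a tiny made-up connection matrix standing in for "connectome plus synaptic weights". The numbers, the neuron model, and the wiring are all invented for illustration - a real emulation would obviously need far more biophysical detail.

        import numpy as np

        # Toy "connectome": entry [i, j] is the synaptic weight from neuron j to neuron i.
        # These numbers are made up purely for illustration.
        weights = np.array([
            [0.0, 0.8, 0.0],
            [0.0, 0.0, 1.2],
            [0.5, 0.0, 0.0],
        ])

        n = weights.shape[0]
        v = np.zeros(n)                        # membrane potentials
        threshold = 1.0                        # spike threshold
        leak = 0.9                             # per-step decay of the potential
        external = np.array([0.3, 0.0, 0.0])   # constant drive into neuron 0

        for step in range(50):
            spikes = (v >= threshold).astype(float)   # which neurons fire this step
            v[spikes == 1.0] = 0.0                    # reset the ones that fired
            # Each neuron integrates its decayed potential, external drive,
            # and weighted input from whichever neurons spiked last step.
            v = leak * v + external + weights @ spikes
            if spikes.any():
                print(f"step {step}: spikes at neurons {np.nonzero(spikes)[0].tolist()}")

    The point is just that everything the "network" does falls out of the per-neuron rule plus the wiring; nothing about the overall behaviour is written down anywhere else.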
  • What do you guys mean by the term Strong AI? I used this definition from Wikipedia: "the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can."

    In my opinion we can talk strictly about Strong AI without the need for the words consciousness and intelligence. I don't know if your first paragraph was in response to my post, @glims, but I agree that complexity =/= consciousness. I do think Strong AI is theoretically possible and (probably) just a matter of time (using the above definition, at least!). Current-day AIs can't keep a dialogue going for long before the human notices he's talking to an AI. But isn't this just a matter of adding more complexity to this particular AI (linguistics, semantics, syntax, sentence structure, humour, etc.)? This stuff is progressing very rapidly. The same could be said for every other intellectual task a human can perform. Even if it's not feasible in the near future, it's definitely possible. Strong AI imo is just a matter of complexity. Consciousness obviously is not.

    @garethnelsonuk You say you consider consciousness irrelevant. But would our brains work the way they do without consciousness? If they don't, your emulated brain won't function the same as a human brain. Will this imperfect emulation be a strong AI? We know for a fact that our conscious experience can directly influence our brain activity to some degree. How much functionality do you lose if you keep the basic biological/physical rules of the brain but aren't able to emulate consciousness? Emulating a brain to get strong AI (similar to that of our brain) may be very inefficient for this reason. It would be much easier and more controllable to just build software systems that are not based on the brain but built to perform a specific task (like having a dialogue).

    Your transistor analogy isn't very convincing at this point in time. We know transistors are the lowest level we need to emulate in the case of a computer emulation. But in the case of the brain, we don't know which resolution is required for reasonably accurate emulation (i.e. functional emulation). Are cells enough? Which cells? Is the distance between cells of relevance? Cell size? Cell health? Conduction speed? Blood flow? Will you emulate sleep? Or exercise? Hormonal fluctuations? Do we need to go deeper? Proteins? Genetic code? Gene transcription? Ion concentrations? Even deeper than that? Individual molecules or atoms?

    http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf

    Page 11 has a nice table (and was clearly written by a hardcore functionalist!).

    Now, you're clearly talking about the theoretical possibility of emulating a brain. If we assume at some point our tech is so advanced we can emulate every individual atom in a brain and hook this emulation up to an input and output device... This is your starting point, right? That would seem like a pretty good emulation to me, except of course, like I said, there's the uncertainty of consciousness and its role in our brain's functioning.
  • edited November 2015
    Edit: this part should be addressed first-
    I wasn't saying that you, @garethnelsonuk, had mentioned complexity. It had just come up in a couple of comments and I was addressing that as well. However, your transistor analogy is just that. Increase a system with pieces that do things to increase complexity until, voila, working computer. 
    Now, you said "This would (and does) work for simulating a computer for example - build a model of the transistors, link the virtual transistors together and you get an emulator that's 100% accurate (as opposed to the more efficient but less accurate methods often used in emulators)." This relates to one of my biggest issues with this thinking. Biological systems are not just very complex computers, unless you are willing to say that really it's all just about pushing atoms around. Then yeah, everything is. Then this whole discussion falls apart. At some point nanotech happens, blah blah blah...
    When you put a transistor down and it does a thing, it does a thing because we built that thing at one time. We know how it works. Anything we make from it is just a scaled-up version of those same principles at a more complex scale. We are working in the reverse direction with biology. We barely understand any of it. Especially the mind and intelligence. So when we model a neuron, it's just a model. It kinda does what we want it to. We don't really get why these things do the things they do when you string them together. It's not a perfect model because it is independent of the things that developed neurons, like bio systems. We're reverse engineering technology, and it isn't our technology. Beyond that, there is the whole issue of gradients vs. binary, and work that posits that the brain may work like a quantum computer (still iffy, but about as researchable as the hard AI you are talking about).

    I feel like @Slach is making some good points about sleep, hormones, etc., which also brings me back to how just a brain is not enough. Our intelligence is a system built upon our physicality. Our limbic, hormonal, and nervous systems all play a huge part in how our intelligence has developed. There is no possible way that you can say that the two are separate without just straight up ignoring neural developmental biology. Our intelligence developed to move the body, further reproduction, and eat things. We've gotten really good with it and we use it for other things, but this is the bedrock of developmental bio right here. If we're going to pretend like we've somehow moved past our biology and our minds are magically separate, well, that may sound really cool, but it just doesn't work that way.

    When I said that you are implying that the connectome could be used as a theoretical model, I meant just that. It's an implication. It hasn't been done; it doesn't work this way. It's a good tool, but a map, not a forest. Even if you have an accurate model of a forest (to take the analogy further), which is really just another forest... a forest isn't a forest without an environment to exist in. You can't have a functioning forest without an ecosystem. To wit, your model doesn't do anything without all the other things that make the model work. A brain is not enough. This isn't just simple input and output. This is an entire body awash in chemical gradients. Huge portions of our functioning go to processing and responding to these. It's all connected.

    You can take all of the hardware that you want, transistors and chips, and you can build a computer without knowing what it does. And when you plug it in, it will do just that: nothing.
    You can model every bit of the brain, press go, and it will just sit there and thrash or short or whatever. We don't know the code. As far as we know, intelligence is an emergent phenomenon that comes from complex biological interaction. The one-to-one mapping sounds like a great tool, but just that. If AI happens, it is most likely to be an emergent phenomenon that comes from complex digital interaction.


    Now, I am not saying that consciousness is or isn't necessary, or even really a thing for all I know. It's just a placeholder word and we were all using it. However, if the goal is just to build a really, really complex tool that can fool people long enough, then complexity may cut it. But that isn't anything more than a complex tool. This is why I started using the term "independently intelligent" and talking about thalience. If you have something that, at the core, only does what you tell it to do, then that's not intelligent. If it can ignore you, fuck with you, talk to you, pretend to be fake or real or whatever, make up its own words for things and not tell you what they mean because it doesn't want to tell you - that is independent intelligence. I feel like this is what we call consciousness. Will, or whatever. All sorts of things have it. Computers don't. This is a code- and language-level problem with developing AI, not technically a complexity one.
  • As for Consciousness, wouldn't the "I think, therefore I am" principle apply? 

    Would a program that could develop better versions of itself eventually lead to Strong AI, if that program's purpose was to interact with humans and survive in the world?
    "I think therefore I am" falls apart in this case, because you need to be able to define "I" and you can't do that without input. And just making something that says it's thinking isn't the same. You would need proof of thought through spontaneous action not defined by the designer, but repeatable through the "desire" of the program, independently. It's hard to define this. There are lots of essays about just this topic.

    Survive in the world, sure. Again, defining interactions with humans as a purpose denotes a tool, not an independent.  Horses do interact with people, horses can be used as tools. It is not necessary that they be either interacting with people or be used as tools for their continued survival (tho one might point out we tend to kill everything that won't be of immediate use to us... sigh). Things that must interact with us for existence are usually cellphones and bacteria who really just consider us their substrate. Tools and unknowing symbiotes.
    I really think intelligence is just patterns that are learned very effectively. What we see as intelligence is the ability to solve problems very effectively and create patterns. If you saw someone with severely damaged I/O, e.g. Helen Keller, in their unlearned state you might judge them to be severely mentally impaired. This is just the subjective impression of intelligence created by a damaged individual's inability to learn the patterns that are necessary for communication.

    Intelligence can be generally defined as " the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within a given environment." [wikipedia]

    The basis for thought arises when you have multiple networks interacting and seeking needs. Being able to evaluate a statement requires that we consider other viewpoints and simulate another network within our own. I believe this gives rise to sentience, but the intelligence behind it is just the capability of a general neural network to learn patterns.

    The more effective at pattern learning, the more intelligent. The more effective at interacting in a way that considers and evaluates meaning, the more perceived intelligence. (A toy sketch of this kind of pattern learning follows below.)
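
    To make "a general neural network learning patterns" concrete, here's a bare-bones Python/numpy sketch of a two-layer network picking up the XOR pattern by gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration - it's just the pattern-learning loop stripped to its core.

        import numpy as np

        rng = np.random.default_rng(0)

        # The XOR pattern we want a tiny network to learn.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # One hidden layer; the sizes are arbitrary.
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for _ in range(10000):
            # Forward pass.
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # Backward pass: plain gradient descent on squared error.
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0)

        # Outputs should end up near [[0], [1], [1], [0]]
        # (an unlucky random init can occasionally get stuck, which is fine for the point being made).
        print(np.round(out, 2))

    Nothing tells the network what XOR "means"; the pattern simply ends up encoded in the weights.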
  • Just to respond to this:

    "You can take all of the hardware that you want transistors and chips, and you can build a computer without knowing what it does. And when you plug it in, it will do just that, nothing."

    Let's say we take a single neuron and can model it perfectly (or at least "accurate enough") and prove the model behaves the same as the biological neuron when given the same inputs etc.

    The neurons are the equivalent of the components in an electronic circuit and the connectome + synaptic weights are the equivalent of the circuit diagram. Modern neuroscience pretty much agrees with this basic description.

    If you had a circuit diagram for a very complex device and had no clue how it works, but you did have all the components and could wire them all together, then you would still be able to build it (or put it inside a simulator) and it would still behave like it should. To stretch the analogy: how many people don't know the low-level details of how the PCI bus works, but know how to open a desktop PC and put an expansion card in the slot? (There's a toy netlist sketch at the end of this post making this concrete.)

    Provided we can model the neurons accurately enough and can obtain details of how they're connected and the states of each neuron, it should definitely be possible to recreate the behaviour of a biological brain given sufficient resources.

    Consciousness in the sense of qualia and subjective awareness is something we truly don't understand, but I propose it doesn't actually matter to intelligence. Things we often think of as being caused by consciousness (such as emotional states) are still ultimately reducible to physical states, and we need not concern ourselves with the metaphysics. If our virtual brain expresses sadness when given negative emotional stimuli and happiness when given positive stimuli, then it will still be behaving like a human mind regardless of whether or not it "really" feels those emotions with an inner awareness.
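
    The circuit-diagram point can be made very literally with a toy netlist simulator: given a model of one component type (here a NAND gate) and a wiring list, you can evaluate the whole circuit without knowing what it is "for". The netlist below happens to wire four NANDs into an XOR gate, but the simulator neither knows nor cares - everything here is invented for illustration.

        def nand(a, b):
            # Our "component model" - the only thing we claim to understand.
            return 0 if (a and b) else 1

        # The "circuit diagram": each gate is (output_name, input_name, input_name).
        # This wiring happens to implement XOR, but nothing below depends on knowing that.
        netlist = [
            ("n1", "a", "b"),
            ("n2", "a", "n1"),
            ("n3", "n1", "b"),
            ("out", "n2", "n3"),
        ]

        def simulate(inputs):
            signals = dict(inputs)
            # Assumes the gates are listed in dependency order, which this netlist is.
            for out_name, in1, in2 in netlist:
                signals[out_name] = nand(signals[in1], signals[in2])
            return signals["out"]

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", simulate({"a": a, "b": b}))
        # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0

    The claim is that the same separation - accurate component model plus accurate wiring - is all a brain emulation would need in principle, though nobody is pretending real neurons are as simple as a NAND gate.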
  • edited November 2015


    This is interesting because, while I agree with you and I don't want metaphysics to have any place in this, I do believe that context is important.

    I'm going to talk about your PCI bus analogy. I feel like grinders get the short end of the stick when they think about things like this. Most of them don't know how nerves work, but they can open up their hand and stick in a magnet. I think we can all agree that the curve to interfacing with the body turns into a screaming upright slope right after that point. And putting in a magnet isn't much like a PCI card. Let's say it's more like putting in a USB dongle whose only function is glowing. My point is that your analogy to PCI cards makes more sense in regard to implanting a Bluetooth-compatible pacemaker, not neurology. We know that stabbing one part of the body responds differently than stabbing another. No one would say that implanting a magnet is like doing brain surgery. So why would you make the analogy that assembling a computer is like modeling a brain? It's not just a matter of scale; they are different systems that behave differently (as far as we know).


    You say that consciousness is something we don't understand, and then you propose that it doesn't matter. That's just being reductionist. I am assuming that you mean a chemical (or a model of a chemical) stimulus as negative and positive stimuli, right? Because negative and positive stimuli are really a delineation of serotonin and dopamine gradients, triggered by neural impulses in a kind of loop inside the brain. So, like, you don't feel happy; you get external input, like a backrub, and that gets translated into a serotonin release. Which is all well and good within context.
    Now, let's take away context. A brain in a box is given a serotonin trigger. What does it mean, and how does this affect the thought process? What is the purpose of "positive stimuli" if it has no context, and how does the brain respond? Are we mimicking the level available, so that if you deplete the serotonin, the brain gets depressed? If you are depressed, but you are a brain in a box, what positive stimuli do you seek out to alter this situation? There is no chocolate and sex. Or are we also making a complete working model of the sexual experience to go with our working model of a brain?
    Since we respond to these things so strongly, based on our previous neural development, how capable is a brain in a box of choosing what it wants to do when someone can tweak its stimuli independently of physical reality, with their finger on the serotonin dial? Humans in this situation usually develop Stockholm syndrome, become mentally unwell, or die from drug abuse. So of course, now we are modeling mental illness (which we barely understand) so that we know how to fix these problems. Some have put forth that depression is linked to inflammation. That's not a neural state, inflammation. Are we modeling a response to inflammation?
    If the answer to any of these questions is "oh, we'll figure it out", then we are back to just gently waving our hands at the conversation with a bunch of techno-optimism. When someone says they have modeled a drosophila brain, they aren't saying that they can smack it into a robot fly and send it off to go buzz about, the way you model an OS and put it in a sandbox; it's more like when I say I built a working model of an engine. A useful tool for study, not a powerhouse driving a Chevy.
    Again, I'm not saying it may not happen. I'm saying that making strong declarations like "it is absolutely possible", without properly understanding the situation, is just like promises of a meal in a pill. Yeah, we got Soylent, but that's not actually a meal in a pill, is it?
  • edited November 2015
    Also, if we want to avoid metaphysics, we should probably avoid discussing a theoretical and yet somehow functional model of a mind we barely understand, designed at an indeterminate time in the future, with technology that is probably not yet available. 

    Because that is the very definition of metaphysics ;)
  • Modern day computer retail offers an interesting what-if, though. Back in the not-so-old days, when you bought a new computer, it came with an operating system disc, which you had to install before you could do anything with it. Not so with the computers of now! Your OS comes pre-installed, with minimal set-up necessary. Who's to say that a perfectly simulated/emulated human brain doesn't function more like the computers of the present day than it does the computers where an OS needed to be loaded? 

    When babies are born (though some do need a source of human stimulus to trigger this), they cry. They cry when they're hungry, sad, or have some other need. How do they know to do this? Inherent code. Eventually, they teach themselves how to crawl/slide in order to get from point A to point B. Even if they need some prompting or encouragement, they still initiate and pursue that ability without seeing people crawl around. Another example of inherent coding (this one is a bit shakier, though).

    Going back to computers, there are some functions that are already included in a computer's design, even if you wipe the OS and brick the BIOS. Namely, the power button. Another is the CMOS reset button. Both function because whatever circuit they're incorporated into is hard-wired to do what it's going to do.

    I can see why an emulated brain would do nothing but thrash if you didn't replicate the synapses connecting the neurons before you hit go. I don't know a lot about how the human brain develops in the womb, but one or more of a couple of things has to happen, or a baby would just thrash without purpose too: either certain synapses are built automatically as the brain is being built, or the "random" firing of the unconnected neurons builds whatever synapses the brain needs to be a normal, healthy newborn.

    In short, the computer that is a human brain either writes its own firmware/OS, or it's assembled with a common OS/firmware based on some blueprint.
  • edited November 2015
    To be clear, by positive stimuli I meant things that bring pleasure.

    If you were to take a scan of my brain and then give it visual input and tell it to use muscle output, you could establish two-way communication.

    Then you could give the modelled me some random bad news, and if the model is accurate it would communicate that it finds that news sad - even if it's not "really" feeling the emotion of sadness.

    As for the "firmware" - brains are more complex than that, to a large degree the brain functions like a computer where the hardware IS the software and vice-versa, there's not a clean separation between the two. This means that if you copy the hardware perfectly you also copy the software - because they're the same thing. When we learn, we physically change our dendrite structures and synaptic sensitivity etc, so an accurate model of those physical aspects would replicate all the behavioural stuff, memories and knowledge simply because they're exactly the same thing.

    As for the analogy, what I'm trying to get across is this:
    You have a system made out of lots of parts.
    If you know how to model the individual parts and know how they're connected, you can model the entire system.

    It doesn't matter if those parts are biological neurons and the connections are dendrites and axons and neuromessengers or if those parts are transistors and diodes and resistors and ICs and the connections are wires.

    So long as you have an accurate model of the parts and you can link together those models in the same way as the original system, you should get back a system that behaves identically.

    Model a brain like that and there is no reason why it should not behave just like the original brain so long as your model is accurate.
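
    Here's a toy way to picture "copy the hardware and you've copied the software": a little Hopfield-style associative memory in Python whose entire "knowledge" lives in its weight matrix. Copy the matrix into a fresh instance and the stored memory comes along for free. The structure and numbers are purely illustrative, not a model of real synapses.

        import numpy as np

        class ToyMemory:
            """Hopfield-style associative memory: all its learning is weight changes."""
            def __init__(self, size):
                self.w = np.zeros((size, size))

            def learn(self, pattern):
                p = np.asarray(pattern, dtype=float)
                self.w += np.outer(p, p)       # Hebbian update: fire together, wire together
                np.fill_diagonal(self.w, 0)

            def recall(self, cue, steps=5):
                s = np.asarray(cue, dtype=float)
                for _ in range(steps):
                    s = np.sign(self.w @ s)    # let the state settle
                return s

        original = ToyMemory(6)
        original.learn([1, -1, 1, -1, 1, -1])

        # "Scanning" the original: copying its physical state (the weights) copies what it knows.
        duplicate = ToyMemory(6)
        duplicate.w = original.w.copy()

        noisy_cue = [1, -1, 1, -1, -1, -1]     # corrupted version of the stored pattern
        print(duplicate.recall(noisy_cue))     # recovers [ 1. -1.  1. -1.  1. -1.]

    There's no separate file of "memories" to transfer - the duplicated connection strengths are the memory, which is the same claim I'm making about dendrite structure and synaptic sensitivity.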
  • edited November 2015
    This comes down to a basic disagreement in our view on biochemical reality and the nature of how things function. And how statements are made about these things.

    Last try. Given that a system has a number of parts, and given the premise that not all systems function in similar ways:
    If you have a system with a known number of parts and functionality (100%) and a system with an unknown number of parts and functionality (let's be nice and say 10%), is it safe to assume that these two things will function similarly if you apply the model of the first one to the model of the second one?

    Or we could just agree that since it's all physiochemical interactions, once we are able to make a true working model of physics and chemistry, then we will be able to make these models. Which is a completely different statement, but it comes down to the same thing: it's a little bold to say that this thing is bound to happen.
  • What's interesting to me is the fact that the human biocomputer is originally defined by our DNA source code. All of our brains are arranged in similar fashions (e.g. brain structures, chemical pathways), but all biocomputers function differently.

    Going further off of @glims' point, I agree that the brain is just a complex state machine of physiochemical interactions. Consciousness is just the sum of the external/internal interactions and experiences of our brain, defined by our individual DNA. In my opinion, what defines "human consciousness" is our internal dialogue, our ability to "reprogram" ourselves, and our ability to act contrary to our physiochemical state. The "8 Circuits of Consciousness" model (warning: metaphysics/parapsychology/pseudoscience) appeals to me in regards to left/right brain functions and "sentience". If we were able to simulate these physiochemical interactions, we would be able to model left-brain functions, but I don't believe our simulated brain would accurately model right-brain/non-static functions.

    Touching back on strong AI: AI simulations of human brains could exceed organic humans in certain areas (e.g. speed/memory-space limitations), but organic brains would probably out-perform their simulated counterparts in relational-reasoning/intuition tests, purely because of our built-in I/O vs. the physiochemical abstraction between the AI and its environment. In my opinion, AI models that do not model human biocomputers are the only candidates to exceed them in terms of "sentient" consciousness.
  • I've always thought the 8-circuit model was the ramblings of a junkie, personally - I apologise if that comes across too harshly.

    I don't believe that we can act contrary to our physiological or chemical state either. Ultimately all emotional states tie back to particular patterns of neural activity and chemical state in the physical brain, as does all thought and mental activity.

    The reason why brains vary so much is quite clearly due to environmental inputs and genetic variations.

    Why do you believe that an accurately simulated brain would lack intuition and reasoning? Surely if accurate it would have the exact same capabilities as the original organic brain?
  • edited November 2015
    I definitely know where you are coming from in regards to the 8-circuit model; everything past the 6th circuit is way too much for me. "Ramblings" accurately describes a lot of ideas from that time period, but interesting perspectives can still be gained from them.

    When I say act contrary to our physiological or chemical state, I mean that we might choose to act differently based on a past experience despite a neurotransmitter state/desire, and we are aware of that choice. A dog doesn't realize he should not chase a cat across a street despite being hit by a car multiple times (previous experiences); all he sees is the cat. As humans we can choose to fix/optimize/address a problem or need, or learn from it, examine it, and look at the "big picture". Obviously this is because of higher-level chemical interactions within the brain and the environment.

    All I am saying is that a simulated organic intelligence would lack certain levels of reasoning/intuition when tested against a non-simulated organic biocomputer of similar neuron count, as opposed to an artificial organic biocomputer, or even an AI not directly inspired by organic intelligence (e.g. Deep Dream - probably not a good example). Sensory information could potentially be lost in the abstraction between reality and the physiochemical simulation. It would just be more efficient to create a new system to interact with the environment than to accurately reverse engineer our own biocomputer and then accurately simulate it (not recreate it).
  • For those interested in machine intelligence: Google just released TensorFlow (the machine learning software that they use for image recognition and Translate) under the very permissive, open-source Apache License.
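
    For anyone who wants to poke at it, here's a minimal example in the graph-and-session style the initial release uses (the numbers are arbitrary, and the API may well change in later versions):

        import tensorflow as tf

        # Build a tiny computation graph: multiply two constant matrices.
        a = tf.constant([[1.0, 2.0]])
        b = tf.constant([[3.0], [4.0]])
        product = tf.matmul(a, b)

        # Nothing runs until the graph is executed inside a session.
        with tf.Session() as sess:
            print(sess.run(product))   # [[ 11.]]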