Mildly offtopic for this forum
I recently got into an argument with someone on the subject of strong AI: he claims it's impossible and always will be, while I claim that human-level intelligence could be achieved through brain scanning and simulation at the very least, and I made that argument here: http://pastebin.com/sM2PiK7u
Of course, I'd also argue that implementing AI via a simulation of a brain is horrendously inefficient, but it should still be possible given enough time and resources.
The guy I was arguing with outright refused to point out the flaws in my argument (I can think of a few myself; none that can't be defended, but there are some criticisms that could be made), so I'm wondering if anyone here has found an actually compelling argument against the possibility of strong AI.
Comments
Humans are not 'special' or 'unique', we're just a product of a learning network placed inside a structure designed to cultivate it.
A relevant quote on the topic: "They say you die twice: once when you take your final breath, and again when someone says your name for the last time."
The layering of religion on top of the concept is not something I enjoy, and I disagree with the rhetoric various churches place on the idea.
I don't believe your 'soul'/consciousness is synonymous with your brain. A subjective experience is clearly not the same as a physical object. The brain causes consciousness; consciousness is a feature of the brain. If your brain ends, so does your consciousness. But the two are not the same. Then again, I'm probably just arguing semantics here.
Going slightly off-topic: I don't believe a strong AI will by default have some kind of sentience/consciousness/soul/mind (i.e. subjective experience). Functionalism says it will, but just because we can't explain consciousness doesn't mean we can take one attribute of the organ that causes our consciousness (i.e. the information-processing ability of the CNS) and equate it with consciousness. Following that logic, we'd have to assume any physical object or collection of objects that somehow consistently responds to its environment in a specific way (i.e. processes information) has consciousness. Things get really silly beyond this point. https://en.wikipedia.org/wiki/China_brain#The_thought_experiment
You can even replace the people of China with any random object, e.g. pots and pans, spoons, pencils, etc.
inb4 it's a small step from spoons to the stuff our brain is made of (which is a pretty valid argument I suppose). But we KNOW for sure brains lead to consciousness. Attributing consciousness to other hardware without knowing how/why consciousness works is just confusing imo, even IF functionalism is true.
In my opinion we can talk strictly about Strong AI without needing the words consciousness and intelligence. I don't know if your first paragraph was in response to my post, @glims, but I agree that complexity =/= consciousness. But I do think Strong AI is theoretically possible and (probably) just a matter of time (using the above definition, at least!). Current-day AIs can't keep a dialogue going for long before the human notices he's talking to an AI. But isn't this just a matter of adding more complexity to this particular AI (linguistics, semantics, syntax, sentence structure, humour, etc.)? This stuff is progressing very rapidly. The same could be said for every other intellectual task a human can perform. Even if it's not feasible in the near future, it's definitely possible. Strong AI, imo, is just a matter of complexity. Consciousness obviously is not.
@garethnelsonuk You say you consider consciousness irrelevant. But would our brains work the way they do without consciousness? If not, your emulated brain won't function the same as a human brain. Will this imperfect emulation be a strong AI? We know for a fact that our conscious experience can directly influence our brain activity to some degree. How much functionality do you lose if you keep the basic biological/physical rules of the brain but can't emulate consciousness? For this reason, emulating a brain to get strong AI (similar to that of our brain) may be very inefficient. It would be much easier and more controllable to build software systems that are not based on the brain but are built to perform a specific task (like holding a dialogue).
Your transistor analogy isn't very convincing at this point in time. We know transistors are the lowest level we need to emulate in the case of a computer emulation. But in the case of the brain, we don't know what resolution is required for a reasonably accurate (i.e. functional) emulation. Are cells enough? Which cells? Is the distance between cells relevant? Cell size? Cell health? Conduction speed? Blood flow? Will you emulate sleep? Or exercise? Hormonal fluctuations? Do we need to go deeper? Proteins? Genetic code? Gene transcription? Ion concentrations? Even deeper than that? Individual molecules or atoms?
http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
Page 11 has a nice table (and was clearly written by a hardcore functionalist!).
Now, you're clearly talking about the theoretical possibility of emulating a brain. If we assume at some point our tech is so advanced we can emulate every individual atom in a brain and hook this emulation up to an input and output device... this is your starting point, right? That would seem like a pretty good emulation to me, except of course, like I said, there's the uncertainty of consciousness and its role in our brain's functioning.
Intelligence can be generally defined as "the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within a given environment." [Wikipedia]
The basis for thought arises when you have multiple networks interacting and seeking needs. Evaluating a statement requires that we consider other viewpoints and simulate another network within our own. I believe this gives rise to sentience, but the intelligence behind it is just the capability of a general neural network to learn patterns.
The more effective at pattern learning, the more intelligent. The more effective at interacting in a way that considers and evaluates meaning, the more perceived intelligence.
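To make "intelligence is just pattern learning" concrete, here's a minimal sketch (my own toy example, not anything from the linked pastebin): a single artificial neuron adjusting its weights until it has learned the AND pattern from examples. The learning rule and all the numbers are the textbook perceptron setup, chosen for illustration only.

```python
import random

random.seed(0)

# Training data: the AND pattern as (input pair, target output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = 0.0                                        # bias term
lr = 0.1                                       # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward reducing each error.
for epoch in range(50):
    for x, target in data:
        err = target - predict(x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

Nothing in there "understands" AND; the network just became more effective at fitting the pattern, which is the sense of intelligence the comment above describes.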
"You can take all of the hardware that you want, transistors and chips, and you can build a computer without knowing what it does. And when you plug it in, it will do just that, nothing."
Let's say we take a single neuron and can model it perfectly (or at least "accurately enough"), and prove the model behaves the same as the biological neuron when given the same inputs.
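As a sketch of what that single-neuron modelling step might look like, here's a leaky integrate-and-fire neuron, a standard deliberately simplified model (all parameters below are illustrative round numbers, not measured biology). The point is the equivalence check: the simulation is deterministic, so feeding it the same input current trace always yields the same spike train, which is exactly the kind of input/output agreement the argument needs to establish against a real neuron.

```python
# Hedged sketch: leaky integrate-and-fire (LIF) neuron, a toy model.
# Parameters are illustrative defaults, not fitted to any real cell.
def simulate_lif(inputs, dt=1.0, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Return spike times (step indices) for a current trace `inputs`."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(inputs):
        # Membrane potential leaks toward rest and integrates input current.
        v += dt / tau * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:      # threshold crossed: record a spike
            spikes.append(step)
            v = v_reset        # and reset the membrane potential
    return spikes

current = [2.0] * 100          # constant input current, arbitrary units
print(simulate_lif(current))   # same input trace -> same spike times, every run
```

Of course this only pushes the question up a level: "accurately enough" for what? That's the resolution problem raised earlier in the thread.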