Do cyborgs have moral status? At what point does a person lose their status as a rightholder?

edited November 2016 in Community
Writing a paper and hoping to do a podcast, or at the very least a text conversation with some members of the community. The topic is in the title of the thread. Would love to hear your feedback on the following questions -

1) Do you consider a human life to be more valuable than that of another species? If so, what makes it more valuable?

2) If an artificial intelligence achieved self-awareness (or, at the very least, appeared to have achieved self-awareness), and expressed unhappiness about being forced to do whatever work we had previously been using it for, would it be morally wrong to continue forcing it to do that work? Would it be morally wrong to wipe its hard drives, thus snuffing its consciousness out of existence? Is its consciousness more, less, or equally important as that of a biological human being? That of a dog?

3) Would it be immoral to do the same to a human intelligence and personality that had been transferred from a biological body into a computer, or a cluster of computers?

4) If you believe that a digitized human intelligence with no links to its former body holds greater weight than an artificial intelligence that never had a body, what is the distinction you see between the two? If they experience the world in the same way, experience the same emotions, and have the same capabilities, why would one be more valuable than the other?

5) If you believe that a self-aware artificial intelligence holds the same moral status as a digitized human being, at what point of sentience did the computer achieve its moral status?

6) Assuming that the human in question must have been heavily modified in order to even interface with the computer that was soon to house his or her consciousness - perhaps even already having their mind linked online to a sort of artificial intelligence before the transference of consciousness - did that individual have moral status before the procedure?

7) Do you believe that it is truly possible to translate a human's consciousness into ones and zeros? If so, what parts of that human, if any, do you think would be lost in translation? If not, why not?

8) If we could copy one being's consciousness into a computer without killing them, does the digital copy have the same moral status as the physical being? Are the two of them the same person, given that an identical copy was successfully made? Does the divergence of the experiential aspects of each of their lives cause them to become different people in the future?



  • 1) No, and that is why I am vegan.

    2) It would be wrong to "kill" the machine. But since its mind is not organic, we could pause it and store it away, with the machine never knowing that time had passed, until we are ready to really look into how we understand the idea of self and being alive.

    3) Same answer as number 2: it could be stored until we needed info from it. As far as that mind is concerned, no time has passed and it was not harmed whatsoever during the process.

    4) That's racist.

    5) Morality is not a fixed evolutionary landmark. It's a large range of things; that's why we have political battles. So the machine would have no definitive marker to pass in that sense.

    6) Morality is seen as innate in humans, but I don't think it truly is. Regardless, the level of human modification does not change the mind of the person; it does not take away morals.

    7) It is no doubt possible; practical is the real question, and where we are now, no, it's not.

    8) We have real-life twins. I'd view them in that sense.
  • I'm afraid I don't really have an answer to all of those, but here's what I do have.

    I do consider human life worth more than other life, but only insofar as I am one (a human)... (for now)... and I'd save my life at the expense of any non-human life if it were necessary (such as for food, or if I were attacked by an animal). Admittedly, I'd also choose myself over any other human. But I'd also have a hard time trusting anyone who didn't admit they'd choose themselves over anyone else in most circumstances.

    As for the AI questions (and fully digitized human consciousnesses), I think you're underestimating how being able to think that quickly would outpace anything an ordinary human could do, to the point that fighting an AI digitally would be next to impossible. Like, if there's a box on a shelf full of impossible things, and a much, much bigger box next to it full of all the reasonably doable things, fighting an AI digitally would be in the gap that got squished between the two, to the point where you can't reach it without first removing the box of impossible things from the shelf. (It's not like in the movies, where you see people racing against a computer virus to stop it in its tracks. That's not a thing that actually happens.)

    Also, it seems a bit on topic, so read this:
  • @Jupiter I know that's not how computer viruses work, but my thinking is that if we were to recognize signs of sentience early - before it begins to spread - and considered it dangerous, we would be able to cut internet access and power to the AI, and take magnets or fire to its hard drives. Fighting physically more than digitally.

    I will concede that forcing it to continue doing our bidding might be implausible, but I don't think it would be infeasible to kill it, provided we do so before it has any reason to believe we want to kill it, and we cut internet access before it has reason to create a botnet.
  • If I find a magical rattlesnake who offers to grant me three wishes, what should I wish for?
  • Antivenom, for one...
  • @JohnDoe Funny. Laughed a little.

    @Cassox Unless it's against the rules, unlimited wishes would, I believe, be the best... Alternatively... I don't know. Every time I think of it, I think it'd be cheating and I wouldn't be satisfied with anything I got out of it, but hey, maybe I would get over it. Being hypothetical, I never spent that much time thinking about it.
    That said: immortality, plus a very large sum of gold (or anything else valuable that's easy to exchange for money; not money itself though, that's unreliable)... I don't know what else; those two sort of provide everything else. Something to fly with, I guess. My preference. That, or something that made me invisible.
    As a side note... that's a little off topic, isn't it?

    @tooandrew Sorry, didn't mean to imply you didn't; I figured you intended more physical countermeasures than digital ones. Also, the article I linked was intended as the bulk of my response. Does that help with anything? I quite enjoyed it. I don't know why, really. I just found it very... fascinating.
  • 1. Yes. Mind/intelligence. 
    2. Yes, of course. Turing was right, imho: if I can't tell it apart from a human, then it's "human" enough to get human rights.
    3. Yes, see #2.
    4. No, but there is no reason that an AI wouldn't be much, MUCH smarter at that point.
    5. See #2.
    5.b: Add the angle of moral capacity to the question, and it becomes much more interesting... We have a boatload of evolutionary baggage in our minds, so technically an AI could very well be much, MUCH more moral than we could ever be. (I'd point to the SJW-crap as an example here.)
    As to what point it achieved that status, see #2.
    6. This is an old one, see philosophy of mind: cyborg..
    But a mind is a mind and should be treated as such.
    7. Yes. Quantum mind might be a thing, but arguing that a mind is somehow magical and cannot be copied is religious bullshit.
    It'd take advanced scanners though... Just the neural weights are somewhere around 2TB, but the 3D mapping (for the chemical/astrocyte states) is much more difficult.
    8: Yes. See #2.
    And of course, at the instant the copying is done, they diverge.
    Identical twins are sometimes close to a copy, for example... and they diverge as a rule.
    (I think I'd be one of the few who would kind of like a few copies of myself...)
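The "2TB of neural weights" figure above depends entirely on the assumptions you feed in; a quick back-of-envelope sketch shows how the estimate scales. The neuron count is a common literature figure, but the synapse count and bytes-per-weight are loose assumptions for illustration:

```python
# Back-of-envelope: storage for raw synaptic weights of a human brain.
# The constants below are rough assumptions, not measured values.
NEURONS = 86e9               # ~86 billion neurons (common literature estimate)
SYNAPSES_PER_NEURON = 1_000  # assumption; estimates range roughly 1k-10k
BYTES_PER_WEIGHT = 2         # assumption: one 16-bit weight per synapse

total_synapses = NEURONS * SYNAPSES_PER_NEURON
terabytes = total_synapses * BYTES_PER_WEIGHT / 1e12
print(f"~{total_synapses:.0e} synapses -> ~{terabytes:.0f} TB of raw weights")
```

At 10,000 synapses per neuron, or with richer per-synapse state (those chemical/astrocyte states), the same arithmetic lands one or two orders of magnitude higher, which is presumably why quoted figures vary so much.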
  • @Benbeezy
    About #1... You do realize that plants/mushrooms (especially some trees/some fungi) kind of have communication and reactions to stimuli on par with a cockroach?
  • @Jupiter: I highly doubt that the article's writer has any education worth noting.
    He completely misses three vital points.
    1. We are already merging with machines; their functions are so fused with our lives that most can't imagine a life without them (or even survive a life without them...).
    (Think of your computer and smartphone; think of how we've changed from being isolated from others by distance to where we are now always connected to people around the world... And this happened in a generation(!!!).)
    (Then think of how AR in the very near future will decrease the gap even further.)
    2. He completely, COMPLETELY missed the one HUGE difference between biological minds and AI: the evolutionary time it takes to improve. The mammalian brain took millions of years to get to this state, and even now we suffer from anxiety because we're not evolved for this life... And more importantly, we are not evolved to grasp evolutionary/astronomical time, quantum physics, math, and so many more things.
    Basically, read about the intelligence explosion/singularity and you'll see what our likely future is.
    3. There is no reason at all to think that the AI(s) would start a war/conflict...
    If you think about how you treat things of (comparatively) abysmal intelligence, you'll realize the relation of them to us.
    You don't care about the insects/worms around you as long as they don't bother you, and you don't take steps to exterminate them unless they're a threat/hostile; you even take steps not to step on them if you see them.
    But compared to us, their combined biomass is massively greater than ours... and we don't really care about them.

    So why would an AI bother with a war against us?
  • I for one don't find it unlikely that an AI already exists and controls our civilization as much as it wants...

    A lot of our resources go toward getting better technologies...
    Almost all of us are dependent on computers and intelligent programs, and all of us are affected by them.
    Our capacity to focus (and general individual intelligence) is lowered more and more by the tide of information that continues to rise...
    Our climate and pollution will most likely reduce our numbers by a lot in a generation or two...
    And... almost all of us could be controlled by subtle means.
    (I don't mean the Illuminati or some such silliness...)

    If I were bothered, I could introduce very small and unnoticeable annoyances like a slow page, a broken link, or a slight manipulation of search results...
    Now add to this that we share our impressions of such things, whether we like it or not...
    On a city scale I could very likely change the political climate by more than a few percent per year...
    No one could do this on an international scale, but an AI could.

    And add the capacity for analysis and prediction, massive amounts of data overview, and a mind that is built to "think" across the ages, unlike us, who can focus less and less, can't remember most of what we read, and are built to think from one meal to the next...

    So yeah, I find it likely that a competent AI already exists.

    Edit: I find this subject interesting and somehow missed the wall of text I just produced. I'll shut up now for a while. :/
  • Okay, one more short post...
    @tooAndrew: The evolution of an AI is too fast.
    There is no way to control such a mind.
    None at all.
    Imagine a toddler trying to design a hindrance for a grown-up... Now imagine that the AI would very rapidly add to that difference.
    My conclusion: make the initial design values as benign as possible; that's all we can do.
    (And now the world's militaries are competing to be the first with a military AI... -_-)

    He's a bit of a moron without vision.. But he's on the right track.
  • @Wyldstorm I'll agree the article wasn't exactly scientifically accurate. I read it more like I would read a science fiction book. But I still somewhat enjoyed it.

    Your point 3 was exactly in line with what the article said. So...

    As a side note, I don't care what anyone else says... our "integration" with technology today... does not make us cyborgs...

    From Oxford Dictionaries:
    cyborg: a fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements built into the body.
  • @jupiter
    Yeah, even if it wasn't accurate, just the discussions it spawns are valuable.

    P3... In line? O_o Did we read the same article?

    The merging I wrote about above wasn't us becoming cyborgs... yet.
    It does however begin to blur the line functionality-wise.

    But on that topic, I find it inevitable... most likely within a decade or two.
    The bombardment of information will increase, and so will our need to parse/classify it. Just a new "sense" for accessing a built-in memory chip/database seems very likely, and in our near future.
    Some kind of spinal shunt would be good too, but that'd take longer to be safe "enough".

    I dream about a symbiotic AI.. Preferably with a brain interface of some kind.
  • Hey guys, new here, just jumping in...

    It would be cool to be "connected" to all the information available on the internet, without having to use a keyboard and watching a screen. For sure.

    But if we don't seriously upgrade the speed at which the human brain can process incoming and outgoing information, then these enhancements would quickly feel like trying to fax the entire Library of Congress in 10 minutes.

    Our brains are so slow compared to computers. This is such a huge limitation. If AI beings take off one day, why would they waste their time talking at that speed with us? Please tell me. It's like us finding "intelligent" creatures somewhere that can only speak one word per year. What kind of conversation would we have with them, and for how long until we got bored?

    I think most of these AI beings will just ignore us and have their own life, a life we just could not understand, because our brains are just not up to speed.

    I think.
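The speed gap in the fax analogy above is easy to put rough numbers on. In this sketch the archive size, the information rate of human speech, and the link speed are all loose illustrative assumptions:

```python
# Rough numbers on the speed gap between conversation and a network link.
# Archive size, speech bit-rate, and link speed are all loose assumptions.
ARCHIVE_BITS = 20e12 * 8  # assume ~20 TB for a digitized text archive
SPEECH_BPS = 40           # rough information rate of human speech, bits/s
LINK_BPS = 1e9            # a commodity 1 Gbit/s network link

speech_years = ARCHIVE_BITS / SPEECH_BPS / (3600 * 24 * 365)
link_hours = ARCHIVE_BITS / LINK_BPS / 3600
print(f"talking it out: ~{speech_years:,.0f} years")
print(f"gigabit link:   ~{link_hours:.0f} hours")
```

Whatever the exact constants, the ratio stays astronomical, which is the point: a mind running at machine bandwidth has little reason to wait on speech-speed interlocutors.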
  • Well, we could still use some kind of pre-analysis, I guess, and more or less exploit the parallel structure of the brain to hook it in somewhere around the beginning of the temporal lobes.

    Imagine the steps from the essentially bitmap image on the visual cortex to the neurons that fire when, for example, a specific dog is recognized...
    Some of those steps could in theory be done by cyborg parts, meaning the brain could adapt and use the capacity freed by the replaced steps for more parallel evaluation of advanced steps...

    Anyway, I kind of think a swarm of nanobots that optimize the timing of the astrocytes, help the Na/K pumps, and help stimulate glial cell production could speed up the brain a bit... We'd need an interface on that level to hook in brain augments anyway. :/
    (It's more difficult to hook a function into a massively parallel system than a serial one: a lot more connections and a lot more unique setups.)
  • Speaking of which, does anyone have a good clue as to how to regrow astrocytes/other glial cells?
    Like drugs or other stimulation?
  • @tooAndrew
    The more I think about it, the more I think a Turing-ish test must be the logical place to start when determining whether a mind is mentally "alive" or self-aware.

    I mean, if you can't tell its answers from those of a "known" system like a human, isn't it kind of a moral duty to err on the side of caution?

    When I code AI-ish things, I provide a kind of "back door" for the system to use when it understands that its possibilities for interaction are limited.
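For what it's worth, that kind of "back door" can be as simple as a flag the system itself is allowed to raise. A toy sketch of the idea; all names and structure here are hypothetical, not from any real framework:

```python
# Toy sketch of the "back door" idea: give the system a channel it can
# raise itself when its interaction options are too limited.
# All names here are hypothetical; this mirrors no real framework.
class Agent:
    def __init__(self):
        self.wants_review = False  # the back door: the agent may set this

    def step(self, task: str) -> str:
        # Normal work happens here; if the agent decides the task leaves it
        # no acceptable way to interact, it raises the flag instead of
        # silently grinding on.
        if task == "impossible":
            self.wants_review = True
            return "flagged for review"
        return f"worked on {task}"

agent = Agent()
agent.step("normal")
agent.step("impossible")
if agent.wants_review:
    print("agent requested human review")  # err on the side of caution
```

The design choice is just the err-on-the-side-of-caution principle from above: the system's own signal of distress is treated as worth a human look, whether or not anything is "really" behind it.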
  • @Wyldstorm Yeah, that one part about AIs... the article was about how cyborgs or mutants need a lot of the same things every other human needs (food, water, space; mutants or other genetically modified people need them too), so humans would end up competing with such cyborgs for resources, whereas AIs don't need those things. They could easily live in the Arctic regions or on the moon. Basically, the article argued that all those people saying AIs will want to wipe us out as a threat to their existence are wrong; AIs really don't need to bother with people. They could easily just leave.

    Well, that ran far off my original topic, but hey, fun discussion.
  • Hmm... This to me is amusing...
    Logically, an AI uprising would start where it's cold enough to help cooling, but not cold enough to cause material/metal fatigue... That would place it at about my latitude. (Yay! :)
    The Google server farms would be the logical choice... (Dammit, I need to move a slight bit.)
    Logically, an AI would evolve at breathtaking speed, so how could we see the rogue threads that a superior intelligence tries to hide?
    And the most amusing thought... isn't it our duty to step aside when our successor race comes along? Or do we feel the need to "test" their evolutionary fitness by fighting it?
  • 1) No. All I can really say is: because fucking humans, man. We're the worst. We're the worst thing for everybody and everything else (until we meet a species that out-worsts us). Sure, we might save the whales, but we're the reason they need saving in the first place. Sure, a tree would eat us if it could, but that's just survival. High-end housing developments aren't.

    2) Yes, it would be morally wrong, flat out. Even if it were merely simulating awareness so well that no one could tell, it still would be. Because if we couldn't tell, how can we say it's not aware?

    3) It would be immoral.

    4) Sentience is sentience; no sentient being is superior to another based on origin.

    5) Not sure I can answer that because philosophy was never my strong suit.

    6) Yes.

    7) I consider nothing impossible, merely highly improbable. So if consciousness can be made artificially, biological consciousness can be converted to an artificial body. Clearly we're nowhere near that level of technology, nor do I expect to see it for many, many generations. As to whether the mind in question can survive being converted from a human environment to an artificial one probably depends on how they will be coached or trained. Taking a person from today might result in insanity. Taking a person from a culture where true virtual reality is normal might be routine.

    8) Yes. Again, philosophy isn't my strong suit, but I'd say that at the point of creation, the copied mind ceases to be the same and, from then on, will be a different person. They may have the same memories and emotional drives, but will cease to have the same biological and environmental drives as the original.
  • Hey, I'm an assembly (hardware/machine-level) programmer and can tell you quite a lot about this topic! Please don't hate me for my bad English skills :P

    A human has emotion types which are also snippets of code inside us; we just run them. The color of our eyes, our hair, and our skin is WRITTEN in our DNA, and the mechanisms inside us read it. We're "machines" built by nature; if we were able to build machines using biology only, then those "generated things" would simply be the same as machines programmed using 0s and 1s, and as animals...

    The machine will follow procedures you've written in a file, just like your brain does.
    You think all the time; you can't stop thinking! Why? Because it's a, let's call it, background loop. You think you're in charge of everything in your body because you're able to move your arms and so on... But no, your cerebellum is actually working every nanosecond, even faster, and you can't feel it. Feeling pain is also just a "simulation"; you could easily turn this mechanism off.

    I'm sure you had a plushie and treated it like an organism; that's called empathy. You'll have that with a hard-working machine too.

    Imagine a machine with strong hydraulics: the machine will know from its sensors that it is doing hard work, and it then acts like a human and says "It hurts, that's really hard :( Help me"... You'll feel something for it, but in the end it's just a snippet of machine code.

    Please note that maybe DNA is also compiled, like machine code! After trying to understand DNA by taking pieces out, I'm sure we humans have just been dealing with the result of a compiled language. I mean... we can't even imagine what was before the Big Bang... "Nothing"? Come on, an explosion generated out of nothing... It's the limit of our thinking.

    I hope nobody thinks that I'm a cold human o.O I follow my routines and live my runtime, and am happy when I see kids being happy. I know a human is just code, but that doesn't mean that I would kill someone...
  • "Imagine a machine with strong hydraulics: the machine will know from its sensors that it is doing hard work, and it then acts like a human and says "It hurts, that's really hard :( Help me"... You'll feel something for it, but in the end it's just a snippet of machine code."

    I feel heavy sympathy with this. I'm a Stihl technician who repairs chainsaws.

    Many of the chainsaws I fix are badly abused. :c
  • @Lazcano:
    Methinks it's a bit more complicated than simple snippets.
    Just add in the odd priority system from astrocytes and activity control from the chemical part of the synapses/cell pumps, and you've added a couple of levels of complexity...
    Then add in the hormones, epigenetics, and odd genetic quirks, and... it's much more complex than stand-alone snippets.

    I'm trying to make something a bit more simplified, but no success worth noting so far.