A little introduction

edited February 2015 in Everything else
Hey everyone,

Just passing by to say hi as another new guy here.
I've been fascinated by the concept of modifying and enhancing your own body since... well, I don't remember anymore.
It's been forever. Who didn't Sharpie circuit boards and gears onto their arms and legs to play cyborg as a kid?
Now I'm discovering I can help make that come true and I can't put into words how excited I feel.

I work in electronics engineering/product design and I feel there's a lot my field could contribute. If anyone could point me in the right direction I'd be more than happy to help.

Also, another interesting thing. When I was about a year old I was diagnosed with retinoblastoma, a kind of ocular tumor. This means that for basically all my life I've been wearing a prosthesis in one eye. I feel there's potential to develop something there, or maybe work on something to help recover depth perception. Ideas are welcome, too!


  • Nice to meet you! 

    If you plan on getting a magnet implant you might find the bottlenose project interesting.

    Putting together a tDCS kit would probably be pretty easy for you too. 
    I'm still waiting for parts to come in to make my first tDCS kit. I'm using this guide with a few adjustments: diy tDCS guide
    I've seen others working on some pretty advanced biofeedback stuff so just browsing around will give you some ideas. 
  • I'm an EE as well, and... that's a pretty interesting situation you're in. Is your optic nerve still present and intact? If so, you may want to step far beyond the constant-current controller of a tDCS rig and start looking into ways to construct and wire up multi-electrode arrays (MEAs), as well as driving those electrodes (which is probably the most interesting part for you as an EE, although it's easy compared to getting the array itself).
    Or maybe you can find an even better way to stimulate neurons at high resolution in the midst of a big bundle.

    I'd recommend reading research papers and studies on MEAs, electrode materials, fabrication techniques for such microstructures, stuff like that. Some papers I can recommend are:

    Neural Stimulation and Recording Electrodes — Stuart F. Cogan
    Criteria for the Selection of Materials for Implanted Electrodes — L. A. Geddes and R. Roeder
    Tissue Damage by Pulsed Electrical Stimulation — A. Butterwick , A. Vankov, P. Huie, Y. Freyvert, and D. Palanker
    Investigation of Electrodes as Bidirectional Human Machine Interface or Neuro-Technical Control of Prostheses — Dipl.-Ing. Thilo B. Krüger

    Those should answer some questions, bring up many more, and maybe give you ideas on where to start. If you are feeling like chatting in order to brainstorm a bit, hit the irc channel.
    Not all of those papers may be freely accessible; if you have trouble finding them, just drop me a note.
  • As far as I know the nerve is intact, although it has retracted somewhat. I guess MEA is the most feasible way, at least as long as I can get my hands on the array itself, as you mention. Oh, and of course, getting the electrode into my nerve.
    I'm gonna take a look at the papers you mention, and I'm sure I'll spend a long while on the IRC.
  • One point I just remembered: after such a long time without any stimuli on the optic nerve, there's a good chance of amblyopia. So before constructing a high-fidelity array, it may be worth wrapping a flexible electrode around the nerve, maybe with just a dozen contact pads, and seeing if the signals still get interpreted by your brain.
  • There is a guy who placed a camera in his eye prosthetic, though it's not connected to his optic nerve.

    A MEA is one way to fix that, but one possible issue would be the number of channels that you could use.  Assuming that we could fit an arbitrary number of electrode-driving circuits in the prosthetic, most MEAs (besides being in the few grand range) have 96 electrodes that you can drive, which probably corresponds to about 96 black-and-white pixels.  Although that's far from approaching the human eye's resolution, it's probably better than nothing.

    A more interesting option, though, would be to put an appropriate camera in to give yourself ultraviolet superpowers, for example, or the ability to sense anything else for which you can fit a sensor into the prosthetic.
  • I know the camera guy, Rob! (well, not personally, but we're friends on Facebook)

    About the MEA: even if it were only 96 B/W pixels, I believe it would be enough to add depth perception to the color image from my good eye.
    Also, the coolest thing about using a camera for vision is that it would be sensitive to IR/UV!
    It's definitely worth a try. I'm still looking for prefab MEAs; we have a bioelectronics lab at my university, and the professor in charge seems very interested in the idea.
  • As far as I know, the best way to get a MEA (at least the "Utah" version) is to place a request at this company.  Your lab may have a better source, though.
  • I don't think a 10x10 pixel image offers a lot in terms of depth perception. I played around with a couple of old stereo photographs I had lying around.
    I used a full-resolution color image for one eye, and a reduced, grayscale, posterized image for the other.
    For an image covering about the size of your palm when looking at your hand with the arm stretched out, I got the following results:
    -Below 24x24 it's virtually impossible to sort out depth. You may be able to tell a tree from the background, but at best you can sort things into 3 or 4 different distances. So you could tell a person is standing in front of a car or behind it, but objects appear flat as paper.
    -At around 24x24 pixels and about 8 shades of gray, I was able to identify the depth structure of objects with good contrast and structure. Not really awesome, though.
    -32x32 pixels with 16 shades of gray gives very good depth perception. I'd recommend aiming for at least that resolution for your application.
    -A 50x50 resolution resulted in an almost natural feeling of viewing; further increasing the number of shades hardly changes things.
    -64x64: that's the good stuff. Even when used on both eyes, this resolution gives good depth perception. It also allows you to read several words of text at once, with decent detection and identification of objects.

    32x32 pixels can give you a decent depth image with one healthy eye, but upping the size to 64x64 improves the perceived image by orders of magnitude. A 64x64 resolution even works when both eyes have failed.
    Aim for at least 32x32, and keep an eye out for 64x64.

    Increasing the shades of gray sort of improves the perception up to about 30 shades, so going past 32 is not exactly a high priority; 16 would be acceptable, though. (I don't want to know how many search engines index this post and associate it with a commonly known book.)
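    The downscale-and-posterize step from the experiment above can be sketched in pure Python. This is a minimal sketch, assuming a grayscale image represented as a list of rows with pixel values 0-255 (no image library needed; the function name is just illustrative):

```python
def downsample_posterize(img, out_w, out_h, shades):
    """Downsample a grayscale image (list of rows, values 0-255) by
    block-averaging, then quantize it to a limited number of gray shades."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(out_h):
        row = []
        y0, y1 = oy * h // out_h, (oy + 1) * h // out_h
        for ox in range(out_w):
            x0, x1 = ox * w // out_w, (ox + 1) * w // out_w
            # average the source block that maps onto this output pixel
            block = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            avg = sum(block) / len(block)
            # quantize: map 0-255 into `shades` levels, then back to 0-255
            level = min(int(avg * shades / 256), shades - 1)
            row.append(round(level * 255 / (shades - 1)))
        out.append(row)
    return out

# example: a 64x64 horizontal gradient reduced to 32x32 with 16 shades
gradient = [[x * 4 for x in range(64)] for _ in range(64)]
small = downsample_posterize(gradient, 32, 32, 16)
```

    Feeding the result to one eye while the other sees the full-resolution image reproduces the test described above.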

    How to pull off that many electrodes on a 2-3mm thick nerve? That's a good question. My idea would be to use blade-like shapes instead of needles, placing electrode pads along each blade and conveniently routing the traces on the blade itself. Maybe grow small hemisphere-like structures as electrodes. Each blade could have electrodes on both sides, so for a 32x32 array you'd need at least 16 blades in parallel. Placing electrodes along the blade should be an easy task; my guess is you could easily fit 64 if not 128 electrodes along its length.
    I don't know how thin and mechanically robust such blades could be made, but stacking 16 of them over a 2mm distance might already be tricky. So I'd recommend multiple rows of blades, each offset a bit from the row behind/before it. With 4 rows you'd have 4 blades per row, or one blade every 0.5mm; according to my intuition, a rather reasonable number. Since the nerve is rather long, even 8 rows at 0.25mm doesn't seem too far-fetched, which would give you the 64 resolution between blades, and maybe you can fit 128 along a blade.
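    A quick back-of-the-envelope check of that blade geometry, assuming a 2mm nerve diameter (the function and constants here are just illustrative, not from any real design):

```python
NERVE_WIDTH_MM = 2.0  # assumed nerve diameter the blade rows must span

def blade_layout(rows, blades_per_row, electrodes_per_side, sides=2):
    """Return (total electrode count, blade pitch within one row in mm)
    for offset rows of double-sided blades stacked across the nerve."""
    pitch_mm = NERVE_WIDTH_MM / blades_per_row
    total = rows * blades_per_row * sides * electrodes_per_side
    return total, pitch_mm

# 4 offset rows of 4 double-sided blades, 32 pads per side -> a 32x32 array
total, pitch = blade_layout(rows=4, blades_per_row=4, electrodes_per_side=32)
```

    With those numbers, 16 blades at a 0.5mm pitch per row already yield the 1024 electrodes of a 32x32 array.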
    One last idea might be to use even more complex structures, such as growing needles or pins on each blade and, for implantation, sealing them below a layer of PLA or another biodegradable polymer. That way the array can be inserted without scarring neurons, while giving the fine structures some mechanical support; over the course of weeks, the body would degrade the polymer and the neurons would move into the empty space. Just an idea, though. Removing such a thing would be a whole different story.

    It'd probably be best to implement the driving circuit at blade level, so you just run a basic SPI bus with power, data, and clock lines to each blade. If you daisy-chain the blades, you'd still stay below a 3MHz pixel clock for a 64x128 array at 60fps with 6-bit shading depth. If you chain up each row by itself, you'd get rather low clock rates and still an acceptably small number of wires. Those data rates could still be streamed using Bluetooth/ZigBee or (modified from IR to red) IrDA.
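    Those clock-rate figures are easy to sanity-check. A minimal sketch, assuming one bit shifted per clock cycle (the function name is illustrative):

```python
def pixel_clock_hz(width, height, fps, bits_per_pixel, chains=1):
    """Bit clock needed per chain when a frame is split across `chains`
    independent daisy chains (chains=1 means all blades on one chain)."""
    return width * height * fps * bits_per_pixel / chains

one_chain = pixel_clock_hz(64, 128, 60, 6)            # full array on one chain
per_row = pixel_clock_hz(64, 128, 60, 6, chains=8)    # e.g. one chain per row
```

    The single-chain case comes out just under 3MHz, and splitting per row divides it by the number of rows, matching the figures above.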

    This post got a lot longer than I thought it would. I hope it contains some useful inspiration. If a bit of extra resolution turns out to be possible... go for 160x144, simply because it's the original Game Boy's resolution, so you could stream games directly to your optic nerve. (The original GB only had 4 shades of gray, by the way.)
    My biggest fear is still that after such a long time, your visual cortex may not be able to process the signals properly. That's why I'd recommend testing the function of the nerve first with a simpler-to-obtain, less invasive method.
    Now... time for food!