The biohack.me forums ran on Vanilla from January 2011 to July 2024. They are preserved here as a read-only archive. If you had an account on the forums and appear in the archive, and you wish to have your posts either anonymized or removed entirely, email us and let us know.

While we are no longer running Vanilla, Patreon badges are still being awarded, and shoutout forum posts are being created, because this is done directly in the database via an automated task.

A theoretical approach to a pseudo-exocortex

I'm aware this whole subject has the potential for a lot of BS, but it's something I've been pondering for a while.

Basically, the idea behind an exocortex is to have a computer that works closely with the human brain. As we all know, current BCI tech lacks the ability to do very precise input of abstract information/concepts from the brain, and there's even less in the way of direct input to the brain - to my knowledge there's only very low resolution visual input and cochlear implants.

In other words, until we crack more of the brain's neural coding mechanisms and get more advanced high-resolution interfaces, the traditional vision of an exocortex ain't going to work. So let's look at what having an exocortex actually implies:

  1. A computer system permanently implanted
  2. A bidirectional transfer of data in a tightly integrated way between that computer and the brain
  3. Storage of memories in digital form

Point 1 is a simple matter of getting a suitable device implanted. This means taking care of a bioproof case and a power supply (battery plus induction charging seems to be the standard approach here - just clip an induction charger to yourself while you sleep). There are plenty of small embedded devices that could be built for this purpose. Add some wifi and bluetooth to talk to external devices and you don't even need to do all the processing internally - if you want to upgrade later, you could strap on a wearable to augment the processing power of the implanted device.

Point 2 is where it gets interesting. To get straight to the point, we could use EEG-based BCIs, but that's not going to work reliably for complex operations. Invasive BCIs work great for some activities, and external EEG is great (and really fun) for gaming, where it's more about fast reaction time than total precision, but not so great for data input. You wouldn't want to type out a google search using a P300 speller, for example - it'd be painfully slow.
What I propose instead is a set of subdermal EMG electrodes near the throat for subvocal recognition. With training over time, and with the electrodes being in the same position every time, this should become more and more accurate, and it has the advantage of allowing silent input of complex data from the brain at will. Output could be presented using an eyetap (like what Steve Mann uses) mounted to the skull and/or a bone conduction earphone.
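
In rough Python, the overall loop I have in mind looks something like this - every function here is just a placeholder for whatever hardware and recogniser actually get used, not any real API:

import time

def read_emg_window():      # placeholder: grab a window of samples from the throat electrodes
    return []

def decode(window):         # placeholder: the subvocal recogniser
    return ""

def act_on(command):        # placeholder: search, note-taking, web lookup, etc.
    return "did: " + command if command else None

def present(text):          # placeholder: eyetap overlay and/or bone conduction audio
    print(text)

while True:
    result = act_on(decode(read_emg_window()))
    if result:
        present(result)
    time.sleep(0.05)        # poll at roughly 20 Hz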

Now we get to point 3. This is mostly a question of having sufficient storage space. You could augment it by recording imagery from the eyetap and audio, plus some added medical info (pulse, blood oxygen + glucose levels, neurological state). Add some basic speech recognition and geotagging and you could search for memories based on location, time, content of conversations, people present and so on - you could run a search for "all times my friend Bob talked about biohacking", for example. The real constraint here is storage space, and you could make tradeoffs such as storing only low-resolution single visual frames taken every 5 minutes unless manual recording is turned on, or other configurations.
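
To make that a bit more concrete, here's a rough sketch of what a single memory record and a search over it might look like - all the field names and helpers here are made up for illustration, nothing more:

from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    timestamp: float                              # unix time the frame/clip was captured
    location: tuple                               # (lat, lon) from geotagging
    transcript: str = ""                          # output of basic speech recognition
    people: list = field(default_factory=list)    # people tagged as present
    frame_path: str = ""                          # low-res still from the eyetap
    vitals: dict = field(default_factory=dict)    # pulse, blood oxygen/glucose, etc.

def search(records, keyword=None, person=None):
    """Naive search over stored memories by transcript content and people present."""
    hits = []
    for r in records:
        if keyword and keyword.lower() not in r.transcript.lower():
            continue
        if person and person not in r.people:
            continue
        hits.append(r)
    return hits

# "all times my friend Bob talked about biohacking":
# results = search(records, keyword="biohacking", person="Bob")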


The end result of this whole setup is that you would have a system with you 24/7 capable of augmenting your natural memory, performing complex computational operations, looking up information online and so on. While you can of course get a lot of these advantages using a smartphone and wearable devices, integrating it this tightly would, I believe, make it far more useful. A lot of the system could be tested using external wearables until it's ready to be prepared for implantation, so it's a feasible project to work on. As technology improves in future, adding more advanced interfaces would make it more and more tightly integrated and more and more useful.

Comments

Displaying comments 1 - 30 of 70
  1. I have a question regarding your memories and decoding the brain. If you show an image someone has never seen and instantly have them try to think of a picture of that image, could you not use that as a way to "decode" the information? You can use it as a Rosetta Stone for the brain, as long as what the person is thinking of is that image.
  2. They've done that with fMRI scans. I remember reading a study that allowed some identification of what was being shown based on a class of learned images, i.e. they could detect you were looking at an image of a plane after having observed your brain's reaction to a set of images of other planes.
  3. Currently working on a fully-implantable closed-loop brain-computer interface system: http://faculty.sites.uci.edu/ucibci/research/fully-implantable-bci-system/ Their goal is for motor restoration but once a paper is published the hardware setup could be replicated and used for other purposes.
  4. bciuser - the problem with current BCIs, including the one you just linked to, is that they're no good for inputting complex data - it might work great for motor control, but for "mentally typing" or abstract concepts, not so much. My proposal is to enable that, so that the system truly becomes a form of extended cognition.
  5. @electricfeel do you remember if anything came from that? I think that a complex image (or a basic one) being either broken down or slowly added to would be able to "paint a picture" of what means what, especially if that can be told from that study. Obviously you would need quite the database of details, but we already have that... the internet. I don't know if I'm rambling, but it kinda seems simple to me how to break down what certain things mean - at least more so than a dead language. EDIT: Also, I'm not necessarily talking about input so much as "seeing" what's on the mind.
  6. I know of some mapping that's been done with the occipital lobe that's refined enough that they can actually recreate a crude image of what you're seeing. Big difference from "think about an image". Mental constructs are more complicated. If I say think of a tree... the variables involved are vast. Even if I said remember a tree to two different people who were familiar with a particular tree, variance in memory is present. Add in all of the complicated sub-associations... I climb, so I tend towards a kinesthetic memory of bark; you are visual, so on and so forth. Add in the different unique contexts... associations to cutting wood with your father... a great poem you read once. I don't think it would be easy to say "think of a, b, or c" and find a universal output. The closest I can think of is something like PET scan activation of areas related to function. There are patterns discernible, for example, when you do math. Even this isn't universal... for example, something like 30% of left-handed people have certain language centers opposite of normal.
  7. garethnelsonuk - If what you want is a pseudo-exocortex right now, you're absolutely right that a BCI would probably not be the way to go. They are slow, computationally expensive, difficult to implement, and pretty inaccurate. However, the alternative you've suggested seems like a homemade version of google glass that constantly records video and is hooked up to a pulse oximeter that requires neck stabbing.

    That being said, I really like the idea of integrating subvocal recognition with a currently-existing technology like google glass. If you're actually interested in pursuing this as a project that is where I think it would be best to start.
  8. I was inspired by wearables, true - but something much more integrated than a standard wearable (I hate how google glass is the go-to for this stuff; Steve Mann's wearcomp did it first and I've had a wearable myself for years before google glass came along). An external device can't really be thought of as being an actual exocortex - it's not with you 24/7 and integrated. The idea here is to build something and integrate it as much as possible into the human body, with space for future upgrades to integrate it even more tightly. Starting off with just subvocal recognition would go a long way towards making using the thing second nature, and by integrating it you can truly claim it to be part of yourself.
  9. I would be down to play with subvocal hardware, but I haven't had much success building piezo pickups.
  10. Piezo pickups really wouldn't work for true subvocal recognition - you need EMG to do that.
  11. K. Here's some of the more recent and successful work on semi-complete subvocal recognition: https://mazsola.iit.uni-miskolc.hu/~czap/letoltes/IS14/IS2014/PDF/AUTHOR/IS141213.PDF They sampled at 20kHz over 5 channels on the face and neck.

    To get decent EMG signals, you'll need to amplify the signal. If you want to buy amplifiers, you can find some for general biological signal acquisition here: https://www.biopac.com/Research.asp?CatID=32&Main=Amplifiers or you can build your own: http://www.ece.utah.edu/~ece1270/ECE1270_Lab1bU11.pdf (just google EMG amplifier or EEG amplifier).

    It looks like it's going to be a very large and obnoxious setup. Best of luck making it small enough to shove under the skin! Let me know if you need any help.
  12. EMG and EEG both use the same basic approach to signal acquisition, and there are single-chip systems you can get which combine amplifier and ADC in one. After acquiring the signal, the complex bit is feature extraction and classification - for simple EEG that's fairly easy (for gaming I personally just use frequency bands and meditate to get some control); for EMG (and for the kind of EMG we want) it's going to be more complex, but still within the realm of what an embedded ARM platform can do.
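
    To illustrate the "frequency bands" bit, here's a minimal sketch of band-power feature extraction for one channel (numpy only; the 256Hz sample rate and the band edges are just example values, nothing canonical):

    import numpy as np

    FS = 256  # example sample rate in Hz

    BANDS = {"alpha": (8, 12), "beta": (12, 30)}  # example EEG bands

    def band_powers(samples):
        """Return mean spectral power in each band for one channel of raw samples."""
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
        return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    # features = band_powers(one_second_window)  # these go to the classifier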
  13. Alright, feel free to post your setup and ICs of choice when you decide on what would be most appropriate. 
  14. Still very much at the conceptual stage right now - for a quick prototype I'd grab one of those TI chips (don't recall the exact part number off the top of my head, it's been mentioned on these forums) and an arduino hooked up to a raspberry pi with python and GHMM (see http://ghmm.org/) for classification. That would be totally inappropriate for something implanted - for an implant I'd want to do a custom system with an ARM chip and the instrumentation amplifier all on the same board.

    To be honest, I'd really prefer to get hold of an existing dataset taken from some high quality lab hardware to tweak the software on before starting to deal with the hardware setup - the hardware setup is mainly a rather boring PCB design issue that could be done by a contractor with enough funding (and if we have more of the software in place, this could be feasible to crowdfund as a wearable which can later be developed into an implant).

    If people are willing to help out on this project it could be done in stages, starting with core UI design and then expanding to use traditional speech recognition before moving to the full thing using subvocal EMG. It'd certainly make for an interesting goal to move towards.
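
    For anyone wondering what the classification side would roughly look like, here's a sketch of the usual one-model-per-word HMM approach - using hmmlearn here instead of GHMM purely because I can vouch for its API, and assuming feature extraction already produces a sequence of vectors per utterance:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # stand-in for GHMM, same basic idea

    def train_word_models(training_data, n_states=5):
        """training_data maps each word to a list of (n_frames, n_features)
        arrays of EMG features. Trains one HMM per word."""
        models = {}
        for word, sequences in training_data.items():
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
            m.fit(X, lengths)
            models[word] = m
        return models

    def classify(models, sequence):
        """Pick the word whose model gives the new utterance the highest log-likelihood."""
        return max(models, key=lambda w: models[w].score(sequence))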
  15. Python? Sorry I'm allergic to interpreted languages. I'm going to be starting EMG analysis for other reasons in the next few weeks, I'll post up something if that turns into a UI. 

    You're not going to be able to get close to 20kHz from an arduino over 5 channels if you want to do any data processing other than immediate storage to external memory, which at those sampling rates you begin to need after a little less than a second.

    However, I do just so happen to have access to quite a bit of high-quality signal acquisition hardware and a trove of EMG data, let me know if you want me to grab some.


  16. Access to fancy equipment and data? Well, now I've got tingles :P That's like my favorite sentence after "yes, here's a million dollars". Color me officially interested. So I can certainly help out once you get to the subdermal stage, and by the time you get there, who knows - maybe I'll have enough gear to make one of those expensive arrays and we could see if we could make this work in a rat or something.

    As to the prototype, I think you're on the right track. Although if you're gonna try interfacing with the brain anyway, I'd say you'll have better luck directly tapping the nerves that control your vocal cords or the parts of the brain that control speech. Would it be harder? A little. But the quality of the signal would be unparalleled.

    The hardest part of this though is how the hell you store it. I feel like that's where quantum computing will come in. And since we're only a few years off by the time you've got this truly ready to implant onto someone's brain, that tech will be available. Although for now you're probably better off just having something that connects wirelessly and handles the computational and storage stuff, which you've already pretty much got covered. The programming needed to do real-time analysis of this will be a bitch, so you should probably take bciuser up on the offer and start building something that can turn the data into something useful.
  17. You could probably get a decent sample rate if you moved from an arduino to an arduino-like platform. The Photon and Teensy can both run at ~100 MHz and perform quite solidly.
  18. Ok, so I talked to one of my graduate student homies, and apparently he was able to perform logistic-regression-based binary classification for EEG analysis in real time on an arduino at up to 512Hz over four channels. Obviously non-binary classification, such as differentiation between 30+ phonetic sounds, will probably run a wee bit slower than that, but that's what has already been done.
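
    For reference, the inference side of that is tiny - just a dot product and a sigmoid over whatever features you extract per window. The weights below are obviously placeholders, not anything trained:

    import math

    # placeholder weights/bias learned offline, one weight per feature
    WEIGHTS = [0.8, -1.2, 0.4, 0.1]
    BIAS = -0.3

    def predict(features):
        """Binary logistic regression: probability of the 'intent' class."""
        z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
        return 1.0 / (1.0 + math.exp(-z))

    # intent_detected = predict(features_for_window) > 0.5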
  19. @bciuser Python is only used as "glue", the heavy data processing happens in native code, it's just quicker to prototype that way. As for data, if you have a collection of EMG recordings made from people saying particular words or phonemes do share! As for the arduino, that's just for prototyping too, actual data processing would be on the ARM board, arduino just acting as a simple relay for the data from the instrumentation amplifier to a USB port.
  20. By the way, EEG with arduino isn't exactly a new or weird thing: http://www.openbci.com/ The trick is to just use arduino for reading the data and possibly doing FFT before presenting the data to another computer for the in-depth analysis. Arduino is just convenient to work with when building the hardware interface.
  21. EEG Collection with the arduino isn't a new thing, having all of the real-time processing done on a microcontroller independent of some beefy computer is a new thing. That's what I was referring to in the above comment ^. I don't have EMG data for spoken sounds, mostly just walking and flexion, but I can definitely collect some on myself. EDIT: Damn, that means I'll need to shave my beard. Oh well.
  22. I'm not insane enough to suggest doing heavy processing on the arduino itself - I think you misunderstood me there. The arduino is just to act as a relay between the external hardware and the main computer (which would be powerful enough to do the real processing). If you can get some EMG recordings made, the following would be an awesome start:

    For each of the digits 0-9 and a set of simple words useful for a UI (up/down/left/right/back/enter etc):
      1. An audio recording of the audibly vocalised version along with throat EMG
      2. Throat EMG with no audible vocalisation
      3. A number of repeats of the above (so that we can train a classification algorithm on half the dataset and test it against the other half)
      4. If possible, repeating with different people

    Throat EMG should be 2 channels, similar to this guy - not sure if there's a formal system for electrode placements on the throat like the 10/20 system for EEG: http://www.nps.edu/Media/PhotoGallery/Images/midsize/78101main_ACD04_0024_004%20copy.jpg Having that dataset (and the more repeats the better) would go a long way towards tweaking some software.
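
    To keep the recordings organised, something like this could spit out the prompt schedule and label file up front - the file naming is just a suggestion, and the actual EMG/audio capture is whatever your rig does:

    import csv
    import random

    WORDS = [str(d) for d in range(10)] + ["up", "down", "left", "right", "back", "enter"]
    CONDITIONS = ["vocalised", "silent"]   # audible speech + EMG, then EMG only
    REPEATS = 10                           # half for training, half for testing

    trials = [(w, c, r) for w in WORDS for c in CONDITIONS for r in range(REPEATS)]
    random.shuffle(trials)  # randomise order so fatigue doesn't bias any one word

    with open("prompts.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["trial", "word", "condition", "emg_file", "audio_file"])
        for i, (word, cond, rep) in enumerate(trials):
            base = "s01_{}_{}_{:02d}".format(word, cond, rep)   # subject 01; bump for other people
            writer.writerow([i, word, cond, base + ".emg.csv",
                             base + ".wav" if cond == "vocalised" else ""])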
  23. You're willing to sacrifice a beard in the name of science? I salute you sir!
  24. Lol you misunderstand me, I AM insane enough to suggest the data processing can be done on the arduino, in real time, at least for binary classification of motor intention using EEG. Probably not for this, and that's beside the point because what kind of idiot prototypes their data processing on an arduino when they don't need to. I'll try to get those recordings within a week or two. I'll update if I have difficulty.
  25. If you could pull off proper subvocal recognition using an arduino that'd be awesome, but somehow I don't think it's possible...
  26. Well, something up my alley! Being a micro-computer and single-board-computer geek, I like where your thoughts are going. Personally I wouldn't go down the Arduino or Pi route even for prototyping, but that's because I (a) haven't worked with either (b) don't own either. A few years ago I would have suggested Gumstix as a prototype of choice (they already have similar limitations from size, which gives you realistic resources to work with, and you can get used ones cheap). I don't have anything EEG out of storage at the moment, but I wouldn't mind contributing some additional data for you in the future.
  27. I only suggest arduino and pi for the prototyping stage because I happen to have both lying around. For the end product i'd want a custom board.
  28. Looking further, Intel Edison looks almost perfect for this task - paired with an inductive charger it could even form the basis of an implant
  29. I've heard that the edison is a royal pain to work with, but I haven't gotten my hands on one to play with yet.
  30. So, where do you want to do your data storage? And would you be storing the raw EMG signals your device is acquiring during the operation? Or whatever the computer translates those signals into? (Storing the raw data vs. storing a String of whatever word that raw data was converted into?)

     I could see the value of storing the raw data somewhere (in the cloud or on another external device) for the first phases, since it would improve the system's ability to recognize each user's speech patterns over time.

    In terms of hardware specs, royal pains aside, the Edison blows both the Teensy and the Photon out of the water: 2.4 GHz processor clock speed and 1600 MHz memory speed. In terms of pricing, you're "getting what you pay for" - it's around triple the price of a Teensy 3.1, excluding extra features (on Amazon, a Teensy 3.1 costs $20.25, while an Intel Edison costs $59.62). As for programming, yes, the Edison will more than likely be a total pain to work with, since it's only compatible with their compiler (or something along those lines). An Edison is also roughly double the size of a Teensy (Edison: 4.2 x 3.4 x 1.6 inches. Teensy: 1.4 x 0.7 inches). I presume that 1.6" height is because of the antenna or a heat sink.