Echolocation implant


Comments

  • so, i'm just curious. since human echolocation is a thing, like, totally a skill that one can develop, why is this not being discussed? yes i know, metal beats meat, rah rah rah, etc. but still... isn't developing the base system to its optimal level a primary step?
  • knowing what the echo looks like may help you find an environment where you can train yourself. i'm just providing tools and theory here.
  • Yeah Thomas, I got less interested in my idea as I was typing it. Figured I would post it anyways to spark discussion.

    But still what remains is that audio bounces off the walls fast enough to affect our perception of the sound as it comes in. Your voice sounds perceptibly different in a big room versus a small one. So your mind is processing the initial echoes, but just adding them on to what is heard rather than treating them as a sound of their own. Not sure what this has to do with anything biohackish though. And it's not particularly new information...

  • I'm waiting on a raspberry pi to arrive in the mail so I can test out that code. I appreciate that a lot, @ThomasEgi. I don't know if you got a chance to mess with it too much more, but if you only have a room to work in, it might not be so apparent. Opening the door to the room helps a lot. I had someone slowly closing a door while I was testing out my prototype, and I could tell exactly how ajar the door was just by the audio.

    I did one kind of cool test that could possibly be improved upon. Still using these damned shooting muffs, I would trigger the muting function with a loud noise, then turn a large ratchet I have so that it is spinning and making a lot of clicking noises. If you do it at the right speed, the echo comes back and adds to the noise you are making, completely throwing off the rhythm of the actual ratchet (see the rough timing numbers after this comment). So the thing that is weird to me is that these ratchet clicks are so identical in sound that it seems like the precedence effect should kick in, mistaking the current actual click for the original sound and cancelling the echo. I wonder if this rhythm thing could be further exploited in the future. Anyway, I was impressed with how precisely the brain filters this stuff.

    I can see how having multiple microphones would be beneficial. Am I right in thinking that the opposite would be true with the speaker and that one speaker would be better than two or more speakers for this?
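For a sense of the timing in the ratchet test above, here is a rough back-of-the-envelope sketch. It is not from the original posts: it just assumes a single reflecting surface and a speed of sound of about 343 m/s, with the distances picked as example values.

    # Rough timing for the ratchet/echo test: how long the echo from a
    # reflecting surface takes to come back, and the click rate at which
    # that echo would land halfway between clicks. Assumes a single
    # reflector and a speed of sound of roughly 343 m/s; distances are
    # just example values.

    SPEED_OF_SOUND = 343.0  # m/s, approximate

    def echo_round_trip(distance_m):
        """Round-trip delay (seconds) for a reflector distance_m metres away."""
        return 2.0 * distance_m / SPEED_OF_SOUND

    for d in (1.0, 2.0, 4.0, 8.0):
        delay = echo_round_trip(d)
        # The echo falls exactly between clicks when the click period is
        # twice the round-trip delay, i.e. at 1 / (2 * delay) clicks per second.
        print("%4.1f m -> echo after %5.1f ms, offbeat at about %4.1f clicks/s"
              % (d, delay * 1000.0, 1.0 / (2.0 * delay)))

At a few metres the round trip is on the order of 10-50 ms, which is roughly the range over which the precedence effect normally fuses a reflection with the original sound, so a steady click train near that rate sits right where the fusion can break down.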
  • with multiple microphones you can mathematically construct a signal which is more direction specific. a bit of theoretical background can be found here http://www.analog.com/static/imported-files/application_notes/AN-1140.pdf

    the same can be done in reverse, creating a directional speaker from an array of speakers. so it would be quite the opposite: the more speakers, the better (given you drive them with the correct signal). for simple experimentation without beamforming arrays, a single speaker probably works best because it avoids unwanted interference.
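To make the microphone-array idea a bit more concrete, here is a minimal two-microphone delay-and-sum sketch. It is not code from this thread, just an illustration of the principle in the linked app note; the mic spacing, steering angle, and function name are made up for the example.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, approximate

    def delay_and_sum(left, right, mic_spacing_m, steer_angle_deg, sample_rate):
        """Two-microphone delay-and-sum beamformer (integer-sample delays only).

        left, right: 1-D numpy arrays of equal length, same sample rate.
        steer_angle_deg: direction to listen toward, 0 = straight ahead (broadside).
        """
        # A plane wave arriving from the steered angle reaches one mic this
        # much later than the other (extra path length / speed of sound).
        extra_path = mic_spacing_m * np.sin(np.radians(steer_angle_deg))
        delay = int(round(extra_path / SPEED_OF_SOUND * sample_rate))

        # Shift one channel so sound from the steered direction lines up,
        # then average. Sound from other directions stays misaligned and
        # partially cancels, which is what makes the result direction-specific.
        if delay >= 0:
            a, b = left[delay:], right
        else:
            a, b = left, right[-delay:]
        n = min(len(a), len(b))
        return 0.5 * (a[:n] + b[:n])

With more microphones the same idea scales to one delay per channel before summing, and applying the mirrored delays to an array of speakers steers the transmitted beam, which is the reverse case described above.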
  • edited September 2013
    @ThomasEgi: Quick questions on the code. The pylab stuff isn't called in the script, right? Was it just used to match up the two audio files? Do those audio files need to be the same length? My friend keeps getting an error unpacking a return value. Are you using python 2.7.5 or 3? Was that the code you used or were some edits made after?

    @John, @ThomasEgi: How do you guys feel about saw waves for this?
  • @DirectorX i use python 2.x. if you get unpacking errors, double check the wave files. they must be mono, 16-bit signed PCM, WAV. the files can be different lengths. saw waves won't work too well for correlation as they are very periodic, yielding undesirably blurry correlation results.
    Audacity can export the desired format by selecting "other uncompressed format" as the file type and then selecting "WAV (Microsoft) signed 16-bit" in the options.
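For anyone who just wants to see the shape of the correlation step being discussed, here is a minimal standalone example (not ThomasEgi's actual script): it loads two mono 16-bit signed PCM wav files, cross-correlates them, and reports the best-matching lag. The file names are placeholders.

    import wave
    import numpy as np

    def load_mono_16bit(path):
        """Read a mono, 16-bit signed PCM wav file into a float numpy array."""
        w = wave.open(path, "rb")
        try:
            assert w.getnchannels() == 1, "file must be mono"
            assert w.getsampwidth() == 2, "file must be 16-bit"
            rate = w.getframerate()
            frames = w.readframes(w.getnframes())
        finally:
            w.close()
        return np.frombuffer(frames, dtype=np.int16).astype(np.float64), rate

    # Placeholder file names: the emitted ping and the microphone recording.
    ping, rate = load_mono_16bit("ping.wav")
    recording, _ = load_mono_16bit("recording.wav")

    # Full cross-correlation. The biggest peak is usually the direct sound;
    # smaller peaks arriving later are reflections (the echoes of interest).
    corr = np.correlate(recording, ping, mode="full")
    lag = corr.argmax() - (len(ping) - 1)
    print("strongest match at %.1f ms" % (1000.0 * lag / rate))

This is also where the saw-wave problem shows up: a strongly periodic ping correlates almost equally well at every multiple of its period, so the peaks smear out, whereas a short click or chirp gives one sharp maximum per arrival.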
  • Take a look at "Physical Analysis of Several Organic Signals for Human Echolocation", Acta Acustica united with Acustica, Volume 95, Number 2, March/April 2009, pp. 325-330(6). If you are serious about it, it may be a good idea to get in direct touch with Professor Martinez et al., who have studied the training process. And of course with Daniel Kish, who besides being an expert in the subject uses it to navigate around 24/7, as he is blind. Warwick and company are more interested in playing around and making headlines.

