The biohack.me forums ran on Vanilla from January 2011 to July 2024. They are preserved here as a read-only archive. If you had an account on the forums and your posts are in the archive, and you would like them anonymized or removed entirely, email us and let us know.

While we are no longer running Vanilla, Patreon badges are still being awarded and shoutout forum posts are still being created, because both are handled directly in the database by an automated task.

Echolocation implant

Comments

Displaying comments 31 - 60 of 68
  1. The brain chip isn't that far off.  

    It would need to be much more complex to convey sight, obviously, but this shows that it is possible.
  2. I think any idea that starts off trying to one-up nature is bound to fail.  You can't outdo what billions of years of evolution have designed.  Our goal as grinders should be to utilize what's there and augment it, not replace it.  This is mostly directed at the first few posts.  
  3. Disagree, Tim. The problem with evolution is that genetic change can only work on preexisting structure. For example, wheels are a highly effective means of moving around, but they never emerged in biological systems. I outdo evolution every day that I drive a car. Just a little side debate. Lol. I still sure as hell wouldn't give up my hearing for sonar. :)
  4. Tim, the whole point of technology is to one-up nature.  Evolution is a random process, taking generations, with no thought to efficiency. For evidence, just look at all the blank spots and junk data in our genetic code.  Whereas technological development is purpose-driven, can be rapidly tested and adjusted, and is usually as efficient as we can make it.
  5. I was going to say something like the two above as well... The whole purpose of biohacking is something of a "directed evolution", not just mixing up our human material with existing natural ideas without even attempting to improve upon them. And as for being doomed to failure, I would submit that RFID implants are a blatantly successful improvement upon nature - let me know when you find any naturally-evolving organism that is MIFARE-compliant. ;)
  6. @tim116: The point of biohacking is to improve on nature's glaring failures. That's why we're all here.
  7. I was contacted by a retired professor from Harvard who did some work on echolocation. My mind is blown. I've been experimenting with his methods all weekend. He pointed out to me that:
    1. Shouting in a canyon generally results in hearing an echo.
    2. Shouting in a parking garage results in hearing an echo.
    3. Shouting in your bedroom generally produces no echo.

    Even a bare room produces very little noticeable echo. The sound should still be bouncing off the walls and producing an echo, though, right? Well, it turns out this phenomenon of not hearing an echo is caused by an illusion in our auditory system called the precedence effect. Basically, if we hear the same noise twice within a certain amount of time, our brains are wired to register only the first of those noises. So talking in a small room is tolerable in part because of the precedence effect; otherwise we might hear some obnoxious echoes. Below is part of the email he sent me:

    "sometime in the late 80's after I' d published an article on the comparative anatomy of the cochlea and  audition, I noticed that "Action Ear, Inc." had come out with percussive-sound inhibiting ear muffs for shooters. The ear muff had a little microphone build in which cut-off at the high decibel level of a gun shot, but remained functional for conversation and other ambient sound (such as echos) at lower decibel levels. I knew at that moment that I would at last be able to hear the close object echos that bats can hear, but we can't. 
      Because of the "precedence effect" - or "law of the first wave front"- by which a nearby sound - like a shout or clapped hands - inhibits the perception of its' reflected echo for a short period of time, essentially making it impossible to hear an echo in anything under 20 feet or so. One of the ways bats get around this, I am told, is by tightening their tympanic membrane (or spadial muscle?) when they cry at echo-locating frequencies thus partially deafening themselves for just long enough to overcome this effect and allow the perception of an echo from a nearby object. 
      Sure enough, it worked. The shooter's muffs functioned the same way.  It was great fun. I could now hear echos of loud claps or clicks in a closed room as could anyone else who tried on the shooters muffs.  
       When I demonstrate this to a patent attorney, I told him to clap you hands three times, getting louder each time.  With the first two claps he registered nothing, but at number three he exclaimed" It came from my secretary's office!".  
       I thought at the time this might be useful for the blind, and, indeed, felt myself more secure walking around a room with my eyes shut when I had them on and was making clicks with a clicker, but that's as far as I took it."

    So naturally I went and got some shooting earmuffs, went home, and started clapping. Sure enough, when I clapped loud enough, the mic cut off for a few milliseconds, and when it came back on I could hear the echo of my clap returning to my ears. I have experimented with some of the methods from the link above, like the palate clicking and judging object placement by observing audio obstruction. The effect of the shooting earmuffs is much richer and could even border on Daredevil-esque spatial awareness. I'm really excited about it.
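
    (A quick sanity check on the professor's 20-foot figure, sketched in Python; the speed-of-sound value is the standard room-temperature one, not from the email:)

        # Round-trip delay for a reflector 20 feet away
        distance_m = 20 * 0.3048            # 20 ft in metres
        speed_of_sound = 343.0              # m/s at roughly 20 degrees C
        delay_ms = 2 * distance_m / speed_of_sound * 1000
        print(f"round-trip delay: {delay_ms:.1f} ms")  # ~35.5 ms
        # That sits within the tens-of-milliseconds window over which the
        # precedence effect suppresses perception of a repeated sound.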
  8. Here are the problems I need to work through.
    1. The muffs are set up to turn off after a loud noise. This means that you need to walk around clapping your hands really loudly.
    2. The clapping has to be done with perfect timing. Typically, a loud clap will shut off the mic; then you can make a second clap. When the mic comes back on, you will hear the echo of the second clap.
    3. The clap has to be just right in terms of loudness. If the second clap is too loud, even its echo will be enough to shut the mic off again. I think the sound is quieter on the return depending on how far away the object is, so clapping next to a wall requires a much softer second clap. Too soft, and the wave will be too faint to detect off faraway objects.
    4. The muting of the earmuffs means that all sound is cut off for half a second, which sucks.
    5. Walking around with ear muffs on while clapping your hands loudly in public will result in an ass kicking.

    Here is what I'm thinking:
    1. Use an ultrasonic rangefinder instead.
    2. Set the rangefinder to emit an ultrasonic chirp 10 times/second (random estimate).
    3. When the speaker is emitting noise, the mic is turned off. This way the mic doesn't constantly detect the speaker and keep turning itself off.
    4. When the speaker shuts off, the mic turns on, hopefully hearing only the echo of the emitted chirps. So one is on while the other is off, and then they switch.
    5. The mic sends the sound to the mystery device, which converts the noise into an audible tone that can be heard in headphones. This way we can emit an undetectable ultrasonic chirp (not disturbing others) and hear it in our own ears.
    6. A standard mic can be used simultaneously and layered over the echolocation sounds.

    Alright guys, tear it up. Do you see flaws in my thinking? Think this will work? Did I over- or under-complicate this?
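
    (A minimal timing sketch of the gating in steps 2-4 above, in Python; the window lengths and the four hardware callables are illustrative stand-ins, not anything specified in the post:)

        # Alternate a transmit window (mic muted) with a listen window
        # (speaker silent), at the proposed 10 chirps per second.
        CHIRP_RATE_HZ = 10                     # the "random estimate" above
        PERIOD_S = 1.0 / CHIRP_RATE_HZ         # 100 ms per cycle
        EMIT_S = 0.005                         # 5 ms chirp with the mic off (a guess)
        LISTEN_S = PERIOD_S - EMIT_S           # remaining 95 ms with the speaker off

        def one_cycle(emit_chirp, record_echo, downshift, play_in_headphones):
            """One transmit/receive cycle; the four callables stand in for real
            hardware I/O and for the 'mystery device' in step 5."""
            emit_chirp(duration=EMIT_S)              # speaker on, mic muted (step 3)
            echo = record_echo(duration=LISTEN_S)    # mic on, speaker silent (step 4)
            play_in_headphones(downshift(echo))      # ultrasonic -> audible (step 5)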
  9. 1. Interference!! I wonder how much ultrasonic noise floats about without our even realizing it... 2. I refuse to believe you're serious about this project until it's included in a nefarious plot. :P
  10. Oh, there's a plot. A good plot in fact that involves four million dollars and a bag of otters. That's all I can divulge.
  11. you can very well do this without going the ultrasonic way. a microphone, a speaker and a computer are enough. all you need to do is send out a known signal and correlate it with whatever you record. you can get precise timings for the echoes. with multiple microphones you can even locate the directions the echoes are coming from. no magic involved, except a couple of fast Fourier transforms. no need to shut down the microphone either. it's actually good to have it record the initial signal as well, so you don't have to rely on anything but the recorded signals to get the timing out. if you want to play with it, i am quite sure i have some python code doing correlation on audio samples. you'd need one sample of the audio you play, and the recorded audio signal.
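
    (ThomasEgi's code isn't posted in the thread, but a minimal sketch of the cross-correlation approach he describes might look like this, assuming NumPy/SciPy and two mono signals at the same sample rate:)

        import numpy as np
        from scipy.signal import correlate

        def echo_delays(played, recorded, sample_rate, speed_of_sound=343.0):
            """Correlate the known signal against the recording; peaks after
            the direct-path peak correspond to echoes."""
            corr = correlate(recorded, played, mode="full")
            lags = np.arange(-len(played) + 1, len(recorded))  # lag in samples
            corr, lags = corr[lags >= 0], lags[lags >= 0]      # recording can't lead playback
            direct = lags[np.argmax(np.abs(corr))]             # strongest peak: direct sound
            delays_s = (lags - direct) / sample_rate           # echo delay after direct sound
            distances_m = delays_s * speed_of_sound / 2.0      # one-way distance to reflector
            return corr, delays_s, distances_m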
  12. Very cool information DirectorX, appreciate the repost. And yeah, what Thomas said about being able to do it all from a computer should work great, especially for testing various variables before integrating the final bits into a smaller chip.

    You can source what are called "binaural mics" for not too much; they will have great quality compared to any cheap mic you might otherwise use, and most decent (read: over 20 dollar) speakers will be able to reproduce this well enough to convey 3D space. And you are going to want mics and drivers that go up to at least 20kHz; even though you might not hear above that, a driver that can go up to 30kHz will perform far more accurately at 20kHz than one that only goes to 20kHz. The high ranges are very important to 3D hearing, as high pitches are more directional. Though I'm sure you already know most if not all of this.

    But yeah, you'll probably want to do a lot of studying on acoustics as well as binaural audio, as proper binaural mic technique will be important for getting really exact hearing when passing through mic and driver.

    As a side note, here is a holophonic recording I've done (I wrote binaural, but realized later it is technically holophonic), as well as a really good example of how accurate things can get.

    https://soundcloud.com/john-calligan/binaural-test-1

    https://soundcloud.com/pyunghee-won/3d-sound-holophonics

  13. Also, I forgot to mention: do you guys really find it that hard to hear echoes in small rooms? I've never been in a room where I couldn't click (with my mouth) and hear where exactly in the room the most reverberation was coming from. I have been training my ear for a while, but... I'm not sure if I'm not normal or if I'm just not understanding this right.
  14. @ThomasEgi: Nice. You don't think I'll have to shut down the mic? Won't the first sound ruin the second? I'll try to make a good recording.

    @John: Thanks man. Actually I don't know any of this stuff so it is all good info. I know what you are saying about the echo, but muting the first sound has a pronounced effect. It sounds like the noise is coming from whatever location it is reflecting off of. It is really hard to explain. The ear muffs are about $30 to $80.
  15. as long as the microphone doesn't go into clipping (which it shouldn't do in the first place), that's not a problem. as i mentioned, it's even beneficial for timing analysis.
  16. I forged a sound https://soundcloud.com/rich-lee-8/echolocation-test-sound-1
    but I have no editing software to trim it.
  17. a friend of mine was at my place today. he's pretty talented with music synthesizing. we synthesized a sound that's suited for correlation math (we tried hard to not make it too nasty/annoying) http://home.arcor.de/positiveelectron/files/EchoLocPing.wav you can get good correlation results for distances of about half a meter and more. we were able to estimate the distance to the walls in my bathroom pretty well. using the auto-correlation analysis function of audacity and some basic physics, we estimated a 1.6m distance, as close to the real distance as we could measure. admittedly it was an echo from a flat wall, so the signal was clear. but it's a start. with proper cross-correlation instead of just auto-correlation we may get more precision. also, the more microphones, the more interesting the results could get. if one is not afraid of the math, calculating the direction of the echo should be somewhat possible.
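
    (For reference, the conversion behind that estimate: an echo off a wall at distance d arrives 2d/c after the original sound, so the 1.6m reading corresponds to an autocorrelation peak at about 2 × 1.6 m / 343 m/s ≈ 9.3 ms.)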
  18. ran a few more tests. looks like in a less-than-ideal environment, the speaker and microphone need a pretty good frequency response. i'll try reworking the sample.
  19. Nice! I like the sound too.
  20. Director, glad you found my information useful. I believe I have a decent amount of knowledge in acoustics, sound, hearing, audio and the like, so if you have any more questions, ask away.

    You can get Audacity for free audio editing.

  21. @John you mentioned binaural mics. Would these work better than an omnidirectional mic? I also saw these fiberoptic mics that looked nice, but my ignorance on the subject prevents a purchase. It seems like the mics are going to be the most important part of this.
  22. Well, yes, they would; they are essentially the same thing, but binaural mics are tuned for accuracy and matched as a pair, so quality control won't interfere with accuracy. I don't have a clue what a fiberoptic mic is... Never heard of them. Doesn't make any sense really.
  23. @DirectorX link to the fiberoptic mics? I'm with John here, that's confusing the hell outta me O.o
  24. @Saal those are mics made from a fiber-optic cable with a reflective membrane at the end. they have excellent frequency response due to the very light membrane materials, extremely low noise, and they are virtually undetectable. they are most commonly used for surveillance, in MRI machines to communicate with the patients, and in the film industry, since they are pretty much invisible. i don't think microphones that are "that" good are required. any reasonable mic with an appropriate frequency response should do. something that may improve the results is the use of shotgun microphones. those would provide a better selection of the direction to listen from, which gives better-defined cross-correlation results since the rest of the room causes less "smearing" of the peaks.
  25. so, @ThomasEgi, can you describe the prototype and tests you were doing? Did you have headphones on? If you were filtering out noises did you find that closer echoes were occasionally filtered out as well? Or did it work pretty well?
  26. i didn't wear any headphones at all. i played the signal over the laptop speakers and recorded it with the laptop's internal microphone, then ran a cross-correlation over the signal i sent and the one i recorded. right now i can't do new tests, as i am traveling around fixing stuff (also known as tech support work). for now, i'd recommend filtering the all-too-high frequencies out of the signal (for both sending and receiving), maybe keeping it between 500Hz and 5kHz. that may help get a less noisy cross-correlation. when i'm back at my apartment (around sunday or so) i'll try slapping together a small how-to with the code i used (python, scipy etc.), how to get the eye-candy plots, and how to interpret them. remind me to upload that stuff in case i forget.
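
    (A sketch of that band-limiting step, assuming SciPy; the 500Hz-5kHz band is the suggestion above, while the filter order is a guess:)

        from scipy.signal import butter, sosfiltfilt

        def bandlimit(signal, sample_rate, low_hz=500.0, high_hz=5000.0, order=4):
            """Keep only the suggested band, applied to both the sent and the
            recorded signal before correlating."""
            sos = butter(order, [low_hz, high_hz], btype="bandpass",
                         fs=sample_rate, output="sos")
            return sosfiltfilt(sos, signal)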
  27. That would be awesome and much appreciated.
  28. ok, it took a bit longer than i expected. here's the script: http://home.arcor.de/positiveelectron/files/echoLocate.py you need python, numpy and pylab installed. you also need 2 audio files in 16-bit signed wav (audacity can be set to export that). one audio file is the signal you send out, the other is the one you record. the script then searches for the original sound in the recording and plots the correlation around that timeframe. the diagram you get is pretty much the amount of echo received, plotted over the distance the sound traveled. due to the poor directional characteristics of my speaker/microphone setup, as well as the fact that i am forced to test it in my room (which produces an awful lot of echoing since it is an enclosed space), i don't really get any worthwhile results, except that i can identify the echoes off my walls. if someone is lucky enough to run a recording in an open field, or in an anechoic chamber, or with a shotgun microphone, it should give better results.
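
    (The arcor.de link may no longer resolve; here is a rough reconstruction of what the script is described as doing, assuming NumPy, SciPy and matplotlib, with recording.wav as a stand-in filename for the recorded audio:)

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.io import wavfile
        from scipy.signal import correlate

        SPEED_OF_SOUND = 343.0  # m/s

        rate, played = wavfile.read("EchoLocPing.wav")     # the signal sent out
        rate_rx, recorded = wavfile.read("recording.wav")  # stand-in name for the recording
        assert rate == rate_rx, "both files must share one sample rate"

        # Amount of echo received, plotted over the distance the sound traveled.
        corr = correlate(recorded.astype(float), played.astype(float), mode="full")
        lags = np.arange(-len(played) + 1, len(recorded))
        traveled_m = lags[lags >= 0] / rate * SPEED_OF_SOUND

        plt.plot(traveled_m, np.abs(corr[lags >= 0]))
        plt.xlabel("distance traveled (m, round trip)")
        plt.ylabel("echo strength (|cross-correlation|)")
        plt.show()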
  29. Okay, I'm now wondering how much of the precedence effect is mental and how much is physical.

    Mostly I'm wondering how phasing affects this, and if we could make our devices much smaller by just making use of phase. I'll get to the application after I explain my theory.

    So, talking in a large room vs. a small one: first, obviously, in the large room you would hear echoes while in the small one you wouldn't; the precedence effect claims this is all mental, but I'm not so sure. The second difference between talking in a small vs. a large room is the volume of your voice, to both you and others. What's happening, obviously, is that the waves are bouncing off the walls and reinforcing your voice. At the first bounce they align in phase with the original sound waves, but the echoes you hear have traveled and bounced long enough that they are somewhat out of phase with the original sound. This creates a sound which does not reinforce the original, but rather comes to your ears as a sound of its own. So the phase of a sound would determine whether your ears "hear" it separately or not. Length of time in the air and bouncing affects the phase of the sound.

    So I was thinking about a very, very simple device: a microphone, amp, and small speaker, where the amplifier would be made to flip the polarity of the sound. This could be done simply with a balanced circuit that has its wires flipped somewhere, or, I think, most op amps have a built-in way to do this. The sound the speaker produced would be directly out of phase with the sound the microphone picks up. These would go over your ears like headphones. Though I'm starting to doubt myself as I write this; this might just act like a noise-canceling headphone. Or maybe we could design a circuit that didn't just flip the phase, but moved it halfway out of alignment. This wouldn't necessarily cancel out the sound, and it also wouldn't reinforce it. In theory this would allow room sound to become unique rather than just adding to the original sound. So the room could be heard separately and almost immediately.

     In theory...

  30. your phase idea doesn't really match the physics. although it is true that you can create a standing wave, this pretty much only works for a single frequency, and a noticeable effect is only obtained in a geometry like a tube. most sounds, including voice, consist of a giant mix of different frequencies, each with a different wavelength. the precedence effect is, in fact, all mental. even small rooms produce an echo, but the walls dampen the signal a bit each time it bounces off, and it ends up superpositioning itself all over again, causing it to sort of blur itself out so it's less noticeable to your brain. a good and easy-to-understand example of echoes is the impulse response. impulse responses are used for audio effects when you turn on settings such as "concert hall" in some audio players. those impulse responses are made by firing a short loud impulse (usually a small explosion) and recording the echo, which represents exactly the echo of the room. this works for everything: small rooms, big rooms, caves, etc. you may be able to find some recorded impulse responses if you search for them online. since it is a bit hard to use explosions every time, i used a more reproducible signal (posted earlier) and some signal-processing math to get a result which is about the same as an impulse response (a bit less accurate across all frequencies, but good enough for tracking echoes). you are welcome to play with the above script. running it in big and small rooms, you can see how the room affects the echo intensity you get.
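
    (The "concert hall" trick described above is convolution reverb; a minimal sketch, assuming SciPy and two illustrative filenames, a dry recording plus a recorded room impulse response:)

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        rate, dry = wavfile.read("voice.wav")      # illustrative filenames,
        rate_ir, ir = wavfile.read("hall_ir.wav")  # not from the post
        assert rate == rate_ir

        # Convolving a dry signal with a room's impulse response applies the
        # room's entire echo pattern to it: the "concert hall" effect.
        wet = fftconvolve(dry.astype(float), ir.astype(float))
        wet = (wet / np.max(np.abs(wet)) * 32767).astype(np.int16)
        wavfile.write("voice_in_hall.wav", rate, wet)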