Looking for help on a hearing modification

Hi All
New here, so forgive any lapses in protocol. I'm looking for help on a project I'm working on. I've recently become a hearing aid (HA) user as my hearing is degrading for reasons unknown. Unfortunately, hearing aids aren't exactly what you might call interactive. I can't change any of the settings on my current pair (Starkey X series). I can see why this is OK for old folks, but for a young guy like me it's quite frustrating. 

I made a documentary about it for BBC Radio 4 - tune in next Monday at 11am to hear it. (http://www.bbc.co.uk/programmes/b03nt1st)

Anyway, here's the thing: I have all the technology I need right now in my ears to listen to any data I can reasonably sonify. With a bluetooth or radio adaptor, I can connect the HAs to my phone, and then possibly stream data to my ears constantly using the smartphone as an internet-linked platform. 

With the right custom software, I could listen in on local wifi traffic levels, or hear geomagnetic storms above me. The idea is to build in additional layers of information that I have to instinctively decipher (so not like email alerts -- more like learning to understand parts of the everyday environment we're not normally sensitive to). Even without the phone, I believe the microphones in my HA are probably sensitive to frequencies beyond the normal range of hearing -- why not shift those into audible frequencies, if I can access the code of the HA directly? (Pitch shifting is already a normal feature of HAs.) 

I realise this is a bit less 'wetware' than, say, subdermal implants, but functionally for me, it's no different to hacking the nerves and hairs of my ears, because I rely on HAs. Hearing aids are essentially a convenient surface for non-medical augmentation...

I'm currently working on an arts grant to pay for some of the time / materials needed. My question for this forum is - has anyone attempted this before? Can anyone point me in the right direction for hearing aid hackers beyond the usual google hits?

Comments

  • I forgot to add - if you want to contact me directly you can do so at biohack at frankswain dot com.
  • hi, and welcome.
    i only have a few minutes cause i'm on the run so no long texts atm. one thing you may want to check out is this talk from the 28c3 about hearing aid hacking:
    http://media.ccc.de/browse/congress/2011/28c3-4669-en-bionic_ears.html
    from what i understand HAs come with rather standardized interfaces, so whatever data you want to stream to it is basically a software problem focusing around signal processing.

    most microphones are sensitive to a bit less than the regular hearing range. microphones that cover the entire range of human hearing are expensive, and those that cover more than it are even more so.

    i gotta go for now. see you around :) and feel free to hang out on the irc channel as well, always a great place to brainstorm ideas.
  • Thanks for the link, Thomas. I'll drop Helga a line. Hopefully with the new generation of bluetooth / RF linked HAs I won't even be stepping on any manufacturer toes. We'll see.
  • edited January 2014
    This sounds very interesting! I will look around for any mobile applications that poll smartphone sensors and generate audio output based on readings, they should be easy enough to use "out of the box" with a pairable HA.

    EDIT: I already found one app that could be promising, it monitors network metrics and the pro version appears to have some sort of "signal notification". It may just be a simple trigger like "play a tone when the strength is over X or under Y", but it's something. I don't have a Google account so I can't buy it, but if you want to chance the two bucks on it the link is below.

    https://play.google.com/store/apps/details?id=de.android.telnet

    Found a better Android one. Free app that continually monitors wi-fi signal strength and plays beeps with varying pitch and frequency based on the signal.

    https://play.google.com/store/apps/details?id=de.avm.android.wlanapp

    Also, on iOS the "TeslaBot" app polls the Hall Effect sensor and generates audible clicks like a Geiger counter, which could be useful if you're in an area with lots of magnetic interference. It has adjustable scale to click every 10 or 100 microteslas, so you can set it according to the environment.
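    The click-rate scaling amounts to something like this (an illustrative Python sketch, not the app's actual code; the function name and defaults are made up):

```python
def click_interval(field_ut, ut_per_click=10.0):
    """Seconds between clicks for a Geiger-style magnetometer sonification.

    field_ut: field magnitude in microteslas (e.g. from a phone magnetometer).
    ut_per_click: scale setting, analogous to the app's 10 or 100 uT options.
    """
    clicks_per_second = field_ut / ut_per_click
    if clicks_per_second <= 0:
        return float("inf")  # effectively silent when no field is measured
    return 1.0 / clicks_per_second
```

    So a roughly 50 uT field (Earth's background, give or take) at the 10 uT scale would click five times a second.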
  • edited January 2014
    I've thought about this type of idea in the past, and I really love it. I also think it is interesting to consider it alongside implanted earphones.

    This exploits auditory perception the way Bottlenose does with the somatosensory system. Hooray for neuroplasticity!

    I'm going to start work on writing a mobile app to convert various signals (wifi, magnetism, bluetooth, nfc, time-of-day, gps, altitude, whatever else) into a constant (or not?) stream of sound with whatever properties we'd like.

    Anyone have an idea for a (clever) name for such an app? Best I've come up with so far is "Extrasonic", with a tagline of "Streamable omniscience."

    Now I want a hearing aid... but (Bluetooth) headphones will do.

    EDIT: How about "WiseStream"? I like it.
  • edited January 2014
    @drew I can help with the coding if you're planning on using Android. Just hacked together a bluetooth app for somebody a couple months ago so I can handle interfacing with bluetooth headphones or HA's.

    @Frank Does your current setup have bluetooth? If so, can I get any information, model numbers, etc. related to it and specifically the bluetooth portion? As long as I can find communication specs it wouldn't take long to throw together an app especially if drew is already working on polling sensors.

    EDIT: It occurs to me there are adapters made for this purpose (convert audio directly to an HA signal). If so I just typed a lot for nothing...but can still help with any Android app hah.
  • edited January 2014
    [screenshots]

    So, I made this.

    The GUI is the boring part, I had to write a whole crazy Javascript bridge for iOS to take advantage of the Web Audio API.

    Anyway, the foundations have been laid!

    I took a look at accessing WiFi signal strength data through the iOS SDK, but it seems the only way to do it is via an undocumented private API. Apple won't allow anything using such naughtiness into their walled garden.

    Perhaps I'll release a Jailbreak version with extra features. Even without private APIs, we still get access to a number of sensors: the accelerometer, gyroscope, and magnetometer are accessible via public APIs, and it looks like Bluetooth signal strength might be possible with some tricks.

    Also, I only have minimal Android development experience, but I'll take a shot at it.

    Now, a question: how do we want to model this data as sound? Amplitude modulation, frequency modulation? Both? Neither? Pretty much anything's fair game, but I need to know what to build here before I continue.

    I wonder how the NorthStar does its thing...
  • @drew: That is kickass! Any way you can have a setting to select the audio model at will? Personally I like the idea of using pitch modulation, but I also think a geiger counter-esque click frequency is a good model. Combined with the magnetic compass that could provide a handy SouthPaw alternative, something like "increase click frequency nearer magnetic north".
  • This whole concept seems doable even without a hearing aid or Bluetooth.  Just play whatever sound-encoded data you like through a standard set of earbuds.  The cool part is deciding what phenomena to monitor.
  • edited January 2014
    @zombiegristle: Thanks!

    A Geiger-counter-like option sounds great, and I'll also put in a frequency modulation option.

    Also, I was thinking the Clock option would be something like a striking clock, with a configurable interval. This is somewhat related.

    How do you suppose we should deal with multiple simultaneous input sources? I was thinking each source could get its own range of frequencies to work within, though how large these ranges should be is something I'm completely unsure of.
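    One naive sketch of the frequency-range idea (all numbers arbitrary, just to make the question concrete):

```python
def band_for_source(n_sources, index, lo=200.0, hi=2000.0):
    """Give source number `index` (0-based) its own slice of [lo, hi] Hz."""
    width = (hi - lo) / n_sources
    return (lo + index * width, lo + (index + 1) * width)

def value_to_freq(value, band):
    """Map a normalized sensor reading (0..1, clamped) into its band."""
    b_lo, b_hi = band
    clamped = max(0.0, min(1.0, value))
    return b_lo + clamped * (b_hi - b_lo)
```

    With three sources over 200-2000 Hz, each gets a 600 Hz slice; whether slices that narrow are distinguishable by ear is exactly the open question.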

    Also, anyone with an iOS device who'd like to do some testing, please let me know :) Again, I'll get to work on an Android version later.

    @mkabala: Yes, sure, the cool part of the concept isn't the device inserted into the ear. However, I'd rather have something more inconspicuous than wires hanging down all the time.

    These are kind-of cute, though I believe they come with a microphone, which is just unnecessary for our purposes. I've heard legitimate hearing aids are ridiculously overpriced, but I haven't actually done any of my own research on the matter.
  • Quick note: working with magnetometers when you have magnet fingers is hilarious.
  • edited January 2014
    @drew: That it is, haha...

    I think multiple input sources could be handled as either different pitch ranges (though that could get confusing), or different output modes (bluetooth is pitch, magnetometer is geiger-clicky, etc.). The latter might make it easier to differentiate quickly while listening, but unless you can think of a good third output mode it's stuck with just two sensors at a time.

    Also, I have two iOS devices if you ever want me to do some testing for you. I'm hit-or-miss on here, but if you e-mail me (it should show up on my profile page) it goes to my phone and I'll reply as soon as I can.

    Have you seen those "Lepka" or whatever they're called environment sensors? They plug into the headphone jack of an iOS device and there's one that supposedly detects ionizing radiation. Maybe you could add that as an optional input source (and/or any other headphone-jack-based sensors)?
  • If you use a MIDI library, you could use a variety of instruments to distinguish which sensor is reporting.

    E.g.: magnetic field = bass drum, running at a steady tempo, getting lower/higher as the field changes.

    Radiation = cymbal.  Longer notes mean higher amounts of radiation.

    Light = piano.  Higher keys = brighter; lower = darker.

    By using multiple instruments, you could potentially run them all at the same time and still differentiate them.
  • edited January 2014
    @iexiak: That sounds like too much input to me. Different instruments allow easy differentiation, but they basically "waste information" (if you go by a generic IT way of thinking).
    This guy is colorblind and hears colors. The sounds differ only in pitch; volume is used to describe saturation, IIRC. It seems like he can use the sense pretty well this way.
  • I guess it depends on the fidelity you're looking for.  In your example, he is only doing colors and is looking to completely replace his ability to see colors.  In mine, you're trying to determine multiple sensor values quickly.  They have completely different use cases.

    Take Geiger Counters vs radiation alert badges.  One lets you measure the exact amount, but requires a full device/training/etc - while the other doesn't provide anywhere near the same amount of information, just enough to let the wearer know they are in danger.

    So it comes down to what the use case is for these sensors.  Do you really want to know the exact strength of the magnetic field you just entered, or would a simple ding be enough?  You could easily pull out your phone to get the detailed data without having to train your hearing to determine this.  Same with a compass: have a simple drum kick, like a heartbeat noise, when you point north.  If you ever need a compass, just spin until you point north and figure it out from there (or pull out your phone for more detailed info).  In a radiation zone?  Use Geiger-counter-style clicking.

    Essentially, I was just offering a way to utilize more than 1 sensor at a time.  I don't think this should replace just looking at your phone for an accurate report of sensors attached to it.  But I do think it could be used as a good quick measure of what is around you.
  • My take on the instrumental output, is that if I'm hearing what sounds like an orchestra tuning/warming up everywhere I go, I promise you I will never use this app. If it's something simple and unobtrusive, it will be a daily staple. More than two sensors being monitored over audio could get cluttered no matter HOW they're being modeled, unless they go off so infrequently that any output at all is worth noting (like radiation).
  • Actually, turning sensor data into music may sound better than you expect (literally). If done right, you can have the sensor playing music 24/7 and you wouldn't even notice unless you willfully decide to listen to it. Just like the radio broadcast in supermarkets: the brain simply decides not to hear it, but it still influences you. Not sure what this phenomenon is called, but it may work out pretty well.
    Of course, a tuning orchestra would suck, but with a bit of work it can be turned into an actual playing orchestra. There has been work to encode braille into music already, which led to quite interesting results, too.
  • edited January 2014
    [screenshots]

    Here's what I've gotten so far.

    The closer you are to magnetic north, the higher the frequency of the sound emitted. With south, it's the opposite. You can control the upper and lower frequency "boundaries".
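    The mapping is roughly this (a simplified Python sketch of the logic, not the app's actual code; the default bounds are arbitrary):

```python
def heading_to_freq(heading_deg, f_lo=220.0, f_hi=880.0):
    """Highest pitch at magnetic north (0 degrees), lowest at south (180),
    falling and rising symmetrically in between. Bounds are configurable."""
    off = heading_deg % 360.0
    off = min(off, 360.0 - off)  # angular distance from north, 0..180
    return f_hi - (off / 180.0) * (f_hi - f_lo)
```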

    I've been testing using an iPhone 3GS, and I've found that the compass can be a bit wonky at times, especially when changing the orientation of the device. I'm curious to see if a device with a gyroscope is any better.

    Also, currently, I'm using only magnetic north. It would probably be a battery drain, but using the GPS compass is possible, too. Maybe as a configurable option...

    Currently, I don't have a Bluetooth headset *or* headphones, so I've just been using the built-in speaker. I'm sure my neighbors love it.

    With regards to instruments, this Javascript library may prove useful.

    @ThomasEgi: I'd be interested in hearing (heh) more about what work could be done to avoid the "tuning orchestra" effect.

    @zombiegristle: Thank you for your offer of testing, I'd love to take you up on that. I'll email you. The "Lepka" devices look neat, especially the exploitation of the headphone jack for streaming non-sound data. I'd have to see how they're set up, but I'm sure using them as an input source is possible!
  • @drew as far as my musical knowledge goes, absolute pitch is rather rare among humans. Intervals, on the other hand, are rather easy to make out. So I'd suggest playing with intervals and chords. Especially chords could be interesting. Using one MIDI instrument per sensor type would also be a logical step to take. The theory of music is rather huge, and parts of it include rule sets for writing music which could be utilized.

    and who knows.. maybe the musical theme from the movie jaws was actually the audio output of a shark detecting sensor.
  • Like I said before, I think it's all in how much fidelity you want.  Swimming in open waters and have a shark sensor?  You probably want something really obvious like the Jaws theme.  You probably don't want the theme constantly playing at quiet levels when a shark isn't nearby.

    But how often will you care about a compass?  Make it a simple quiet drum beat every time you face north.  That will give you enough detail to quickly determine where north is, then you can make 45/90 degree turns to your chosen direction.

    Clock - 4 different tones (or chords), maybe chimes or something similar.  Beats = hour, and each tone is a 15-minute interval.  E.g. 6:00 = 6 beats of A; 6:15 = 6 beats of C; 6:30 = 6 beats of E; 6:45 = 6 beats of F#; 7:00 = 7 beats of A; 7:15 = 7 beats of C; etc.
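    That clock scheme fits in a few lines of Python (note names taken straight from the example above; the function name is just illustrative):

```python
QUARTER_NOTES = ["A", "C", "E", "F#"]  # one tone per 15-minute interval

def chime_for(hour, minute):
    """Beat count gives the hour on a 12-hour clock; the tone names the
    quarter, following the A/C/E/F# scheme above."""
    beats = hour % 12 or 12            # 12-hour clock: 12 instead of 0
    note = QUARTER_NOTES[minute // 15]
    return beats, note
```

    So 6:30 comes out as six beats of E, and 7:15 as seven beats of C.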

    I'm sorry I don't have time to really delve into all the messy music theory details, but here's a piece of Python code for making various chords that will sound good.  Maybe you can take it and use the idea with some sensors: one note is a low level of detection, more notes is a higher level, etc.  (This is pulled from a file I made to take an input txt and output a MIDI file.  If you want the full file let me know and I'll send it.)
  • Notes:
    • channel is the instrument type
    • noteval is the starting note value
    • chtype determines what chord will be used
    • velval is volume
    • lenval is duration length
    • Major chords sound pretty (think happy music)
    • Minor chords sound chaotic (think the music that plays when a bad guy shows up)
    • Diminished chords are weird (think the music that plays in classic detective movies when they discover something they don't expect)
    • Making a chord in MIDI is essentially picking a base note, then making some other notes based on that first note + integers.

    #builds a major chord
    #(assumes `midi` is the MIDI-writer object from the full file mentioned above)
    def Major_Build(chan, noteval, chtype, velval, lenval):
        #normal major chord
        if chtype == 0:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 7, velocity = velval)
        #augmented major chord
        if chtype == 1:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 8, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 8, velocity = velval)
        #major 7th
        if chtype == 2:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 11, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 11, velocity = velval)
        #suspended major chord
        if chtype == 3:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 5, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 5, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 7, velocity = velval)
        #major fourth
        if chtype == 4:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 5, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 5, velocity = velval)
        #major 9th
        if chtype == 5:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 11, velocity = velval)
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 14, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 4, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 7, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 11, velocity = velval)
            midi.update_time(0)
            midi.note_off(channel = chan, note = noteval + 14, velocity = velval)
        midi.update_time(0)
       

  • #builds a minor chord
    def Minor_Build(chan, noteval, chtype, velval, lenval):
        midi.update_time(0)
        midi.note_on(channel = chan, note = noteval + 3, velocity = velval)
        midi.update_time(0)
        midi.note_on(channel = chan, note = noteval + 7, velocity = velval)
        #above: regular minor; below: minor 7th
        if chtype == 1 or chtype == 3 or chtype == 5:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 10, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 10, velocity = velval)
            midi.update_time(0)
        else:
            midi.update_time(lenval)
        midi.note_off(channel = chan, note = noteval + 3, velocity = velval)
        midi.update_time(0)
        midi.note_off(channel = chan, note = noteval + 7, velocity = velval)
        midi.update_time(0)
           

    #builds a diminished chord
    def Dim_Build(chan, noteval, chtype, velval, lenval):
        midi.update_time(0)
        midi.note_on(channel = chan, note = noteval + 3, velocity = velval)
        midi.update_time(0)
        midi.note_on(channel = chan, note = noteval + 6, velocity = velval)
        #above: regular diminished; below: diminished 7th
        if chtype == 1 or chtype == 3 or chtype == 5:
            midi.update_time(0)
            midi.note_on(channel = chan, note = noteval + 9, velocity = velval)
            midi.update_time(lenval)
            midi.note_off(channel = chan, note = noteval + 9, velocity = velval)
            midi.update_time(0)
        else:
            midi.update_time(lenval)
        midi.note_off(channel = chan, note = noteval + 3, velocity = velval)
        midi.update_time(0)
        midi.note_off(channel = chan, note = noteval + 6, velocity = velval)
        midi.update_time(0)



    Translating to this Javascript library
    midi.note_on(channel = chan, note = noteval + 9, velocity = velval)

    becomes
    MIDI.noteOn(chan, noteval + 9, velval, 0);
  • Wow! I disappear for a week at work and you cats have surged ahead of me. Thank you so much for all the interest and activity in this thread. 

    If you've not seen it, I wrote a companion piece to the documentary for BBC Future, which has been seeing loads of traffic. I guess it's a popular idea...
    BBC Future is non-UK, here's a proxy link for British readers:

    Also, the documentary has been selected for Pick Of The Week on Radio 4, which is lovely news as well.

    ::::UPDATES::::
    I was hoping that my audiologist would give me a device that would connect my hearing aids to my phone, but that looks like it isn't going to happen. The device is an interesting one from a hacking POV - it's a Bluetooth sensor that uses a radio loop worn around the neck to connect to the hearing aids. Like this. This arrangement saves the battery life of hearing aids which would be sapped very quickly by constant Bluetooth streaming. If it were open source, I wouldn't even need the smartphone, I could hang a sensor round my neck and pair it with my ear pieces...

    I'm in chats with GN Store Nord (one of the big manufacturers) and Siemens to see if either of them will partner up and get involved in the project. I hope so. In the meantime it's great to see what's possible!

    Not sure what smartphone I'll be working on. I don't even own one at the moment (long story), and I'm waiting on this project to see which will be the best one to invest in. Likely I'll end up buying a second-hand one off eBay, so jailbreaking etc. isn't a huge concern. I'm excited to try out the things you've already made - I ought to go get a smartphone now!

    On sounds driving you mad, you're absolutely right @zombiegristle, it's definitely a risk! Been chatting to an art director colleague who's done lots of sonification work in the past, she's recommended I hire a sound designer to work with. On a sheer functional basis, I would need to keep my own augment in the low frequencies, partly because my devices already ramp up the ~4kHz band, so more could be damaging. My hearing sensitivity at the low spectrum is relatively normal, and this area works fine for data that doesn't need clarity (which the brain only picks up in higher frequencies).

    A final point to note - my BBC Future piece is the start of a new column there, BEYOND HUMAN, which is all about the social, legal, and ethical issues surrounding transhumanism, augments, body mods, etc. If you want to suggest a story / point me to something interesting, feel free to drop me a line at biohack at frankswain dot com.

    Thank you for all the interest so far, it's an overwhelming response. You guys are great!
  • @Frank:
    These devices are a part of me, an extension of myself. So should health services – or even manufacturers – be allowed to control the abilities of devices that become part of a person’s body?
    That really resonated with me, and it really highlights why I think open-source biohacking is a Good Thing™.

    As far as your specific device goes: if it uses Bluetooth, there's probably a way to reverse engineer it and make it do what you want :) Keeping a determined hacker down is not an easy task. For example, I've seen some pretty neat stuff done with the Nintendo Wii controllers, which run Bluetooth, even though they're totally proprietary.

    If you end up getting an iPhone, make sure it's at least as new as the 3GS. That was the first device to have a magnetometer, and anything older won't run iOS 6.

    My app (WiseStream) allows you to configure the frequency modulation, so we (probably) won't end up destroying anything :)

    Any chance you'd be able to make it out to San Francisco at the end of the month? The conference there should give you plenty of material for your column, and is probably going to be a lot of fun, too.
  • I stumbled across this thread and made an account because I've been playing around with this a bit. I'm in ME so this isn't my field, but I've dabbled in circuit building a little.

    @frank: "Even without the phone, I believe the microphones in my HA are probably sensitive to frequencies beyond the normal range of hearing -- why not shift those into audible frequencies, if I can access the code of the HA directly? (Pitch shifting is already a normal feature of HAs.)"

    Shifting frequency down digitally is not an easy task; I wouldn't be able to help you there. I've always thought you might be able to perform an FFT so you've got a table of frequencies, generate frequencies at some fraction of those, and then recombine the signals with an inverse FFT. I tried modeling it in MATLAB once and it didn't work. Anyway, I think hearing aid circuits only shift very slightly.
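    That said, a *fixed* downward shift (rather than a proportional scale) is doable in software -- it's basically the heterodyne trick described next, done digitally. A rough numpy sketch, untested on real hearing-aid audio:

```python
import numpy as np

def shift_down(x, fs, shift_hz):
    """Shift every frequency in x down by shift_hz via the analytic signal:
    zero the negative half of the spectrum, multiply by a complex
    oscillator at -shift_hz, and keep the real part."""
    n = len(x)
    spec = np.fft.fft(x)
    spec[n // 2 + 1:] = 0         # drop negative frequencies
    spec[1:(n + 1) // 2] *= 2     # restore the energy of positive ones
    analytic = np.fft.ifft(spec)
    t = np.arange(n) / fs
    return np.real(analytic * np.exp(-2j * np.pi * shift_hz * t))
```

    Shifting a 40 kHz tone down by 39 kHz this way leaves a clean 1 kHz tone; the catch is that a fixed shift doesn't preserve harmonic ratios, which is why broadband sounds come out strange.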

    What you can do (that's cheaper and easier) is heterodyne the signal down using an analogue circuit. You just need an ideal mixer IC, an oscillator, some filters, a pre-amp and a post-amp. Any combination of these should work; just make sure you get transistors and such that are made to have low noise.

    When you mix two signals via an ideal mixer you get two outputs: one is their sum, the other their difference. Mix an ultrasonic signal with another ultrasonic signal and you get a sonic output that is their difference. 40 kHz sound mixed with 39 kHz sound gives you 1 kHz sound. Easy as that. You'll want to filter out the one that's their sum.

    Now 40 kHz sounds like 1 kHz. Same sound, different range.
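    The sum/difference arithmetic is easy to check numerically. A quick numpy sketch of an ideal mixer (no filtering; all values just illustrative):

```python
import numpy as np

fs = 400_000                        # sample rate, above the 79 kHz sum term
t = np.arange(int(0.01 * fs)) / fs  # 10 ms of signal

ultrasonic = np.sin(2 * np.pi * 40_000 * t)  # the signal we want to hear
local_osc = np.sin(2 * np.pi * 39_000 * t)   # the local oscillator
mixed = ultrasonic * local_osc               # what an ideal mixer does

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = sorted(freqs[spectrum > spectrum.max() / 2])
# two strong components remain: ~1 kHz (difference) and ~79 kHz (sum);
# a low-pass filter then keeps only the audible 1 kHz part
```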

    If you don't think your hearing aid microphone can manage it, try an electret like this one:

    http://www.ebay.com/itm/Panasonic-WM-61A-Electret-Condenser-MIC-Capsule-6PCS-/270680829845?pt=US_Pro_Audio_Parts_Accessories&hash=item3f05d5af95

    Human hearing goes up to 20 kHz max. For average people it's 10-14 kHz; for old people it's more like 7 kHz and under.

    The mic is rated up to 20 kHz, but I've seen people test them and they're perfectly capable of hanging with the high-priced units all the way up past 60 kHz or so. $20 shipped for a 6-pack. Response does fall off, but with the right filters and amplification they can do the job. They're popular among people who like to record nature sounds, especially the ones who then slow them down to get the ultrasonic noises. Careful, because they're tiny and it's sometimes tricky to solder on the leads.

    http://www.wildlife-sound.org/equipment/technote/micdesigns/

    I put together a few ultrasonic frequency mixer circuits a while ago.

    I made a video with a few tests I did with it


    This one uses a 40 kHz transducer


    You can also use a 25 or 40 kHz transducer, but you'll need to modify the amp because capacitor mics need power and transducers do not. You can open them up a little to a broader range of sounds with an inductor, but a transducer will only pick up the highest frequencies of human speech, which just sounds like hissing.


    @Frank
    "I could listen in on local wifi traffic levels, or hear geomagnetic storms above me. The idea is to build in additional layers of information that I have to instinctively decipher (so not like email alerts -- more like learning to understand parts of the everyday environment we're not normally sensitive to)."


    The Geophysical Institute at UAF has a thing that turns the northern lights' emissions into sound. I can't find a link, and I'm not sure if they even broadcast it. Proof of concept, though.

    I have no idea if this works (I've never built one myself), but what about a Tesla spirit radio? They pick up energy discharges like lightning, router noise, the 60 Hz of AC from a light bulb. You could make a portable one.

    http://www.instructables.com/id/Spooky-Tesla-Spirit-Radio/


    I know these analogue methods aren't as fancy as the coded ones, but I like to think they have some merit.
  • @Permafrost - thanks for all the suggestions. I love the idea of a spooky Tesla radio, it's exactly the kind of augment I'm looking for - 1) turning some invisible component of the environment visible (well, audible!) and 2) something rooted in real data that nonetheless has to be interpreted by me. I wonder if the components could be miniaturised to the point of being a brooch or something? My physics / electronics skills aren't much to write home about.

    More generally - hoping to have a chat this week with the HA company, see if they'll work with me on this project. Watch this space!
  • A quick update on this: have spoken to Siemens and submitted details of my idea; still chasing GN Store Nord. Hopefully can convince one of them to sign on to the project. 
  • OK, for those watching this, I've got the details on the Bluetooth-linked hearing aids: essentially *any* app on an iPhone can be made to output onto the hearing aids (yay!), and I can hear the sound alongside the normal environment.

    But...


    This is because the streaming operates just like any other Bluetooth headset, ie, ALL sound from the iPhone will go to the ears. Boo!

    Also, yes, streaming constantly will knacker the batteries of the hearing device.

    So this project just got a lot simpler. All I need is the code above written by @iexiak, and a device to use as a wireless sensor - either a spare iPhone, or an iPod touch, or whatever. The hearing devices can be connected to anything bluetooth based.

    Some of you will be thinking: "but Frank, if you're going to use a dedicated device just for sensing wireless, why not skip the Bluetooth completely and use a wifi-sensing device that has a radio loop antenna that can connect to your existing hearing aids?"  

    The answer to that is... well, maybe I will! I like the idea of the smartphone as a platform, though, because it's easier to demonstrate that any data can be turned into sound and piped to my ears, and that the wifi sensing is part of my hearing, rather than a dedicated tool.

    In other words, it's the difference between carrying around a beeping wifi sensor, and just hearing it with what I have on me daily. Still, it's a failsafe.  

    I love @Permafrost's idea for a spooky radio though. If I don't go with smartphones as a platform, I may create some kind of wearable platform into which different tools can be slotted, a bit like a socket for prosthetics. 
  • edited February 2014
    @Frank: Quick note regarding using an iOS device for Wi-Fi "sensing": it'll have to be jailbroken, because access to signal strength is only available via private APIs.
  • edited February 2014
    What you're doing is really cool, I'm interested to hear how it all turns out.

    I had another thought: if you used an ideal mixer designed to operate around 2 GHz and an antenna, you might be able to simply mix the wifi frequencies down. Essentially a shortwave radio made to listen to and determine the presence of those frequencies.

    What would be even better, though, is if instead of using a phone you could set up an interface with a spectrum analyzer. Have a slightly different tone for each range of RF emission. That would potentially be a lot more complicated, though.