Here is Ben last Monday, October 13. My husband takes him to a play group every Monday morning, and this week it met at a local pumpkin patch and petting zoo. Ben had a blast. He loves pumpkins (and anything orange).
You can see the equipment for his cochlear implant. He has an Advanced Bionics HiRes 90K implant, and his sound processor is programmed with the Fidelity 120 software. Right now he's using the Body Worn Processor (BWP), which is in a purple pouch attached to his belt. A cable runs from the processor up to the headpiece, which sits just behind and above his right ear. The headpiece attaches to his head with a magnet. (There's another magnet just under his skin, on the actual implant itself.)

There's a microphone on the headpiece. Sound enters through the mic and travels down the cable to the sound processor, where it is digitized and the signal is processed in all sorts of fancy ways. Then the signal is sent back up the cable to the headpiece, which transmits it by short-range radio to the implant under the skin. The implant then activates a sequence of electrodes along a wire that has been inserted into his cochlea. There are 16 electrodes spaced along the cochlea, each corresponding to a different frequency. When an electrode fires, it stimulates the auditory nerve at that point directly, bypassing his inoperative hair cells. By firing adjacent electrodes simultaneously in varying combinations, the processor can produce the effect of more than just those 16 frequencies. In theory, there are 120 frequencies he can perceive, and some users are able to discriminate all or most of them.

Who knows exactly what Ben is hearing right now, or how it compares to the "natural" sound that I hear. It doesn't really matter -- for him, this is natural sound.
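If you're curious where the number 120 comes from, here is a rough sketch of the "current steering" idea, in Python. The frequency range and the interpolation formula below are made-up illustrations, not Advanced Bionics' actual parameters; the basic arithmetic is that 16 electrodes give 15 adjacent pairs, and splitting the current between a pair in 8 different proportions gives 15 x 8 = 120 spectral bands.

```python
# Illustrative sketch of current steering between adjacent electrodes.
# The numbers (250-8000 Hz range, 8 steering steps per pair) are assumptions
# for the sake of the example, not the real Fidelity 120 parameters.

import numpy as np

NUM_ELECTRODES = 16          # physical contacts along the electrode array
STEPS_PER_PAIR = 8           # current-steering positions between adjacent contacts
NUM_PAIRS = NUM_ELECTRODES - 1

# 15 pairs * 8 steps = 120 perceivable spectral bands
print(NUM_PAIRS * STEPS_PER_PAIR)  # -> 120

# Assume a log-spaced map from low to high frequency along the array (illustrative).
electrode_freqs = np.geomspace(250.0, 8000.0, NUM_ELECTRODES)

def steer(pair_index: int, step: int) -> dict:
    """Split current between two adjacent electrodes so the perceived pitch
    falls somewhere between their two characteristic frequencies."""
    alpha = step / STEPS_PER_PAIR          # 0.0 up to just under 1.0
    f_low, f_high = electrode_freqs[pair_index], electrode_freqs[pair_index + 1]
    return {
        "low_electrode_weight": 1.0 - alpha,
        "high_electrode_weight": alpha,
        # Rough log interpolation between the two contacts' frequencies.
        "approx_frequency_hz": f_low * (f_high / f_low) ** alpha,
    }

# Example: the "virtual" channel halfway between electrodes 5 and 6.
print(steer(pair_index=4, step=4))
```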
The most important goal is good speech perception, and traditionally this has been the only priority for CI sound processing -- to optimize speech. At this point, all three CI manufacturers (Advanced Bionics, Cochlear, and MedEl -- the only three approved for use in the US) make devices that produce terrific speech perception in most users, so two of them (AB and MedEl) have started trying to improve the perception of music, which has traditionally been a sore spot with this technology. They do this by monkeying around with the way the signal is processed after it has been digitized. I'll say more about Ben's love of music in a future post!