Defend your smartphone against voice impersonators trying to gain access to your bank account.
It's a lot easier to talk to a smartphone than to try to type instructions on its keyboard. This is particularly true when a person is trying to log in to a device or a system: Few people would choose to type a long, complex secure password if the alternative were to just say a few words and be authenticated with their voice. But voices can be recorded, simulated or even imitated, making voice authentication vulnerable to attack.
The most common methods for securing voice-based authentication involve only ensuring that analysis of a spoken passphrase is not tampered with; they securely store the passphrase and the authorised user's voiceprint in an encrypted database. But securing a voice authentication system has to start with the sound itself.
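To make that conventional approach concrete, here is a minimal sketch in Kotlin of how an enrolled voiceprint might be encrypted before it is stored. The names (such as VoiceprintStore) and the choice of AES-GCM are illustrative assumptions, not a description of any particular product:

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec
import java.security.SecureRandom

// Hypothetical sketch: encrypt an enrolled voiceprint before storing it.
// A production system would keep the key in secure hardware, not in memory.
object VoiceprintStore {
    private const val GCM_TAG_BITS = 128
    private const val IV_BYTES = 12

    fun encrypt(voiceprint: ByteArray, key: SecretKey): Pair<ByteArray, ByteArray> {
        val iv = ByteArray(IV_BYTES).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return iv to cipher.doFinal(voiceprint)  // store both the IV and the ciphertext
    }

    fun decrypt(iv: ByteArray, ciphertext: ByteArray, key: SecretKey): ByteArray {
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return cipher.doFinal(ciphertext)
    }
}

fun main() {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val voiceprint = byteArrayOf(1, 2, 3, 4)  // stand-in for a real voiceprint vector
    val (iv, sealed) = VoiceprintStore.encrypt(voiceprint, key)
    check(VoiceprintStore.decrypt(iv, sealed, key).contentEquals(voiceprint))
}
```

Note what this protects and what it does not: encryption keeps the stored voiceprint safe at rest, but it says nothing about whether the sound arriving at the microphone is genuine.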
The easiest attack on voice authentication is impersonation: Find someone who sounds enough like the real person and get them to respond to the login prompts. Fortunately, there are automatic speaker verification systems that can detect human imitation.
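These systems typically reduce a voice sample to a numerical "voiceprint" and compare it with the enrolled user's. The Kotlin sketch below shows only that final comparison step; the voiceprint vectors are assumed to come from an upstream speech model (not shown), and the 0.8 threshold is an illustrative stand-in:

```kotlin
import kotlin.math.sqrt

// Hypothetical sketch: accept a login only if the voiceprint of the spoken
// passphrase is close enough to the enrolled user's voiceprint.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "voiceprints must have the same dimension" }
    var dot = 0f; var normA = 0f; var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Threshold of 0.8 is an assumed example value, not a calibrated setting.
fun verifySpeaker(enrolled: FloatArray, login: FloatArray, threshold: Float = 0.8f): Boolean =
    cosineSimilarity(enrolled, login) >= threshold
```

A human imitator's voiceprint usually lands measurably far from the real user's, which is why this kind of comparison can catch impersonation.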
However, those systems can't detect more advanced machine-based attacks, in which an attacker uses a computer and a speaker to simulate or play back recordings of a person's voice.
If someone records your voice, they can use that recording to create a computer model that can generate any words in your voice. The consequences, from impersonating you with your friends to dipping into your bank account, are terrifying. The research my colleagues and I are doing uses fundamental properties of audio speakers, and smartphones' own sensors, to defeat these computer-assisted attacks.
How speakers work
Conventional speakers contain magnets, which vibrate back and forth according to fluctuations of electrical or digital signals, converting them into sound waves in the air.
Putting a speaker up against the microphone of a smartphone, for example, means moving a magnet very close to the smartphone. And most smartphones contain a magnetometer, an electronic chip that can detect magnetic fields. (It comes in handy when using a compass or navigation app, for example.)
If the smartphone detects a magnetic field nearby during voice authentication, that can be an indicator that the words are coming out of a loudspeaker rather than a live human mouth.
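As a rough sketch of how an app might use this, the Kotlin class below (a hypothetical ReplayAttackGuard) watches Android's magnetometer while a passphrase is spoken and flags a suspiciously strong field. The 100 microtesla threshold is an illustrative assumption, since Earth's background field is roughly 25 to 65 microtesla:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

// Hypothetical sketch: monitor the magnetometer while the user speaks a
// passphrase. A strong magnetic field near the microphone suggests a
// loudspeaker's magnet, so the login attempt can be flagged.
class ReplayAttackGuard(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val magnetometer: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD)

    var speakerSuspected = false
        private set

    fun startMonitoring() {
        speakerSuspected = false
        magnetometer?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stopMonitoring() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent?) {
        val v = event?.values ?: return  // field strength in microtesla per axis
        val magnitude = sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
        // 100 uT is an assumed cutoff, well above Earth's ambient field.
        if (magnitude > 100f) speakerSuspected = true
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* not needed here */ }
}
```

During login, startMonitoring() would run while the passphrase is spoken; if speakerSuspected ends up true, the attempt can be rejected or escalated to another authentication factor.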
Attack tools are becoming more sophisticated, so voice authentication needs defenses that go beyond firewalls and encrypted databases to the physical act of speaking itself.