Biometric authentication is no longer something we see only in science fiction movies. The types of biometric authentication in use today range from simple fingerprints to iris scanning, and even our heartbeats.
Biometric authentication is a digital identification method that uses an individual's physical traits or behavioural habits to unlock devices or access personal information. Biometric technologies grant access based on unique and immutable characteristics of ourselves and are often the stuff our favourite spy movies are made of.
While biometric identification was invented mostly to replace the inherent weaknesses of passwords, biometrics such as fingerprints, voice authentication and facial recognition may not be as infallible as you might think. Fingerprints can still be stolen, as security researcher Tsutomu Matsumoto of Yokohama National University demonstrated with a dummy fingerprint made from molded gelatin that fooled sensors 80% of the time. What’s more, once a fingerprint is stolen, it is stolen forever. Voice authentication remains subject to environmental factors and background noise, on top of the potential danger of our voiceprints being recorded and stored on a remote system. Even facial recognition (read our post about Amazon’s patented ‘selfie pay’ system) depends on people’s faces, which can change radically with age, weight, medical conditions, and injury.
However, researchers at the Perceptual User Interfaces group have now come up with a new technique for biometric authentication: skull echoes. While it remains to be seen whether the sound of our skulls could truly replace the strongest of our passwords, it is one of the most interesting forms of biometric authentication yet.
The three researchers, all based in Germany (Stefan Schneegass of the University of Stuttgart, Youssef Oualil of Saarland University, and Andreas Bulling of the Max Planck Institute for Informatics), stated in their paper published by the ACM:
“We present SkullConduct, a biometric system that uses bone conduction of sound through the user’s skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user’s skull.”
Source: Perceptual User Interfaces
SkullConduct is a new authentication system that uses a bone conduction speaker and a microphone attached to the user’s head for identification. The researchers built their prototype on a modified Google Glass, but the SkullConduct system could be built into other smart glasses or VR headsets in the future, so that a user is logged into their account as soon as they put the device on. In due course, the technology could even be incorporated into our smartphones; simply holding a phone to our head to answer a call would be enough to identify the user.
The headset uses two bone-conducting plates that sit on the cheekbones, next to the ears. When worn, it plays a one-second-long ultrasonic audio clip through its speaker that is imperceptible to the user. The microphone then picks up and captures that sound after it has passed through the user’s skull, and the system analyzes the resulting frequency response to identify the wearer.
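Conceptually, the "characteristic frequency response" the researchers describe can be thought of as the ratio between the spectrum of the recorded sound and the spectrum of the played sound. The sketch below is a minimal illustration of that idea in NumPy; the function name and the details are hypothetical, as the paper's actual signal processing is not described in this article:

```python
import numpy as np

def frequency_response(played: np.ndarray, recorded: np.ndarray) -> np.ndarray:
    """Estimate the magnitude frequency response of the path
    speaker -> skull -> microphone from one play/record pair.

    Both arrays are audio samples of equal length."""
    played_spectrum = np.fft.rfft(played)
    recorded_spectrum = np.fft.rfft(recorded)
    # Small epsilon avoids division by zero in frequency bins
    # where the test signal carries no energy.
    eps = 1e-12
    return np.abs(recorded_spectrum) / (np.abs(played_spectrum) + eps)
```

In this simplified view, bins where the skull attenuates the signal come out below 1 and bins where it resonates come out above 1, producing a per-user "fingerprint" over frequency.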
“If recorded with a microphone, the changes in the audio signal reflect the specific characteristics of the user’s head.”
Source: Perceptual User Interfaces
Because essentially every skull is different, when the device plays a specific sound pattern into a user’s head, the way the sound bounces around and reverberates inside produces a unique sound in return. The result is a signal that can be tied to a specific individual. Once set up, the device is said to recognize that pattern again when the headset is re-worn, creating a new kind of password.
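Tying a measured signal to a specific individual amounts to comparing it against responses recorded at set-up time. A minimal sketch of that enrolment-and-matching step, assuming a NumPy feature vector per user and simple cosine similarity (the function names, the similarity measure, and the threshold are all illustrative assumptions, not the researchers' method):

```python
import numpy as np

def enroll(templates: dict, user_id: str, response: np.ndarray) -> None:
    """Store a user's measured frequency response as their template,
    normalized so later comparisons use cosine similarity."""
    templates[user_id] = response / np.linalg.norm(response)

def identify(templates: dict, response: np.ndarray, threshold: float = 0.9):
    """Return the enrolled user whose template best matches the probe,
    or None if no template is similar enough to accept."""
    probe = response / np.linalg.norm(response)
    best_user, best_sim = None, threshold
    for user_id, template in templates.items():
        sim = float(np.dot(probe, template))  # cosine similarity
        if sim > best_sim:
            best_user, best_sim = user_id, sim
    return best_user
```

A rejection threshold matters here: without one, the system would always pick the nearest enrolled user, even for a stranger wearing the headset for the first time.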
“Since the structure of the human head includes different parts such as the skull, tissues, cartilage, and fluids and the composition of these parts and their location differ between users, the modification of the sound wave differs between users as well.”
The controlled experiments were carried out with 10 test subjects, who were free to put on and take off the modified SkullConduct Google Glass as they pleased. In the tests, the SkullConduct device correctly identified the assigned participants 97% of the time. While the results are not yet at 100%, the technology is still very much in the research stage and will likely be refined over time.
However, the system still has a couple of problems. Audio pickup suffers when there are large amounts of interfering background noise, a factor not considered in the current prototype. Other factors, such as a user’s weight gain, could change the audio pattern as it travels through the skull, which might leave the user locked out of their device.
With that said, while SkullConduct isn’t quite ready for market as a stand-alone means of user authentication, it could make an interesting solution in the future when combined with passwords or other authentication methods as an extra layer of security.
Sources:
 Perceptual User Interfaces
 The Next Web
 Science Alert
 Popular Science
 New Scientist