AI Can Now Steal Your Passwords With Almost 100% Accuracy — Here’s How

It’s safe to say AI can now steal passwords with a method few would have thought of: listening. This process, in which an AI listens to keystrokes as people type and reconstructs their passwords with above 90% accuracy, was described by researchers in a paper posted to arXiv, the preprint server run by Cornell University.

The research was conducted by training an AI model to learn the sound signature of each keystroke, then testing it with a smartphone placed nearby. With the phone’s microphone listening for keys being pressed on a MacBook Pro, the reproduction was spot on at 95%, the highest accuracy the researchers achieved without the use of an LLM.

The researchers then tested the accuracy over a Zoom call, with the keystrokes recorded by the laptop’s internal microphone in the middle of a meeting. The result was 93% accuracy, while Skype came out at 91.7%.

RELATED: How Thomson Reuters Is Leveraging AI To Enhance Productivity, Rather Than Replace Jobs

The loudness of the keyboard, however, had almost nothing to do with the AI’s precision, since the model was trained to detect the timing, waveform, and intensity of each keystroke. In other words, the AI notices how fast a user types and the delays between presses of particular keys.
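For the curious, the timing-and-intensity idea can be sketched in a few lines of code. This is a toy illustration, not the researchers’ actual model (which used deep learning on audio spectrograms); the function names and thresholds here are hypothetical:

```python
import numpy as np

def detect_keystrokes(signal, rate, threshold=0.1, min_gap=0.05):
    """Find where keystroke bursts begin by thresholding amplitude.

    Toy stand-in for the paper's deep-learning pipeline: a sample
    counts as a new keystroke if it exceeds `threshold` after at
    least `min_gap` seconds of quiet.
    """
    gap = int(min_gap * rate)
    onsets = []
    last = -gap
    for i in np.flatnonzero(np.abs(signal) > threshold):
        if i - last >= gap:
            onsets.append(int(i))
        last = i
    return onsets

def keystroke_features(signal, rate, onset, window=0.02):
    """Crude per-keystroke features: when it happened and how hard it hit."""
    seg = signal[onset:onset + int(window * rate)]
    return {
        "time_s": onset / rate,               # timing between keys
        "peak": float(np.max(np.abs(seg))),   # intensity
        "energy": float(np.sum(seg ** 2)),    # waveform energy
    }
```

A real attack would feed features like these (or full spectrograms) into a trained classifier to map each burst to a specific key.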

Out in the world, this method could show up as malware that embeds itself on a smartphone, uses the microphone to gather keystroke audio from users nearby, and feeds the recordings to an AI. This could mean a massive security breach, with a lot of users hacked in a very short time.

All hope is not lost, though, as there are still ways to protect oneself from this form of cyberattack. Instead of typing passwords, biometric features, the likes of Touch ID, Face ID, and Windows Hello on smartphones and laptops, would seem like the way to go. A secure password manager also lets a user set completely random passwords for all their different accounts and eliminates the need to keep reentering them.
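On the password-manager point, generating a truly random password is simple with Python’s standard library; here is a minimal sketch (the function name is my own, not any particular manager’s API):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Generate a password from the OS's cryptographically secure RNG.

    `secrets` is Python's stdlib module for security-sensitive
    randomness; the plain `random` module would NOT be safe here.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because the manager pastes the result rather than having the user type it, there are no keystrokes for a listening AI to capture.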

The other problem here is that this method is only one of many that have surfaced with the emergence of AI tools, particularly ChatGPT. Not long ago, the FBI raised concerns about cybercriminals and scammers using ChatGPT to create malicious code and launch cyberattacks.

What Are AI-Driven Cyberattacks?

Artificial intelligence walks a tightrope in the world of cyber warfare. On the good side, it steps up as the defender, spotting and reducing digital threats. It’s a whiz at going through heaps of data on the fly, spotting trends, learning from past encounters, and foretelling possible dangers before they rear their heads.

On the bad side, there are cybercriminals working the AI angle too. These AI-driven attacks can be trickier to unmask than the old-school hacks cybersecurity experts have come to expect. They’re clever enough to dissect all possible angles of attack, cherry-pick the prime ones, nail the mission, and dodge the radar, all while morphing in real time.

RELATED: 4 Ways Generative AI Makes Founders More Interesting To Journalists

What’s more, these AI-powered hits run smoothly, allowing attackers to broaden their scope and hit with sniper-like precision. According to a fresh study by Forrester, a whopping 88% of cyber defenders are bracing for AI-led raids to go mainstream. It’s no longer a question of ‘if,’ but ‘when.’

As much as everybody loves AI and sees its benefits, cybercriminals see its potential too, and they’re sure to use it to their advantage. With AI’s ability to learn and evolve from previous encounters, AI-backed cyberattacks will keep getting more sophisticated with time, making traditional security methods obsolete.

That being said, cybercrooks aren’t the only ones who get to use AI as a means to their ends. AI-powered cybersecurity is also next level: it can detect threats immediately, identify what kind of threat it’s facing, respond to that particular threat, and, most importantly, learn and evolve from the encounter. Talk about fighting AI with AI.

This doesn’t mean users get to do away with all the traditional methods they know and leave it all to AI. Safeguards such as common sense (first and foremost), multi-factor authentication, biometrics, and a password manager can go a long way in protecting a user from avoidable attacks.
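As an aside for the technically inclined, the six-digit codes from authenticator apps follow a published standard, TOTP (RFC 6238). Here is a minimal stdlib-only sketch, assuming the common SHA-1, 30-second, 6-digit parameters:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password.

    HMAC-SHA1 over the current 30-second counter; the standard
    "dynamic truncation" step then picks 31 bits of the digest
    to turn into decimal digits.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds, even a perfectly eavesdropped one goes stale almost immediately.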

NEXT: This Week In AI: Experiments, Retirement And Extinction
