Former hacker and cybersecurity expert Alissa Knight talks about her latest research on vulnerabilities in data privacy for the telemedicine and health app space.
- The telemedicine and mobile health space can take simple steps to better protect user data from attacks.
- Health care data is extremely valuable on the dark web, because it contains a wealth of information about an individual and can’t be replaced.
- “I always believed that what could be made by humans can be broken by humans, that we’re not infallible and that anything we make is going to have a way around it,” says Alissa Knight, a former hacker and cybersecurity expert and founder of Brier & Thorn.
Did you know your health care data is worth more money on the dark web than your credit card information?
In a recent episode of Tech on Reg, I interviewed Alissa Knight, a self-described “recovering hacker” of 20 years and a serial entrepreneur. During the show, Alissa shared what she uncovered while researching – hacking with permission – 30 health and mobile apps, and we discussed her work investigating vulnerabilities in telemedicine services and the mobile health care space.
The COVID-19 pandemic has highlighted the importance of being able to contact health care providers remotely. Unfortunately, mHealth apps are still riddled with vulnerabilities.
Users want to know their data privacy and security are ensured, and Alissa says the industry needs to take it more seriously and focus on specific code training for developers to avoid simple missteps.
From arrested teen to serial entrepreneur
Alissa became a hacker at age 13. By 14, she was arrested for hacking a government network.
“I always believed that what could be made by humans can be broken by humans, that we’re not infallible and that anything we make is going to have a way around it,” she says.
The charges were eventually dropped, and she was later recruited to work for the U.S. intelligence community in cyber warfare.
By 27, Alissa had started and sold two cybersecurity companies. She then moved on to her third, a managed security service provider (MSSP) called Brier & Thorn. Separately, she works as a thought leader and cybersecurity influencer through Knight Ink, creating content and strategy for category leaders and challenger brands in cybersecurity.
“I hack something and show the efficacy of their product by showing how their product works, and why the CSO [chief security officer], the buyer, needs it, through the lens of an adversary,” she says.
Why health care data?
“Believe it or not, your PHI, or protected health care information, is worth more on the dark web than your credit card,” says Alissa. “It’s worth so much more, because it contains way more data about you.”
And unlike credit cards, which your bank can cancel and replace, your protected health information (PHI) or Social Security number can’t be so easily erased.
In research she did for the API security company Critical Blue, Alissa scanned the dark web and found this information can be worth up to 10 times as much as a credit card number. Beyond simply selling data, hackers can also hit hospital networks with ransomware, demanding big payouts to release the data they’ve locked up.
This can be life-threatening. In a recent case in Germany, a woman died after a ransomware attack left a hospital unable to access her information and treat her in time.
Let’s talk HIPAA and mHealth
So, are mHealth apps subject to HIPAA? You betcha. mHealth app developers must understand what types of information and experiences fall under the Health Insurance Portability and Accountability Act (HIPAA) and what it means to be HIPAA compliant. HIPAA sets the standard for protecting sensitive patient data. First and foremost, you need to determine whether your mHealth app or wearable device is going to collect, store, or transmit protected health information (PHI). Essentially, PHI is any information in a medical record that can be used to identify an individual. That may include medical records, billing information, health insurance information, and any individually identifiable health information.
If your mHealth app is going to track, transmit, or store PHI, it needs to be HIPAA compliant. The same is true if the application will exchange information with a doctor’s office or other covered entities.
Hacking telemedicine apps to find the vulnerabilities
Alissa recently hacked 30 health and mobile apps for various companies in search of serious vulnerabilities in their systems. What she found was not reassuring.
“There are always going to be findings, but they shouldn’t be as bad as they are,” she says. “They’re pretty bad.”
Most of these companies came to her because they wanted to be part of her research and to show they take cybersecurity seriously. The flaw she found most often was Broken Object Level Authorization (BOLA), an API vulnerability in which an endpoint checks that a caller is logged in but never checks whether the caller owns the object it is requesting.
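To make the flaw concrete, here is a minimal sketch of a BOLA-safe endpoint in Python with Flask. The in-memory `records` store and the `current_user_id()` helper are hypothetical stand-ins for a real database and a verified auth token; nothing here comes from the apps in Alissa’s research.

```python
# Minimal BOLA sketch (hypothetical names, not from the researched apps).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy in-memory store mapping record id -> owner and data.
records = {
    17: {"owner_id": "patient-42", "notes": "bloodwork results"},
}

def current_user_id() -> str:
    # In a real app this comes from a validated session or OAuth token,
    # never from a client-supplied parameter.
    return "patient-42"

@app.get("/records/<int:record_id>")
def get_record(record_id: int):
    record = records.get(record_id)
    if record is None:
        abort(404)
    # The BOLA check: confirm the authenticated caller owns this object.
    # Omitting this check lets any logged-in user read any record by id.
    if record["owner_id"] != current_user_id():
        abort(403)
    return jsonify({"notes": record["notes"]})
```

The ownership check is the entire fix: authentication alone only proves the caller is some user, not that the requested record is theirs.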
Alissa was also able to find API secrets that were displayed in the code of the apps, something attackers could easily access.
“I reverse-engineered the mobile apps and found hard-coded API keys and tokens, bad stuff, found all these API secrets and all these credentials and stuff hard-coded in the apps. A big no-no,” she says.
What can companies do to protect your data?
Hard-coding credentials was a common practice 20 years ago, says Alissa. Today, it’s an unacceptable security risk.
Coders still embed sensitive information directly in an app’s shipped code, where attackers can extract it, instead of protecting it with measures like encryption or runtime retrieval.
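As a simple illustration, here is a before-and-after sketch in Python. The key value and the `HEALTH_API_KEY` environment variable are hypothetical; mobile apps would more typically use the platform keystore or fetch short-lived tokens from a backend after the user authenticates.

```python
import os

# Bad: the secret ships inside the binary, where anyone who decompiles
# the app (with strings, jadx, or similar tools) can read it directly.
API_KEY = "sk_live_hypothetical_example_key"  # never do this

# Better: resolve the secret at runtime from the environment, a secrets
# manager, or a token exchange with your backend.
def get_api_key() -> str:
    key = os.environ.get("HEALTH_API_KEY")
    if key is None:
        raise RuntimeError("HEALTH_API_KEY is not configured")
    return key
```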
The responsibility for implementing these methods of data protection lies with the health companies and app developers. But in many cases, she says, coders don’t receive secure code training — or security isn’t written at the same time as the code itself.
That’s why companies hire Alissa: to find the bugs and vulnerabilities in their products and prove how secure they are. However, she says, it’s not done enough.
While companies are willing to pay to send other employees to security awareness training, they aren’t doing the same with their developers to train them in secure code.
For better security, Alissa suggests companies should:
- Send developers to secure code training.
- Pen test their apps.
- Do static and dynamic code analysis.
- Compile their apps with security SDKs that protect them at runtime.
- Secure their back-end API endpoints.
- Implement certificate pinning safely (see the sketch after this list).
“There’s just a lot that they’re not doing. And a lot of it is low-hanging fruit,” she says.
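As one illustration of the last item on that list, here is a minimal certificate-pinning sketch using only Python’s standard library. The `EXPECTED_FINGERPRINT` value is a hypothetical placeholder you would compute from your real server certificate; production mobile apps usually pin through platform facilities (such as OkHttp’s CertificatePinner on Android) rather than hand-rolled checks.

```python
import hashlib
import socket
import ssl

# SHA-256 fingerprint of the certificate we expect the server to present.
# Hypothetical placeholder: compute the real value from your own cert.
EXPECTED_FINGERPRINT = "3f:2a:placeholder:fingerprint"

def fetch_cert_fingerprint(host: str, port: int = 443) -> str:
    """Open a TLS connection and return the peer cert's SHA-256 fingerprint."""
    ctx = ssl.create_default_context()  # normal chain + hostname validation
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    digest = hashlib.sha256(der_cert).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def check_pin(host: str) -> None:
    """Raise if the server's certificate doesn't match the pinned one."""
    actual = fetch_cert_fingerprint(host)
    if actual != EXPECTED_FINGERPRINT:
        raise ssl.SSLError(f"certificate pin mismatch for {host}: {actual}")
```

Pinning “safely” also means planning for certificate rotation – pinning a backup key or shipping updated pins before the certificate changes – so a legitimate renewal doesn’t lock users out of the app.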
This article is based on an episode of Tech on Reg, a podcast that explores all things at the intersection of law, technology, and highly regulated industries. Be sure to subscribe for future episodes.