A common information security axiom is that hardening anything is a compromise between usability and security: more usability means less security, and more security means less usability.
An obvious example is usernames and passwords for operating system accounts and online services. Ordinary people have had to remember passwords for online services since use of the web exploded in the 1990s. By the first decade of the 21st century, most adults in the developed world had to juggle multiple sets of authentication credentials for social networking sites, online banking, and ecommerce sites such as Amazon.
Easily remembered passwords are more usable but much easier to crack, as they're typically a word or phrase plus a series of numbers tied to important dates. More complex passwords take far more time and effort to crack, but they're harder to remember and therefore less usable. Ideally, end users should use a different password for each service they authenticate with, and change those passwords every few months.
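The gap between "easy to remember" and "hard to crack" comes down to keyspace size. Here's a minimal back-of-the-envelope sketch; the guess rate is an assumption I've picked for illustration, not a measured figure, and real attackers use dictionaries rather than brute force, so a word-plus-digits password falls even faster than this model suggests:

```python
def crack_time_seconds(alphabet_size, length, guesses_per_second=1e10):
    """Worst-case time to exhaust a password keyspace at an
    assumed offline guessing rate (10 billion guesses/second
    is a rough, illustrative figure for cracking hardware)."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

# An 8-character all-lowercase password (26 possible symbols)...
simple = crack_time_seconds(26, 8)

# ...versus 12 characters drawn from ~95 printable ASCII symbols.
complex_pw = crack_time_seconds(95, 12)

print(f"simple:  {simple:,.0f} seconds to exhaust")
print(f"complex: {complex_pw / 31_557_600:,.0f} years to exhaust")
```

The point isn't the exact numbers; it's that keyspace grows exponentially with both alphabet size and length, which is exactly why the "more secure" option is the less memorable one.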
However, that's challenging even for a cybersecurity professional. Password managers make all of that much easier by letting the user unlock all of their other credentials with a single set of credentials. But that also creates a single point of attack for everything the user does online.
But lately, I can think of a few cases in which usability and security are friends rather than foes.
Earlier this year, I wrote about a few security vulnerabilities that were due to bad UX design. Good UX design makes applications easier for people to use. It can also make it easier for end users to configure applications securely.
There was a flaw in the ASUSWRT firmware GUI, which is used by multiple models of ASUS home routers. Even if “Enable Web Access from WAN” was set to “no,” an attacker could still gain remote access to the router over the public Internet: the firmware's iptables rules didn't actually enforce the setting the GUI presented. The vulnerability has since been patched, thank goodness.
Microsoft Office macro malware was especially common in the late 1990s and early 2000s, and it still exists to some extent today. One of the measures that made that sort of attack less common was a popup Microsoft added to later versions of their Office suite. It said, “The document you are opening contains macros or customizations. Some macros may contain viruses that could harm your computer.”
Something as simple as a little warning in the GUI reminded users to be more cautious when opening documents: they should only open documents from parties they trust. But by Office 2010, Microsoft had replaced that warning with the “SECURITY WARNING. Macros have been disabled” message in the notifications bar.
There are multiple problems with that. The first is that most macros are safe, and many Office users have to run them to do their work. Secondly, a notifications bar message is easier to ignore than a popup. Thirdly, the wording was confusing to many users: disabled by what, and what were they supposed to do about it? Macro malware attacks became more common once again.
By Office 2013, a button would appear that allowed users to “Enable Content,” which could include macros. The vague warning made the associated risk obscure. Even I would be tempted to click the button if I was using an Office program. Enable my content, come on! Don't keep content from me while I create more content!
Microsoft could probably make usability improvements in their GUI design that'd show users how to use their software in a more secure fashion.
On March 9th, 2017, during Google Cloud Next, Google announced that Google Cloud Platform users can now authenticate with its implementation of FIDO U2F (universal second factor) technology in place of OTP 2FA (one-time password two-factor authentication).
OTP implementations usually use SMS, or a specific app, such as Google Authenticator. Sometimes OTP is implemented by displaying a code on a key fob device. Because OTPs are designed to time out, they're a lot more secure than conventional passwords against replay attacks.
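Apps like Google Authenticator generate those timing-out codes with the standard TOTP algorithm (RFC 6238), which derives a short code from a shared secret and the current 30-second time window. Here's a minimal sketch using only the Python standard library; the function name and parameters are my own:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1).

    secret_b32: the base32-encoded shared secret (what the QR
    code you scan into an authenticator app encodes).
    at: Unix timestamp to generate the code for (default: now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second windows
    # since the Unix epoch -- this is why codes expire.
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset
    # taken from the digest's low nibble, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the counter changes every 30 seconds, a captured code is only useful within its window, which is what blunts simple replay of a stolen password.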
Unfortunately, OTP is vulnerable to man-in-the-middle attacks and phishing. The user and the enterprise need to trust the security of the telecommunications infrastructure used, as well as endpoint security. For example, there's mobile malware that facilitates man-in-the-middle attacks on OTPs sent via both SMS and proprietary apps.
Plus, OTP 2FA requires a user to manually input a code such as “7BT063.” There's not only a man-in-the-middle vulnerability, but also a user error vulnerability.
There is no password in the U2F authentication process. OTP is still somewhat vulnerable to replay attacks; U2F isn't at all. There is no “something you know” involved in U2F, only “something you have.”
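The reason U2F resists replay is its challenge-response design: the server issues a fresh random challenge for every login, and the token signs that challenge bound to the site's origin. Here's a toy model of the flow. It's a simplification with illustrative names: I use an HMAC over a shared key as a stand-in for the per-site ECDSA signature a real U2F token produces (a real relying party holds only the token's public key, never a shared secret):

```python
import hashlib
import hmac
import os

# Stand-in for the token's per-site private key. In real U2F this
# is an ECDSA key that never leaves the hardware token.
DEVICE_KEY = os.urandom(32)

def server_new_challenge():
    """The relying party generates a fresh random challenge per login."""
    return os.urandom(32)

def device_sign(challenge, origin):
    """The token signs the challenge bound to the requesting origin,
    which is what defeats phishing sites at lookalike domains."""
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge, origin, signature):
    expected = hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# A fresh login succeeds...
c1 = server_new_challenge()
sig = device_sign(c1, "https://example.com")
assert server_verify(c1, "https://example.com", sig)

# ...but replaying that captured response against the next login's
# challenge fails, and so does a response phished at another origin.
assert not server_verify(server_new_challenge(), "https://example.com", sig)
assert not server_verify(c1, "https://evil.example", sig)
```

Because every session gets a new challenge, an intercepted response is worthless the moment it's been used, with no code for the user to type and no window for an attacker to race.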
It's easier for users because they don't ever need to type any sort of password or PIN. Also, one device, such as a user's key fob or fingerprints, can be used to authenticate across multiple sessions with multiple services on multiple endpoints. A new OTP must be generated for each new session on an endpoint. And if the user doesn't input their OTP within its activation period, the user needs to generate a new OTP.
U2F is also simpler for the enterprise because they don't need to support a backend to generate all of those OTPs for all of their users whenever they need them.
In April 2013, Google joined the FIDO Alliance, which comprises many different technology vendors. Later that year, there was already criticism in the tech industry that Google's FIDO U2F implementations missed the opportunity to use push authentication methods for better usability; Google focused its development efforts on a method that requires a FIDO USB device.
Entrust's Chris Taylor wrote, “First, this is a USB device. Many users may feel leery of sticking a USB device into their PC. What else is on this dongle? Also, what happens if I’m using a device that doesn’t support a USB port like my iPad or any other tablet? Or, perhaps, I don’t have any free USB ports at all.
This is a strong authentication solution that verifies digital identities via multiple factors: something “I know” and something “I have.” I have an issue with the “I have” — a USB token. Great, something else I need to put on my keychain with all of the other gadgets, keys and fobs that are forced upon consumers. How many users are going to misplace, drop, or leave the USB somewhere?”
It's worth noting that Google announced their fobless Google Prompt authentication app for mobile devices in 2016. But Google's implementation of FIDO U2F for Google Cloud Platform requires a dedicated authentication device, usually in the form of a USB stick that stays in a developer's client machine.
Conventional wisdom about the relationship between usability and security in information security will continue to be challenged in interesting ways as time goes on.
About Kim Crawley
Kimberly Crawley spent years working in consumer tech support. Malware-related tickets intrigued her, and her knowledge grew from fixing malware problems on thousands of client PCs. By 2011, she was writing study material for the InfoSec Institute’s CISSP and CEH certification exam preparation programs. She’s since contributed articles on information security topics to CIO, CSO, Computerworld, SC Magazine, and 2600 Magazine. Her first solo-developed PC game, Hackers Versus Banksters, was featured at the Toronto Comic Arts Festival in May 2016. She now writes for Tripwire, Alienvault, Cylance, and CCSI’s corporate blogs.
The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Cylance.