Multi-factor authentication is the process of identifying an online user by validating two or more claims presented by the user, each from a different category of factors. Factor categories include knowledge (something you know), possession (something you have) and inherence (something you are).
When you talk with information security people, they tend to love their TLAs (three-letter acronyms). It’s like a secret language. If you’re not privy to the inner workings, you’ll soon start paying attention to more pertinent things than the fact that your sites need to upgrade to EV SSL certificates, the admins definitely need better SSH key management, and your IAM solution lacks OAuth and UDF capabilities. Still with me?
Let me add one more acronym to the list – MFA. Multi-factor authentication is something every owner of a web-facing application should be aware of. There’s plenty of regulatory pressure in many verticals pushing online services and applications towards stronger alternatives to the password for user authentication. The owners of these applications also need to understand how to build usable multi-factor authentication schemes, as usability can become a competitive advantage.
Factoring the Factors
The factors we talk about in multi-factor authentication are based on your memory (1), what you possess (2), and what you are (3). Let’s examine these factors a bit more.
Once upon a time there was a computer system and people thought that it would be good to store confidential information on the system. This information needed protection and the password was born. Password authentication, unfortunately, is known to all of us.
Passwords and other secrets that rely on your memory (PIN codes, passphrases) constitute one factor: a secret known only to you. Another widely used memory-based secret is the security question, to which only you should know the answer. The Q&A is much harder to implement well than a password: answers that are easy to remember tend to be easy for a third party to discover, while harder answers are more difficult for the user to remember. The reason I use memory to describe the requirement, rather than knowledge, is the fact that people forget. The secret is only usable if you remember it, just as knowing your own wedding date is not that useful if you forget the anniversary coming up next week.
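To make the knowledge factor a little more concrete, here is a minimal Python sketch (my illustration, not taken from any particular product) of how a memory-based secret is typically stored: only a salted, slow hash is kept, so even a breach of the credential database does not directly reveal the secret.

```python
import hashlib
import hmac
import os

def hash_secret(secret, salt=None):
    """Derive a salted hash of a memory-based secret (password, PIN, Q&A answer)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 200_000)
    return salt, digest

def verify_secret(secret, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_secret(secret, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_secret("correct horse battery staple")
print(verify_secret("correct horse battery staple", salt, stored))  # True
print(verify_secret("wrong guess", salt, stored))                   # False
```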
Forgetting a password might prevent you from entering an application, but forgetting the anniversary prevents you from entering your house – at least for a short while. Fortunately, both cases have recovery options. I’ll leave you to evaluate which one is more complex and time consuming.
As time passed, people realized that the information stored in their computer systems was sometimes very confidential in nature. They realized that better security was needed: something that didn’t rely on the user’s memory, was harder to hand over to someone else intentionally or by mistake, and couldn’t be discovered by breaching a database of stored secrets (passwords). Another factor was born. This second factor was taken out of the computer system itself: it meant that the user was in possession of something. Something they could carry with them – and since it’s hard to carry a mainframe around, the new factor was small. The late 1990s saw the introduction of this second factor in the form of PKI smart cards, USB tokens, and one-time password (OTP) lists or tokens. Today, a mobile phone is a prime example of this second factor.
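As a rough sketch of how possession is proven in practice, the snippet below (in Python; the shared secret and parameters are purely illustrative) generates a time-based one-time password in the style of RFC 6238, the mechanism behind most OTP tokens and authenticator apps. Presenting the current code demonstrates possession of the device holding the shared secret, without the secret itself ever crossing the wire.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32, time_step=30, digits=6):
    """Time-based one-time password: HMAC over the current 30-second counter."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time() // time_step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret is provisioned to the server and the user's phone (e.g. via a QR code).
print(totp("JBSWY3DPEHPK3PXP"))
```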
Time marched on and criminals found new ways to breach some of these second-factor systems. The amount of confidential information in applications and databases grew, and more people needed access to that information. The clever defenders of our data discovered that the user can also act as a factor. Not their memory, not something they have, but something they are. The sci-fi style access controls of Mission Impossible and Aliens: Resurrection relied on physical attributes of the user – a fingerprint, or the chemical content of the breath. Unfortunately, the Star Trek holodeck is not here yet, but biometric factors are. Beyond physical attributes that can be scanned, newer ways of constituting the inherence factor include behavior: the way we move a mouse or type can be measured continuously and compared to previously recorded data.
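How such behavioral comparison might work is sketched below in deliberately naive Python: an enrolled typing rhythm is reduced to an average inter-key interval, and new samples are checked against it. Real products model far richer features (per-key-pair timings, pressure, mouse curves), so treat this only as an illustration of the idea.

```python
from statistics import mean

def typing_profile(keystroke_intervals):
    """Reduce a list of inter-key intervals (seconds) to a simple average profile."""
    return mean(keystroke_intervals)

def matches_profile(observed_intervals, enrolled_mean, tolerance=0.05):
    """Naive check: does the observed rhythm sit close enough to the enrolled one?"""
    return abs(mean(observed_intervals) - enrolled_mean) <= tolerance

enrolled = typing_profile([0.21, 0.19, 0.23, 0.20, 0.22])
print(matches_profile([0.20, 0.22, 0.21, 0.19, 0.23], enrolled))  # True  - same user, probably
print(matches_profile([0.45, 0.50, 0.48, 0.47, 0.52], enrolled))  # False - rhythm doesn't match
```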
Adding the ‘Multi’ Factors
When you design a multi-factor authentication scheme, you have to combine different factors. An authentication scheme using different implementations of the same factor does not constitute multi-factor authentication: a password combined with a Q&A is not a multi-factor method. A subset of multi-factor authentication is two-factor authentication (2FA – again with the TLAs), which combines two of the factors. Another widely used term is ‘strong authentication’. All of these terms are ambiguous and leave room for interpretation, except that a multi-factor method uses more than one factor. A good example is that right now the European Banking Authority is figuring out what exactly the strong authentication requirement in the Payment Services Directive 2 means.
So it’s all about combining factors. Within each factor category we have several different implementations. A good multi-factor authentication method combines two or more factors in a convenient way. When you consider implementing a multi-factor authentication scheme for your application, you must always consider the usability of the implementation. If the user has to remember a 16-character password with special characters (and change it every three months) plus the name of his first pet (despite never having owned one), then input the one-time password generated by the token he left at home, and finally type arbitrary text and wiggle the mouse to establish a behavior pattern, we are sort of asking for trouble. A good multi-factor method can be even easier and more convenient than the first computer-based authentication method, the password.
Deploying MFA
Each of the factors has its own challenges. Memory-based systems are subject to our own shortcomings. The possession category includes, for example, tokens that are always in the wrong place or broken. Inherence can prove challenging, or even impossible, to change. These are not insurmountable challenges. However, you should always use authentication appropriate to the resource you’re trying to protect. Multi-factor authentication is not always needed; sometimes social identities can be enough to get the user through the first door. That’s why it’s advisable to deploy an identity provider that can take care of the different levels of authentication your applications require. An identity provider also enables your applications to use identities issued by third parties, including multi-factor methods, to authenticate and register users.
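As a sketch of what “appropriate authentication per resource” can look like in practice, here is a hypothetical policy table and step-up check in Python. The resource paths, level names, and rankings are invented for illustration and do not come from any specific identity provider.

```python
# Hypothetical policy table: which authentication level each resource requires.
RESOURCE_POLICY = {
    "/blog/comments": "social",       # a social identity is enough
    "/account/settings": "password",
    "/payments/transfer": "mfa",      # step up to multi-factor
}

LEVEL_RANK = {"social": 1, "password": 2, "mfa": 3}

def required_step_up(resource, current_level):
    """Return the level the user must step up to, or None if already sufficient."""
    needed = RESOURCE_POLICY.get(resource, "password")
    if LEVEL_RANK[current_level] >= LEVEL_RANK[needed]:
        return None
    return needed

print(required_step_up("/payments/transfer", "password"))  # 'mfa'  - ask for a second factor
print(required_step_up("/blog/comments", "social"))        # None   - social login is enough
```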
Other Factors to Consider
On top of the three factor categories introduced above, we can reduce risk even further by bringing in other attributes of the user. Location data (geolocation or IP address range) can be useful in determining the validity of a transaction (an authentication transaction, a payment transaction, etc.). If the user is trying to log in from Delhi when the last time he logged in was only two hours ago from New York, you might suspect foul play – or that he has access to Unidentified Flying Objects. Time can also be factored in: if the transactions typically happen between 9am and 8pm, an attempt at 4am could be suspect. However, these actions can be completely legitimate. The user might be using TOR or F-Secure Freedome to protect his privacy, or in the case of time inconsistencies he might simply be travelling and trying to log in from a different time zone. These and other signals can be used to evaluate the risk of fraudulent transactions. If the risk score rises, you can ask for stronger validation of the transaction, i.e. require multi-factor authentication instead of just a password.
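A toy version of such risk evaluation might look like the Python sketch below, which flags “impossible travel” between two logins and out-of-hours attempts, then asks for a second factor when the score crosses a threshold. The weights, thresholds, and coordinates are illustrative assumptions, not a recommendation.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def risk_score(prev_login, new_login, usual_hours=(9, 20)):
    """Toy risk score combining travel speed and unusual login hours."""
    hours = max((new_login["time"] - prev_login["time"]).total_seconds() / 3600, 0.1)
    km = distance_km(prev_login["lat"], prev_login["lon"], new_login["lat"], new_login["lon"])
    score = 0
    if km / hours > 900:                                   # faster than a commercial flight
        score += 50
    if not usual_hours[0] <= new_login["time"].hour < usual_hours[1]:
        score += 20
    return score

prev = {"time": datetime(2016, 5, 1, 14, 0), "lat": 40.71, "lon": -74.00}  # New York
new = {"time": datetime(2016, 5, 1, 16, 0), "lat": 28.61, "lon": 77.20}    # Delhi, two hours later
if risk_score(prev, new) >= 50:
    print("Step up: require a second factor before completing the transaction")
```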