Wait! I thought the Feds only cared about smartcards . . .
Monday, I posted about how Quest's tokens allow users to program their own seeds. Which then prompted questions (internally) of, "Why do you even care? I thought you focused on the Federal space, and the Feds only cared about CAC, or PIV cards?"
Well, yes. For the most part, the US Federal government does focus on CAC (Common Access Card - used by the DOD) and PIV (Personal Identity Verification - mostly used by the civilian agencies) cards. And you'll also hear about PIV-I (PIV Interoperable) being involved. In the case of CAC and PIV, a user has to file an application, and some Federal agency needs to sponsor the individual to obtain such a card. This usually includes a background check, or some sort of formal review, before the card is issued. PIV-I tried to lower this barrier by allowing non-Federal organizations (think government contractors, state governments, first responders, etc.) to issue interoperable smartcards that are trusted by Federal systems.
However, PIV-I has a lower "assurance level," and often involves the same (or similar) sort of background check, just by a different organization. Assurance Levels are set by NIST and are detailed in their Special Publication (SP) 800-63. You'll see there are 4 assurance levels, and PIV-I only gets to level 2 (really, it gets to level 3, but with a less stringent background check, it can only be considered level 2.5 at best). CAC and PIV strive for level 3, if not level 4.
Ok. So we've established that smartcards are the main vehicle for 2-factor authentication in the Federal government, but I still haven't explained why tokens (RSA and others) crop up. And this is because a token is also an acceptable form of 2-factor authentication (read SP 800-63, and you'll see them mentioned as 'one-time passwords'). The "identity proofing" is still required, but tokens are actually a lot more flexible for several reasons.
1. Tokens can be assigned to any user (part 1). In the case of smartcards, they are usually assigned to people, while tokens can be shared. In fact, with our tokens, users can share tokens, but continue to use their distinct credentials (username and password) with the token. Which means that multiple admins can share a token to access a common system, but you can determine which admin logged in to do something with the particular token. This lets you keep 2-factor authentication, but also adds a "check-in/check-out" system for the particular token, allowing more control over the physical token (perhaps locked away in a safe or vault).
1a. Tokens can be assigned to any user (part 2). This is really a corollary to 1, which is that the token can be assigned to service or privileged accounts. Apply the same sharing (check-in/check-out) model to the token, tie the password to a password vault product, and you have some pretty solid security around your privileged accounts such as oracle, root, Administrator, and other 'non-named' accounts.
2. Tokens are independent of environment. With smartcards, you have to have a reader attached to the user's console. No reader, or a malfunctioning reader, makes authentication a bit more difficult (read: not possible). There are also situations where PKI simply isn't used. Older applications or platforms that do not make use of certificates cannot be changed quickly or easily. With a token, along with a username and password/PIN, you can continue to get 2FA, even in a scenario where a reader isn't available or practical, or the app is expecting only a username and password. There is still some amount of work to be done, but it's often easier than tying an app into an entire PKI infrastructure.
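One common way to retrofit OTP-based 2FA onto an app that only accepts a username and password is to have the user append the OTP to their password, and have the authentication layer split the two before checking each factor. Here is a minimal sketch of that idea (a hypothetical illustration, not a description of any specific product's behavior):

```python
def split_passcode(passcode: str, otp_digits: int = 6):
    """Split a combined 'password + appended OTP' string into its two factors.

    Assumes (hypothetically) that the last `otp_digits` characters are the
    one-time password, which is the usual convention for appended OTPs.
    """
    if len(passcode) <= otp_digits:
        raise ValueError("passcode too short to contain an OTP")
    # Everything before the final digits is the static password
    return passcode[:-otp_digits], passcode[-otp_digits:]

# e.g. split_passcode("hunter2123456") -> ("hunter2", "123456")
```

The app itself still sees a single "password" field, which is why this trick works with legacy systems that were never built for 2FA.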
2a. Tokens are independent of environment. There are some cases where the app or platform is capable of using PKI, but it is sitting in a location/area/network that simply cannot reach the PKI infrastructure to which the certificate on the smartcard is tied. Not every system is actually on the internet (unbelievable, I know), which means tokens can provide access without requiring a centrally managed CA (certificate authority) to be present.
3. Tokens can be assigned much more quickly and easily. And this is really the crux of why tokens come into play in the Federal space. Smartcards require a centrally issued certificate to be put onto the card. In some cases, there is no time for the requests to make their way through the system to get a certificate, publish it to a CA (certificate authority) and card, and get the card to the user. In other cases, the user simply will not get through a background check (or is unwilling to get one), but has to be given access to certain systems. Yes, there are times when the Federal government has to deal with "questionable people," but they might as well make sure it's the right person, so 2-factor is still needed.
3a. Tokens can be revoked much quicker and easier. Because the token is usually some bit of plastic, it's easier to revoke and know that it won't be used in other ways. Most smartcards take a while to issue because they are also printed as a security badge. Meaning that even if the certificate on the card is revoked, the card may still be usable to get physical access to a building or location. However, it's unlikely that an agency will let you in because you have a black piece of plastic on your keychain.
So, with all those reasons, tokens are not going away. Smartcards will continue to dominate, but there will continue to be a need for 2-factor authentication (2FA) using one-time passwords (OTP) in the Federal space.
The RSA Breach saga continues . . .
I've discussed the RSA breach before, but had a very interesting conversation last week with a customer that was in a dilemma as to what to do. RSA have said they would replace some of the tokens, depending on the situation (protecting Intellectual Property, consumer-oriented products, etc.), and this customer was pretty certain they would be getting new tokens from RSA, but didn't know if they wanted them. Nor did they want any other "standard" token that other vendors could provide, because that new vendor could be breached as well.
What this client really wanted was a token where he could program his own seed into it. I mentioned that our software tokens actually allow for this out of the box, and when you "program" a new soft token, the seed is automatically generated. Which means that Quest never knows what the seed record is for any software token that we provide. For an example of this, you can actually see a recording of it here where the seed is generated, and then here where the seed is put into our Desktop token (there is no audio on either recording).
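To illustrate why vendor-blind seed generation matters, here is a rough sketch of how a software token can create its own seed locally on the user's machine (a hypothetical example, not Quest's actual implementation): since the seed never leaves the device at creation time, the vendor has nothing on file to leak in a breach.

```python
import base64
import secrets

def new_token_seed(nbytes: int = 20) -> str:
    """Generate a random OTP seed locally on the user's device.

    Because the seed is created client-side, no vendor ever holds a copy,
    so a vendor breach cannot expose it.
    """
    seed = secrets.token_bytes(nbytes)  # 160 bits from the OS's CSPRNG
    # Base32 is the encoding most OTP tools expect for seed import
    return base64.b32encode(seed).decode("ascii")
```

The server-side enrollment step would then receive this seed directly from the user's device, keeping the token vendor out of the loop entirely.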
However, this wasn't as interesting to him as programmable hardware tokens. He had concerns about keys getting compromised while being transferred to the end user, and wanted to send a pre-programmed hardware device. At first, I didn't think we had anything like that in our arsenal; however, it turns out that one of our latest token models (the Yubikey token), as well as some of the other ones, already allows for user programmability! In fact, if you look at the link for the Yubikey token, and then scroll down, you'll see that some are programmable. It took our Defender Product Manager (Stu Harrison, who blogs on and off here) to point this out to me, even though I should have remembered this from earlier on.
Obviously, each token does come with a default seed that Quest will know about, but if there was concern about a vendor having the "keys to the kingdom," a programmable token puts the onus back on the organization, and keeps Quest out of the limelight. I don't want to discount the fact that this will take more manpower, but if you don't want your vendors to have your seed records, reprogramming the tokens is the only secure way of doing it. It only makes sense: as a security professional, you should never rely on RSA, Quest, or any other organization when you can minimize the number of people that have access to the token seed records.
This is why we have standards
Blah, blah, blah
RSA breached
Blah, blah, blah
The last week or so has been really interesting. Yes, one of the hot topics is the RSA breach. I'm not going to link to it - if you haven't heard by now, I'll wait until you google it, and then come back.
For me, the interesting part was that it was an APT (Advanced Persistent Threat). It's a hot topic in the Fed space, but most on the commercial side have never heard of it (want to know more about APT? Check out this 30-minute session I did a few weeks ago). However, our internal discussions started going down the "are we vulnerable to this same attack with our Defender product?" path. I replied with some glib comment saying, "No. Duh!" and went on. Not really, and it was much more pleasant, but it was definitely short on details as to why. But the conversations continued. Now, for someone on the outside, I probably would have given a more detailed answer; however, I thought it was obvious that being standards-based, as we are, we avoid this sort of attack simply by following the rules instead of creating our own.
Well, last night, the questions continued, so I put together much of what is below in an email to explain. The gist of this is that Defender supports OATH, which means that we can use any OATH-compliant token to provide 2-factor authentication, and the algorithm used to calculate the next "random" value is a standard. By using a standard, a breach of the algorithm simply doesn't matter. Why? Because the code is already public. There's nothing to hide, so there's nothing to steal. And what that means is that if there's a problem, it is with the token seed records.
For those that don't know, one-time password tokens all operate on a pre-defined seed. That seed is used by the device to calculate the next password. And when I say device, I include software tokens, and any other "thing" that provides a one-time password. RSA does it, we do it, everyone does it. That seed is unique to the device, and kept secret by the owner of the token. Here's where things become interesting.
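To make the seed-to-password relationship concrete, here is a minimal sketch of the OATH HOTP algorithm from RFC 4226, the published standard that event-based OATH tokens follow (the seed and expected value below are the RFC's own test vector, not anything product-specific):

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Compute an OATH HOTP one-time password per RFC 4226."""
    # HMAC-SHA1 over the 8-byte big-endian event counter
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Note that everything here is public except the seed: an attacker who steals the algorithm gains nothing, which is exactly the point of using a standard.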
If you steal the seed records, you only have one-third of the puzzle in getting to someone's login. Not only do you need the seed (to calculate the next, unique password), but you also need to know the account it's tied to, as well as the 'something you know' part of 2-factor authentication, which is usually a PIN or a predefined password. In the case of Defender, most customers use the AD password, which is usually better than a 4-digit PIN, but that's irrelevant for this conversation. However, knowing someone has your seed record, as a user, ought to make you nervous. If one of the two factors you rely on is compromised, then it's no longer 2-factor, now is it?
The next bit is a direct copy/paste from the email I wrote. Vasco is one of our token suppliers, and they make the Go tokens we most often use for demos. So they're the vendor most people inside of Quest are familiar with, which is why I used them in the example below.
--
To add to this, let's say Vasco has a breach, and the attackers get both Vasco's source code and all their token seed records, including those of our clients. And the breach is so bad, the attackers get our purchase orders (from Vasco) and maybe even see which tokens may have been drop shipped to our clients on our behalf. That's pretty bad.
But none of that means you need to rip out and replace Defender. Our client can simply say, "We no longer trust Vasco tokens, and will switch to [insert name of OATH token vendor]." The Defender server stays intact; nothing needs to be changed, upgraded or patched. The customer simply gets a new set of tokens, loads them up, and gives the tokens to their users for self-service registration.
In the case of RSA, theirs is a tightly coupled system, where tokens, algorithms and authentication servers are all tied together. If you don't trust RSA, and feel they've been compromised, then you need to throw out the whole lot. In our case, Defender simply cannot be compromised in that way, as we don't do anything in secret or have proprietary algorithms. OATH is published. RADIUS is published. If you trust those standards, then Defender is fine. The same goes for AD - we store our data there, but it's a "pseudo-standard": it's ubiquitous, and there are no deep, dark secrets about it.
This is why I keep saying that our approach is more sound, and ultimately, more trustworthy. Not because our code is better (it may be, but RSA have some clever folks, too), but because our entire approach/methodology does more to protect the client from a breach such as this.
There's still a cost, and you have to throw out the tokens, but you don't need to learn a new toolset and plan a migration from one auth system to another. Especially in a stressful time, when your CISO is breathing down your neck, asking how much of a threat this is.
--
That was really it. I don't have much more to add at this point. Other than maybe that we have a really cool, new token from Yubico called a Yubikey. You should ask for a demo, but it's something you have to see in person. USB-based, but works on Windows, Macs, Unix, etc. Very clever, that token.
Et tu, Brute?
It’s evident throughout history – inside jobs. Aside from nuclear war and weapons of mass destruction, cyber attacks pose the single greatest threat to US security – and they are growing more and more difficult to prevent. One clear indicator of the threat is the sheer volume of breaches: cyber attacks on federal computer systems have increased more than 250% over the last two years, according to the Department of Homeland Security. Federal computing resources are under constant threat – not only from the outside, but also from trusted partners and internal users. Many of these threats are inadvertent, with careless but innocent employees unwittingly introducing harmful viruses to your agency, allowing sensitive data to be leaked, or setting the table for attacks by more malicious parties. But whether or not there’s malice, the damage from a breach can be great.
Join me for a discussion on Monday, March 14 @ 1:30 pm ET on ways to protect your environment from the inside threat. We’ll talk about how you can not only improve your security posture, but also meet regulatory and statutory guidelines during audits and reviews. Plus, you’ll also learn about forensics and tools you’ll need when a breach does occur to minimize the losses and downtime.
You can register here. I’m looking forward to a hearty discussion.