Wait! I thought the Feds only cared about smartcards . . .
Monday, I posted about how Quest's tokens allow users to program their own seeds, which prompted questions (internally) of, "Why do you even care? I thought you focused on the Federal space, and the Feds only cared about CAC, or PIV cards?"
Well, yes. For the most part, the US Federal government does focus on CAC (Common Access Card - used by the DOD) and PIV (Personal Identity Verification - mostly used by the civilian agencies) cards. And you'll also hear about PIV-I (PIV Interoperable) being involved. In the case of CAC and PIV, a user has to file an application, and a Federal agency needs to sponsor the individual to obtain such a card. This usually includes a background check, or some sort of formal review, before the card is issued. PIV-I tried to lower this barrier by allowing non-Federal organizations (think government contractors, state governments, first responders, etc.) to issue interoperable smartcards that are trusted by Federal systems.
However, PIV-I has a lower "assurance level," and often involves the same (or similar) sort of background check, just by a different organization. Assurance Levels are set by NIST and are detailed in their Special Publication (SP) 800-63. You'll see there are four assurance levels, and PIV-I only gets to level 2 (really, it gets to level 3, but with a less stringent background check it can only be considered level 2.5 at best). CAC and PIV strive for level 4, or at least level 3.
Ok. So we've established that smartcards are the main vehicle for 2-factor authentication in the Federal government, but I still haven't explained why tokens (RSA and other ones) crop up. And this is because a token is also an acceptable form of 2-factor authentication (read SP 800-63, and you'll see them mentioned as 'one-time passwords'). The "identity proofing" is still required, but tokens are actually a lot more flexible, for several reasons.
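To make the 'one-time password' idea concrete, here's a minimal sketch of the open TOTP algorithm (RFC 6238, built on RFC 4226's HOTP), which is the kind of computation a time-based token performs each time step. This is a generic illustration only - it is not Quest's or RSA's actual implementation (RSA SecurID, for instance, uses its own algorithm):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the token and the server derive the code from a shared seed and the current time, the server can verify the code with no PKI involved - which is exactly why this form of 2FA shows up in the scenarios below.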
1. Tokens can be assigned to any user (part 1). Smartcards are usually assigned to individual people, while tokens can be shared. In fact, with our tokens, users can share a token but continue to use their own distinct credentials (username and password) with it. That means multiple admins can share a token to access a common system, and you can still determine which admin logged in to do something with the particular token. You keep two-factor authentication, but also gain a "check-in/check-out" system for the particular token, allowing more control over the physical token (perhaps locked away in a safe or vault).
1a. Tokens can be assigned to any user (part 2). This is really a corollary to 1: the token can be assigned to service or privileged accounts. Put the same sharing (check-in/check-out) model in place for the token, tie the password to a password vault product, and you have some pretty solid security around your privileged accounts such as oracle, root, Administrator, and other 'non-named' accounts.
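The check-in/check-out model from points 1 and 1a can be sketched in a few lines. All the names here (`SharedTokenRegistry`, `check_out`, and so on) are hypothetical, invented for illustration - they are not any product's actual API - but they show how a shared physical token plus per-user credentials still yields an attributable audit trail:

```python
import time

class SharedTokenRegistry:
    """Illustrative sketch: one shared physical token, many named admins."""

    def __init__(self):
        self._holder = {}   # token_id -> username currently holding the token
        self._log = []      # audit trail: (timestamp, token_id, username, action)

    def check_out(self, token_id, username):
        if self._holder.get(token_id):
            raise RuntimeError(f"{token_id} already checked out to {self._holder[token_id]}")
        self._holder[token_id] = username
        self._log.append((time.time(), token_id, username, "check_out"))

    def check_in(self, token_id, username):
        if self._holder.get(token_id) != username:
            raise RuntimeError("token not held by this user")
        del self._holder[token_id]
        self._log.append((time.time(), token_id, username, "check_in"))

    def authenticate(self, token_id, username, password_ok, otp_ok):
        # Two factors: something you know (the user's own password) and
        # something you have (the checked-out token). The check-out record
        # ties the shared token back to a named admin.
        return (self._holder.get(token_id) == username) and password_ok and otp_ok
```

The point of the sketch: even though the token itself is shared, the combination of the check-out log and each admin's distinct password keeps every login attributable to a named person.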
2. Tokens are independent of environment (part 1). With smartcards, you have to have a reader attached to the user's console. No reader, or a malfunctioning reader, makes authentication a bit more difficult (read: not possible). There are also situations where PKI simply isn't used. Older applications or platforms that do not make use of certificates cannot be changed quickly or easily. With a token, along with a username and password/PIN, you can still get 2FA, even where a reader isn't available or practical, or where the app expects only a username and password. There is still some amount of work to be done, but it's often easier than tying an app into an entire PKI infrastructure.
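For the "app expects only a username and password" case, one common integration pattern (used by many OTP deployments, and assumed here purely as an illustration) is to have the user type their static password with the OTP appended in the single password field, and have the back end split the two before checking each factor:

```python
def split_password_and_otp(combined, otp_digits=6):
    """Split a single 'password field' value into (static password, OTP).

    Assumes the convention 'password followed by a fixed-length numeric
    OTP', so a legacy app that only offers one password prompt can still
    carry both factors. Illustrative only - real products handle edge
    cases like passwords that end in digits.
    """
    if len(combined) <= otp_digits or not combined[-otp_digits:].isdigit():
        raise ValueError("expected password followed by a numeric OTP")
    return combined[:-otp_digits], combined[-otp_digits:]
```

The app's login form never changes; only the authentication back end needs to know the convention, which is why this kind of retrofit is so much cheaper than wiring the app into a PKI.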
2a. Tokens are independent of environment (part 2). There are some cases where the app or platform is capable of using PKI, but it sits in a location/area/network that simply cannot reach the PKI infrastructure to which the certificate on the smartcard is tied. Not every system is actually on the internet (unbelievable, I know), which means tokens can provide access without requiring a centrally managed CA (certificate authority) to be present.
3. Tokens can be assigned much more quickly and easily. And this is really the crux of why tokens come into play in the Federal space. Smartcards require a centrally issued certificate to be put onto the card. In some cases, there is no time for a request to make its way through the system to get a certificate, publish it to a CA (certificate authority) and the card, and get the card to the user. In other cases, the user simply will not get through a background check (or is unwilling to undergo one), but has to be given access to certain systems. Yes, there are times when the Federal government has to deal with "questionable people," but it might as well make sure it's the right person, so 2-factor is still needed.
3a. Tokens can be revoked much more quickly and easily. Because the token is usually just a bit of plastic, it's easier to revoke and know that it won't be used in other ways. Most smartcards take a while to issue because they are also printed as a security badge, meaning that even if the certificate on the card is revoked, the card may still be usable to get physical access to a building or location. However, it's unlikely that an agency will let you in because you have a black piece of plastic on your keychain.
So, for all those reasons, tokens are not going away. Smartcards will continue to dominate, but there will continue to be a need for 2-factor authentication (2FA) using one-time passwords (OTP) in the Federal space.
Technology on the battlefield and “The Golden Hour”
Government Computer News has just published an article on smartphones on the battlefield, and within the Defense sector specifically. That article can be read here: http://gcn.com/Articles/2011/05/03/Cyber-defense-handheld-encryption.aspx . It's a very good article, and one that highlights the challenges of using commercial products in a military or intelligence setting, where the stakes are much higher in some cases.
But it reminded me of a presentation I saw late last year about "The Golden Hour" from Air Commodore Tony Boyle (UK Defence Operations Board) and the use of technology to help get medical attention to wounded combatants much faster. The Golden Hour is a medical term originally coined by the military to describe the window of opportunity (usually 60 minutes) to save a life after severe trauma occurs. And while there is controversy about its validity (see the wikipedia article on this here), there is absolutely no doubt that getting a wounded soldier proper medical care faster raises the survival odds.
The overall presentation was titled "Future Networks - Enabling Defence Interoperability and Interconnectivity," and he also got into the military doing more with COTS (Commercial Off The Shelf) systems, as well as looking at private industry for 'best practices' and practical savings. What follows is my Quest colleague Ian Davidson's write-up of the presentation.
He gave some insights into how technology is actually used in theatre – some of which may seem obvious but nonetheless was very interesting.
One “vignette” was regarding an incident with a Viking patrol in Lashkar Gah:
When the IED went off, a 9-line text message was automatically sent to multiple places, with differing results:
- One text message went to a Hermes ISTAR asset, which was automatically deployed to the location of the explosion to monitor the area immediately
- Another text message went to 904 squadron to deploy a Harrier, which arrived in the area within 6 minutes.
- Another went to the Casualty Ops – the result of which was that 9 doctors/surgeons were lined up waiting when the casualties arrived – and had been automatically notified of blood type/full history/allergies, etc.
This combination of events led to the casualties being dealt with by medics well within “The Golden Hour” – the period of 60 minutes after severe trauma in which casualties are most likely to survive. In fact, they were on the operating tables within 15 minutes of the device going off.
The whole point of this story was to illustrate how the military on joint operations depend on sharing information collaboratively in order to ensure the success of any given military operation – no pun intended.
“No secrets” between different military “departments” and deployments means that soldiers survive based on a post-and-subscribe method.
Each operating division posts information that is likely to be identified as a serious threat or something requiring immediate attention, and others can subscribe to that information.
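The post-and-subscribe model described above is essentially publish/subscribe messaging, and can be sketched in a few lines. This is a toy illustration of the pattern, not anything resembling the military's actual systems; all names (`ThreatBoard`, the "ied" topic) are invented:

```python
from collections import defaultdict

class ThreatBoard:
    """Minimal publish/subscribe sketch: divisions post alerts on a topic;
    every party subscribed to that topic is notified immediately."""

    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def post(self, topic, message):
        # Fan the message out to every subscriber of this topic.
        for callback in self._subs[topic]:
            callback(message)
```

The design point matches the vignette: the poster doesn't need to know who is listening - the ISTAR tasking, the squadron, and Casualty Ops each subscribe independently, so one post fans out to all of them at once.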
He then went on to discuss the different business models that have come around over the last 10 years, and how increasingly open systems via the internet allow people to share information sensibly – using an analogy about Amazon re: systems training, i.e. “No one goes to school to learn how to order books from Amazon”.
He also discussed ideas around redefining what needs to be secret and what doesn’t – stating that (as an example) in the case of military personnel ordering clothing provisions, there is no real need for it to be “so” secure and locked down internally.
Why not use M&S for shirts, for example – the worst thing that can happen is that M&S gets hacked by another foreign country and he potentially gets the wrong shirt size delivered!
(For the non-UK based folks, M&S is Marks & Spencer, a UK department store similar to Nordstrom.)
Overall, it was a great presentation, making the case for using technology to its fullest extent, and making sure people are comfortable with whatever model you set up.