federalcto.com The Blog of a Federal CTO

30May/12

Your Directory Has Been Breached – landing page

This is a landing page for a project Bob Bobel and I have been working on recently. Today (May 30th, 2012), I am presenting the first draft of the slides, which will be posted shortly. A white paper is also in the works. All information will be posted on this page, so it will be revised in the coming weeks.

(2012-05-30 - 13:00 ET)
First up, here is Bob's initial post on the topic:
http://www.bobbobel.com/active-directory-was-compromised-now-what/

(2012-05-30 - 16:15 ET)
Here is the slide deck just presented at the Department of Energy NLIT Conference (National Laboratories IT Summit):
http://www.federalcto.com/quest/breached-directory.pptx

(2012-06-05 - 18:00 ET)
Here is the first (rough) draft of the document to correspond with the slides. Yes, whole sections are missing, and will be filled in during the coming weeks. Stay tuned.
http://www.federalcto.com/quest/breached-directory.docx

 

17Dec/11

Think there’s no cold war?

You think the cold war is over? You think things have settled? Full-on hacking and cyber warfare is going on as we calmly surf the web. Have a read of this Business Week article: China-Based Hacking of 760 Companies Shows Cyber Cold War

27Oct/11

US Government Smartcards; CAC, PIV and PIV-I

Recently, I had the pleasure of trying to get some government-certified smart cards for some of the technical people at Quest, and I can't believe how much of a headache and hassle it was. I actually don't work for Quest Software, but for a subsidiary (Quest Software Public Sector) which is focused on the public sector space. And while most Quest employees don't have a need for government-issued or government-approved smart cards, our company does. And while I knew that non-government employees can get a relatively new (a little over a year old) flavor of the PIV card called PIV-I, I was amazed at how difficult the process was to navigate. Thankfully, we have some pretty good and persistent purchasing folks, but the process is still pretty arduous for an organization that has been working with the government for years.

I won't get into all the details, but if you're interested, feel free to contact me offline.

In any case, I find myself constantly having to explain what the difference is between a smart card, a CAC, a PIV card, and now a PIV-I card. A smart card is pretty straightforward - it's a generic term, and all the other cards fall into this category. You can find a lot more details on them here at Wikipedia. The thing that makes a smart card a CAC (which means Common Access Card, so please don't say "CAC Card," as it is redundant) is that it is used by the US DoD (Department of Defense). If you want to do any work on government systems in the military, you will most likely need one of these.

Like their military counterparts, employees within civilian agencies also need smart cards. Of course, they opted for a similar, but not identical, standard. That standard is the PIV card (Personal Identity Verification). The cards are slightly different from CACs, and have varying information printed on them, depending on the issuing agency. Plus, they use a different set of CA (Certificate Authority) servers than the ones that CACs use, as the DoD has its own servers.

Finally, and this is the confusing thing, there are PIV-I cards. PIV-I stands for PIV-Interoperable. There are some great docs about PIV and PIV-I at www.idmanagment.gov, which is a site run by GSA, but I'll save you the trouble of wading through a lot of documentation. The PIV-I FAQ here states:

2.2    What is the difference between a PIV-I Card and a PIV Card?
The term “PIV Card” may only be used to describe an identity card that is fully conformant with Federal PIV standards (i.e., FIPS 201 and related documentation). Only a Federal entity is capable of fully meeting these standards and issuing a PIV Card. A PIV-I Card meets the PIV technical specifications of NIST SP 800-73 and is issued in a manner that may be trusted by Federal Government Relying Parties, but does not meet all of the requirements of FIPS 201.

What does this mean? A Federal entity (i.e., a Federal employee) uses a PIV card, while a trusted, non-government entity has to use a PIV-I card.
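
If you ever need to tell the two apart programmatically, the practical difference shows up in the certificate policies asserted on the card's authentication certificate. Here is a hedged Python sketch using the open-source "cryptography" package; the policy OID values are placeholders that you would swap for the PIV and PIV-I policy OIDs your agency actually trusts (the GSA site above documents the real ones).

```python
from cryptography import x509
from cryptography.x509 import ExtensionNotFound

# Placeholder policy OIDs: swap in the PIV and PIV-I certificate policy OIDs
# your agency actually trusts (hypothetical values shown for illustration).
PIV_POLICY_OIDS = {"2.16.840.1.101.3.2.1.3.13"}
PIV_I_POLICY_OIDS = {"2.16.840.1.113839.0.100.18.0"}

def classify_card_cert(pem_bytes: bytes) -> str:
    """Rough guess at whether an authentication cert came from a PIV or PIV-I card."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        policies = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
    except ExtensionNotFound:
        return "unknown"
    asserted = {p.policy_identifier.dotted_string for p in policies.value}
    if asserted & PIV_POLICY_OIDS:
        return "PIV"
    if asserted & PIV_I_POLICY_OIDS:
        return "PIV-I"
    return "unknown"
```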

So there you go. In summary:

  • CAC is for Department of Defense users
  • PIV is for civilian users working for the Federal government
  • PIV-I is for non-Federal entities that need to access government systems
29Jun/11

Wait! I thought the Feds only cared about smartcards . . .

Monday, I posted about how Quest's tokens allow users to program their own seeds, which prompted questions internally of, "Why do you even care? I thought you focused on the Federal space, and the Feds only cared about PAP, or CIV cards?"

Well, yes. For the most part, the US Federal government does focus on CAC (Common Access Card - used by the DoD) and PIV (Personal Identity Verification - mostly used by the civilian agencies) cards. And you'll also hear about PIV-I (PIV-Interoperable) being involved. In the case of CAC and PIV, a user has to file an application, and some Federal agency has to sponsor the individual to obtain such a card. This usually includes a background check, or some sort of formal review, before the card is issued. PIV-I tries to lower this barrier by allowing non-Federal organizations (think government contractors, state governments, first responders, etc.) to issue interoperable smartcards that are trusted by Federal systems.

However, PIV-I has a lower "assurance level," and often involves the same (or similar) sort of background check, just by a different organization. Assurance levels are set by NIST and are detailed in their Special Publication (SP) 800-63. You'll see there are 4 assurance levels, and PIV-I only gets to level 2 (really, it gets to level 3, but with a less stringent background check, it can only be considered level 2.5 at best). CAC and PIV strive for level 4, or at least level 3.

OK. So we've established that smartcards are the main vehicle for 2-factor authentication in the Federal government, but I still haven't explained why tokens (RSA and other ones) crop up. And this is because a token is also an acceptable form of 2-factor authentication (read SP 800-63 and you'll see them mentioned as "one-time passwords"). The "identity proofing" is still required, but tokens are actually a lot more flexible for several reasons.
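
Before getting to the reasons, a quick aside on what a "one-time password" actually is under the hood: most OTP tokens boil down to a shared seed plus a counter or clock, run through an HMAC. Here is a minimal sketch of the counter-based HOTP algorithm from RFC 4226 (time-based tokens swap the counter for the current time per RFC 6238); this is a generic illustration, not Quest Defender's or RSA's specific implementation.

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Compute a counter-based one-time password per RFC 4226."""
    # HMAC-SHA1 over the 8-byte big-endian counter, keyed with the shared seed.
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example: a throwaway seed and a moving counter; both sides must stay in sync.
demo_seed = b"0123456789abcdefghij"  # placeholder 20-byte seed, not a real record
print(hotp(demo_seed, counter=1))
```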

1. Tokens can be assigned to any user (part 1). In the case of smartcards, they are usually assigned to people, while tokens can be shared. In fact, with our tokens, users can share a token but continue to use their own distinct credentials (username and password) with it, which means that multiple admins can share a token to access a common system and you can still determine which admin logged in with that particular token. This lets you keep 2-factor authentication, but also gives you a "check-in/check-out" system for the particular token, allowing more control over the physical token (perhaps locked away in a safe or vault). A rough sketch of such a check-in/check-out ledger appears after point 3a below.

1a. Tokens can be assigned to any user (part 2). This is really a corollary to 1, which is that the token can be assigned to service or privileged accounts. Put the same sharing (check-in/check-out) model in place for the token, tie the password to a password vault product, and you have some pretty solid security around your privileged accounts such as oracle, root, Administrator, and other "non-named" accounts.

2. Tokens are independent of environment. With smartcards, you have to have a reader attached to the user's console. No reader, or a malfunctioning reader, makes authentication a bit more difficult (read: not possible). There are also situations where PKI simply isn't used. Older applications or platforms that do not make use of certificates cannot be changed quickly or easily. With a token, along with a username and password/PIN, you can continue to get 2FA even in a scenario where a reader isn't available or practical, or where the app expects only a username and password. There is still some amount of work to be done, but it's often easier than tying an app into an entire PKI infrastructure.

2a. Tokens are independent of environment. There are some cases where the app or platform is capable of using PKI, but it is sitting in a location/area/network that simply cannot reach the PKI infrastructure to which the certificate on the smartcard is tied. Not every system is actually on the internet (unbelievable, I know), which means tokens can provide access without requiring a centrally managed CA (certificate authority) to be present.

3. Tokens can be assigned much more quickly and easily. And this is really the crux of why tokens come into play in the Federal space. Smartcards require a centrally issued certificate to be put onto the card. In some cases, there is no time for the requests to make their way through the system to get a certificate, publish it to a CA (certificate authority) and the card, and get the card to the user. In other cases, the user simply will not get through a background check (or is unwilling to undergo one), but has to be given access to certain systems. Yes, there are times when the Federal government has to deal with "questionable people," but they might as well make sure it's the right person, so 2-factor is still needed.

3a. Tokens can be revoked much more quickly and easily. Because the token is usually some bit of plastic, it's easier to revoke and know that it won't be used in other ways. Most smartcards take a while to issue because they are also printed as a security badge, meaning that even if the certificate on the card is revoked, the card may still be usable to get physical access to a building or location. However, it's unlikely that an agency will let you in because you have a black piece of plastic on your keychain.
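
In point 1 above I promised a rough sketch of the check-in/check-out idea. Here is a hypothetical ledger in Python: each admin still authenticates with their own username and password plus the token's OTP, while the ledger records who physically held the shared token and when. This is purely illustrative, not a Quest Defender feature.

```python
from datetime import datetime, timezone

class TokenLedger:
    """Hypothetical check-in/check-out ledger for shared hardware tokens."""

    def __init__(self):
        self.holder = {}   # token serial -> admin who currently has it
        self.events = []   # audit trail of (timestamp, action, serial, admin)

    def check_out(self, serial: str, admin: str) -> None:
        if serial in self.holder:
            raise RuntimeError(f"{serial} already checked out to {self.holder[serial]}")
        self.holder[serial] = admin
        self.events.append((datetime.now(timezone.utc), "check-out", serial, admin))

    def check_in(self, serial: str) -> None:
        admin = self.holder.pop(serial)
        self.events.append((datetime.now(timezone.utc), "check-in", serial, admin))

# Usage: the ledger ties the shared token back to a named admin for each use.
ledger = TokenLedger()
ledger.check_out("DEF-0001", "admin.jones")
ledger.check_in("DEF-0001")
```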

So, for all those reasons, tokens are not going away. Smartcards will continue to dominate, but there will continue to be a need for 2-factor authentication (2FA) using one-time passwords (OTP) in the Federal space.

27Jun/11

The RSA Breach saga continues . . .

I've discussed the RSA breach before, but I had a very interesting conversation last week with a customer that was in a dilemma as to what to do. RSA has said it would replace some of the tokens, depending on the situation (protecting intellectual property, consumer-oriented products, etc.), and this customer was pretty certain they would be getting new tokens from RSA, but didn't know if they wanted them. Nor did they want any other "standard" token that another vendor could provide, because that new vendor could be breached as well.

What this client really wanted was a token where he could program his own seed into it. I mentioned that our software tokens actually allow for this out of the box, and when you "program" a new soft token, the seed is automatically generated, which means that Quest never knows the seed record for any software token that we provide. For an example of this, you can actually see a recording of it here where the seed is generated, and then here where the seed is put into our Desktop token (there is no audio on either recording).
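
To make that concrete, generating a seed locally so that no vendor ever sees it is conceptually very simple. Here is a minimal sketch in Python using only the standard library; the 20-byte seed length and the Base32 transport encoding are assumptions for illustration, not the exact format our soft tokens use.

```python
import base64
import secrets

# Generate a 160-bit seed locally; it never leaves the organization,
# so no vendor ever holds a copy of the seed record.
seed = secrets.token_bytes(20)

# Base32 is a common transport encoding for provisioning OTP seeds into a
# desktop or hardware token (assumed here purely for illustration).
print(base64.b32encode(seed).decode())
```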

However, this wasn't as interesting to him as programmable hardware tokens. He had concerns about keys getting compromised while being transferred to the end user, and wanted to send a pre-programmed hardware device. At first, I didn't think we had anything like that in our arsenal; however, it turns out that one of our latest token models (the Yubikey token), as well as some of the other ones, already allows for user programmability! In fact, if you look at the link for the Yubikey token, and then scroll down, you'll see that some are programmable. It took our Defender Product Manager (Stu Harrison, who blogs on and off here) to point this out to me, even though I should have remembered this from earlier on.

Obviously, each token does come with a default seed that Quest will know about, but if there is concern about a vendor having the "keys to the kingdom," a programmable token puts the onus back on the organization and keeps Quest out of the limelight. I don't want to discount the fact that this will take more manpower, but if you don't want your vendors to have your seed records, reprogramming the tokens is the only secure way of doing it. It only makes sense: as a security professional, you should never rely on RSA, Quest or any other organization when you can minimize the number of people that have access to the token seed records.

4Apr/11

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part III

Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago. That still amazes me.

Note 2: this is a very rough, stream of consciousness blog entry. Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Preparation and migration
--
Preparing for the move, beyond the simple assessment, seems a no-brainer. But a migration? Really? Right before a move? And I say "yes." First, let's be clear and qualify that "migration" (to me) means a cutover to another platform. This could be email, user directory, database, etc., and it may be to a new version of some current software, but it's definitely getting off the current version. The idea is to make the software, and the version of that software, as current as possible before you actually move to a new location/environment. The only thing this excludes is the move from physical to virtual. That comes later, and I'll explain why then.

There are several reasons for you to consider doing a migration before consolidating environments. First, and foremost, the migration is going to happen sooner or later, and aren't you better off doing the migration in a comfortable and stable environment instead of your new one? Plus, the migration may actually shake some things out and make the consolidation easier. For example, if you use QMM to migrate mailboxes from an old version of Exchange to 2010, or use QMM for AD to restructure your Active Directory environments, you may actually find that there are many users, mailboxes, groups and other objects that could be deleted/abandoned.

The same goes for databases. If you're running an old version of Oracle for your databases, it's time to cut over and see what features and benefits you get. And we even have a tool that lets you replicate data between mixed database versions while you make this sort of move, so it's not a jarring, "all at once" process. That tool, BTW, is Shareplex, and the mixed-version replication is pretty cool.

But why else should you do the migration before starting the actual move? Well, frankly, because support is easier. Sure, you can migrate a Windows 2003 server, or an Oracle 9i database into your new environment, but if there's a problem, what will the vendor tell your team? Most likely, they'll tell you to upgrade to the latest version.

It's not widely discussed, but the reality is that most software companies want you on the latest and greatest version of their software when you call support. It's usually the version their developers are currently using as the basis for the next release. It's the version that support is working with most, and one that they have set up locally to recreate your problems. One or two versions back is often OK, but if you're running something more than 2-3 years old, I think you're asking for trouble. Get to the latest versions however you can, because you don't want to consolidate and move old software around.

Another reason is your staff's personal development. I've run IT groups in the past, and the most common question was always, "when are we going to get to the latest version of X?" where X was some database, operating system or programming language. If you are the CIO at a federal agency, your staff knows that data center consolidation is coming, in some form or fashion. You want them ready and energized for the task. Letting them get to the latest version of whatever software they work with will excite them, and you'll have a happy crew moving into the consolidation.

Now, I did mention that cutting over to a virtual environment should be last. The reason is that this is really the same as a hardware move. No matter what hardware your environment is using, something is sure to change. And your software may react adversely. Plus, if you couple that with a version upgrade, you are changing a lot out from under your user base as well as IT staff. The idea is to minimize risk, and that just doesn't cut it.  So do the "migration" first, get it settled, and then cut over hardware (which can be P2P or P2V).

And if you're contemplating a P2V move, you should definitely check out vConverter. Not only does it work for P2V, but also V2V (you may want to switch hypervisors, or try out multiple hypervisors with the same workload), and even V2P in case you absolutely have to back out, or even want to switch hardware, using the move to virtual as a stepping stone.

Finally, if you upgrade, migrate and virtualize in a single move, how do you know where you got performance gains or losses? If you read my last post on this topic, you'll know I propose baselining before starting.  The only way to do that is to start with a known point, but then make incremental moves so you can collect more information on what impact each part of the upgrade, migration and consolidation has on your environment.

28Mar/11

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part II

Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago.  That still amazes me.

Note 2: this is a very rough, stream of consciousness blog entry.  Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Initial assessment and baselining
--
Let's dive right in and get started.

To begin, an assessment needs to be performed to determine all the things that will be part of the consolidation. Presumably, this has already started, as an initial plan is due to be submitted to the OMB for review and budgeting. However, everyone knows adjustments can and will be made. So I'd suggest you do the assessment assuming every item will be questioned.

There are lots of ways to survey what you have, but looking to Quest to help with that may not be something you thought to do. Well, you should. The reason is that we have a lot of tools to help with your entire environment, from the operating system to the databases and file servers, all the way to app and web servers as well as desktops. And while we're not in the inventory management business, we can certainly hold our own if you need a list.

"What kind of lists can you provide," you ask? For starters, we can get you users and desktops. Nowadays, most users are in Active Directory. And most of their desktops and laptops are joined to AD. So you could use something as simple as Quest Reporter to pull a list of those objects out of AD.  Following the 80/20 rule, that should give you a good ballpark of your end-user environment. Need something s little more accurate? Then you'll need to do more work but get more results. You can either go with something like Desktop Authority to get you a good idea of what is actually out at the desktop level.  Or, you can fall back to your AD event logs, and monitor login events over some time period with Quest Change Auditor for AD. In both cases, the products are sure to give you a lot more benefits beyond the initial assessment. And both Change Auditor and Reporter give you a good feel for your Windows server environment as well.

But the assessment is more than just a 'survey.' You cannot just make a nice clean inventory of everything you are responsible for, and leave it at that. It is critical to know -how- those systems are performing. In other words, you need to set a baseline, and you probably need to do it in 2 ways. The first way is through some measurements and metrics. Quest's Foglight platform is fantastic for end to end monitoring, and it can serve double duty by providing those initial statistics up and down your entire app stack.

Foglight can also provide those initial lists I mentioned above. You need RAM, CPU and disk numbers off your servers? We can get those to you, and help with some capacity planning as well. And if you run Foglight long enough, you'll have some very good trending and usage data to use beyond the consolidation effort.
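
As a trivial illustration of the raw ingredients of such a baseline, here is a sketch that grabs point-in-time CPU, memory, and disk figures with the open-source psutil library for Python; Foglight collects, correlates, and trends far more than this, but these are the kinds of numbers it starts from.

```python
import platform

import psutil

# One point-in-time sample of the basic capacity numbers you'd want in a
# pre-consolidation baseline (run repeatedly to build a trend).
sample = {
    "host": platform.node(),
    "cpu_percent": psutil.cpu_percent(interval=1),
    "memory_percent": psutil.virtual_memory().percent,
    "disk_percent": psutil.disk_usage("/").percent,
}

print(sample)
```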

The second baseline to check is subjective: it's the users' perception of the current systems. This wouldn't involve any Quest product; it's simply a matter of putting together a quick, 5-minute survey of what the users think of the apps they use. There are many free and paid sites out there that can run such a survey for you, but I'd really encourage you to get this initial feedback. And if it starts to look grim, and you're surprised by the results, check out Quest End User Monitor to walk through the apps and see what the users are complaining about.

That's really it on the baseline side. We can help with that initial assessment as well as provide initial metrics for how your environment is functioning. Can we provide complete coverage of your environment? Probably not, but the tools you'd use from us would continue to provide value beyond the assessment rather than being a throwaway once the assessment is complete. And wouldn't it be nice to be in a new environment but with a familiar toolset? I think your IT staff would say, "yes."

26Jan/11

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part I

[Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago.  That still amazes me.]

[Note 2: this is a very rough, stream of consciousness blog entry.  Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.]

The current administration has developed the Federal Data Center Consolidation Initiative (FDCCI), with every agency and department falling under scrutiny. It is mandated by the administration as a way to cut costs as well as secure the environment. This document does not seek to go into detail about the FDCCI, but to outline how Quest Software can help every agency and department with the overall initiative.

The focus of the initiative is on physically collapsing all data centers down to a much smaller number. Of course, this is not just an exercise in putting all the servers into a single room. This is an opportunity to both consolidate and update the environment, as well as potentially modernize key systems and platforms. And this is where Quest can help.

At a high level, the entire consolidation will involve the following steps:

  1. An assessment of the current environment, determining the disposition of every item to be included or excluded from the consolidation
  2. As part of the assessment, it is good to determine and establish baselines for services
  3. Prep work to get platforms or systems ready to be moved to their new environment - this includes any procurement and training that needs to occur
  4. The actual movement itself, which may be as simple as putting the same server into a new location or as complicated as migrating to an entirely new system on new hardware, operating system, platform, etc
  5. Tuning and optimization of the systems to their new environment
  6. Post-mortem review and on-going monitoring and maintenance

Depending on the agency or department, the age and condition of the systems, the number of users and administrators involved and the number of data centers, the time for each step will vary.

In all of this, personnel will also be affected. Not only may IT staff have a new location to go to, but they will also need new skills and tools. And with this, it would be good practice to audit and adjust access controls to make sure additional rights are not unduly granted.

Over the next few weeks, as time permits, I will be exploring the different areas of this sort of project, and tying it back to solutions that Quest Software provides. The hope is to give a good, solid overview of the help available from Quest in making this move.  Many of the topics covered will also apply to other situations, and later on, I'll make the case that many of our solutions are "dual purpose" meaning they may be used by one set of users for a particular task, but that a different set of users may also benefit from the same tools in a different capacity.

First off, it would be helpful to review what Quest's goals are in providing the software that they write. At a high level, Quest is a systems management company. We do not have many of the platforms that people traditionally think of when they think of an enterprise software company: databases, web servers, operating systems, etc. For example, no one needs another database platform, at least not from us. But organizations do need to get a better handle on their existing databases, database servers, and the systems that use them. And that's where we come in.

We focus on making your existing infrastructure easier to use. This is not to say all our tools are simple. Yes, we have "Simplicity At Work" as our tagline, but we do some pretty complicated things. And while the overall message does make sense to someone familiar with the problems we solve, these are not "Fisher Price" tools. The common misconception is that because some of our products have the word "wizard" in their names, they must be easy. No; what it means is that we make whatever the wizard works against easier to use.

Lastly, it would probably be good to give you an idea of who I am and what I do. I am currently the CTO for Quest's Federal Public Sector group, which means I keep an eye on all things Federal, working with clients and partners to help them get the most out of Quest while also working internally to make sure our solutions align with my customers' needs. We have many other people with similar roles; however, I don't have a single area of focus. We have over 150 products, and anywhere from 3-8 different focus areas (depending on who you talk to), so I try to stay current with everything we have to offer.

And though I've worked with Public Sector clients since I arrived at Quest over 5 years ago, I've also worked for Quest in the UK as well as on the commercial side in North America. In those instances, my remit was Identity and Access Management as well as our overall Windows Management solutions. Before all that, and joining Quest, I was a developer, DBA, Director of IT, University instructor and a whole host of other things.

That's enough for now, and should give you a good enough idea of where this blog is headed over the next few weeks. If you have any questions or comments, don't hesitate to write me (dimikagi -at- federalcto.com) or post them up below.

   
Copyright (C) 2010-2011 Dmitry Kagansky – All opinions expressed are those of the respective author and do not reflect the views of any affiliate, partner, employer or associate.