federalcto.com The Blog of a Federal CTO

30Mar/11

This is why we have standards


The last week or so has been really interesting.  Yes, one of the hot topics is the RSA breach.  I'm not going to link to it - if you haven't heard by now, I'll wait while you google it, and then you can come back.

For me, the interesting part was that it was an APT (Advanced Persistent Threat).  It's a hot topic in the Fed space, but most on the commercial side have never heard of it (want to know more about APT?  Check out this 30-minute session I did a few weeks ago).  However, our internal discussions started going down the "are we vulnerable to this same attack with our Defender product?" path.  I replied with some glib comment like, "No.  Duh!" and moved on.  Not really - it was much more pleasant than that - but it was definitely short on details as to why.  And the conversations continued.  For someone on the outside, I probably would have given a more detailed answer; however, I thought it was obvious that being standards-based, as we are, we avoid this sort of attack simply by following the rules instead of creating our own.

Well, last night, the questions continued, so I put together much of what is below in an email to explain.  The gist of it is that Defender supports OATH, which means we can use any OATH-compliant token to provide 2-factor authentication, and the algorithm used to calculate the next "random" value is a standard.  By using a standard, a breach of the algorithm simply isn't possible.  Why?  Because the algorithm is already public.  There's nothing to hide, so there's nothing to steal.  And that means if there's a problem, it is with the token seed records.

For those that don't know, one-time password tokens all operate on a pre-defined seed.  That seed is used by the device to calculate the next password.  And when I say device, I include software tokens and any other "thing" that provides a one-time password.  RSA does it, we do it, everyone does it.  That seed is unique to the device, and kept secret by the owner of the token.  Here's where things become interesting.
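Because the OATH event-based algorithm (HOTP, RFC 4226) is public, anyone can verify or re-implement it.  Here's a minimal sketch in Python using only the standard library - the seed below is the published RFC 4226 test vector, not any real token's secret:

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226: HMAC-SHA1 over the counter,
    then dynamic truncation down to a short decimal code."""
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Seed from RFC 4226, Appendix D ("12345678901234567890")
seed = b"12345678901234567890"
print(hotp(seed, 0))  # 755224
print(hotp(seed, 1))  # 287082
```

Those outputs match the RFC's own test vectors - which is rather the point: the only secret in the whole scheme is the seed.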

If you steal the seed records, you only have one-third of the puzzle in getting to someone's login.  Not only do you need the seed (to calculate the next, unique password), but you also need to know the account it's tied to, as well as the 'something you know' part of 2-factor authentication, which is usually a PIN or a predefined password.  In the case of Defender, most customers use the AD password, which is usually better than a 4-digit PIN, but that's irrelevant for this conversation.  However, knowing someone has your seed record, as a user, ought to make you nervous.  If one of the two factors you rely on is compromised, then it's no longer 2-factor, now is it?

The next bit is a direct copy/paste from the email I wrote.  Vasco is one of our token suppliers, and the one that makes the Go tokens we most often use for demos.  So it's the vendor most people inside of Quest are familiar with, which is why I used them in the example below.

--

To add to this, let's say Vasco has a breach, and the attackers get both Vasco's source code and all their token seed records, including those of our clients.  And the breach is so bad, they get our purchase orders (from Vasco) and maybe even see which tokens may have been drop-shipped to our clients on our behalf.  That's pretty bad.

But none of that means you need to rip out and replace Defender.  Our client can simply say, "We no longer trust Vasco tokens, and will switch to [insert name of OATH token vendor]."  The Defender server stays intact; nothing needs to be changed, upgraded or patched.  The customer simply gets a new set of tokens, loads them up, and gives the tokens to their users for self-service registration.

In the case of RSA, theirs is a tightly coupled system, where tokens, algorithms and authentication servers are all tied together.  If you don't trust RSA, and feel they've been compromised, then you need to throw out the whole lot.  In our case, Defender simply cannot be compromised in that way, as we don't do anything secretly or have proprietary algorithms.  OATH is published.  RADIUS is published.  If you trust those standards, then Defender is fine.  The same goes for AD - we store our data there, but it's a "pseudo-standard" in that it's ubiquitous, and there are no deep dark secrets about it.

This is why I keep saying that our approach is more sound and, ultimately, more trustworthy.  Not because our code is better (it may be, but RSA has some clever folks, too) but because our entire approach/methodology does more to protect the client from a breach such as this.

There's still a cost, and you have to throw out the tokens, but you don't need to learn a new toolset and plan a migration from one auth system to another.  Especially in a stressful time when your CISO is breathing down your neck, asking how much of a threat this is.

--

That was really it.  I don't have much more to add at this point.  Other than maybe that we have a really cool new token from Yubico called a YubiKey.  You should ask for a demo - it's something you have to see in person.  USB-based, but works on Windows, Macs, Unix, etc.  Very clever, that token.

28Mar/11

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part II

Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago.  That still amazes me.

Note 2: this is a very rough, stream of consciousness blog entry.  Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Initial assessment and baselining
--
Let's dive right in and get started.

To begin, an assessment needs to be performed to determine all the things that will be part of the consolidation.  Presumably, this has already started as an initial plan is due to be submitted to the OMB for review and budgeting. However, everyone knows adjustments can and will be made. So I'd suggest you do an assessment assuming every item will be questioned.

There are lots of ways to survey what you have, but looking to Quest for help with that may not be something you thought to do.  Well, you should.  The reason is that we have a lot of tools covering your entire environment: from the operating system, to the databases and file servers, all the way to app and web servers as well as desktops.  And while we're not in the inventory management business, we can certainly hold our own if you need a list.

"What kind of lists can you provide," you ask? For starters, we can get you users and desktops. Nowadays, most users are in Active Directory, and most of their desktops and laptops are joined to AD. So you could use something as simple as Quest Reporter to pull a list of those objects out of AD.  Following the 80/20 rule, that should give you a good ballpark of your end-user environment. Need something a little more accurate? Then you'll need to do more work, but you'll get more results. You can either go with something like Desktop Authority to get a good idea of what is actually out at the desktop level, or you can fall back to your AD event logs and monitor login events over some time period with Quest Change Auditor for AD. In both cases, the products are sure to give you a lot more benefits beyond the initial assessment. And both Change Auditor and Reporter give you a good feel for your Windows server environment as well.

But the assessment is more than just a 'survey.' You cannot just make a nice clean inventory of everything you are responsible for, and leave it at that. It is critical to know -how- those systems are performing. In other words, you need to set a baseline, and you probably need to do it in 2 ways. The first way is through some measurements and metrics. Quest's Foglight platform is fantastic for end to end monitoring, and it can serve double duty by providing those initial statistics up and down your entire app stack.

Foglight can also provide those initial lists I mentioned above.  Need RAM, CPU and disk numbers off your servers? We can get those to you, and help with some capacity planning as well. And if you run Foglight long enough, you'll have some very good trending and usage data to use beyond the consolidation effort.

The second baseline to check is subjective: the users' perception of the current systems.  This wouldn't involve any Quest product; simply put together a quick, 5-minute survey of what the users think of the apps they use. There are many free and paid sites out there that can run such a survey for you, but I'd really encourage you to get this initial feedback. And if it starts to look grim, and you're surprised by the results, check out Quest End User Monitor to walk through the apps and see what the users are complaining about.

That's really it on the baseline side.  We can help with that initial assessment as well as provide initial metrics for how your environment is functioning.  Can we provide complete coverage of your environment?  Probably not, but the tools you'd use from us would continue to provide value beyond the assessment rather than being a throwaway once the assessment is complete. And wouldn't it be nice to be in a new environment but with a familiar toolset? I think your IT staff would say, "yes."

14Mar/11

The Advanced Persistent Threat and Cybersecurity Webcast

For those of you that tuned into today's webcast with Paul Harper and myself, thank you.  If you haven't seen it, it will be available shortly, and I will post the link back in this blog post.  In the meantime, if you'd like a copy of the slide deck I used today, here it is: 20110314_-_Insider_Threat_Webcast

8Mar/11

Et tu, brute?

It's evident throughout history - inside jobs. Aside from nuclear war and weapons of mass destruction, cyber attacks pose the single greatest threat to US security - and they are growing more and more difficult to prevent. One clear indicator of the threat is the sheer volume of breaches: cyber attacks on federal computer systems have increased more than 250% over the last two years, according to the Homeland Security Department. Federal computing resources are under constant threat - not only from the outside, but also from trusted partners and internal users. Cyber attacks are a clear and present danger, and the potential for both accidental and deliberate breaches of sensitive information is a growing concern. Innocent but careless employee actions can set the table for attacks by more malicious parties. In many cases, the threats are inadvertent, with users unwittingly introducing harmful viruses to your agency or allowing sensitive data to be leaked.  But whether or not there's malice, the damage from breaches can be great.

Join me for a discussion on Monday, March 14 @ 1:30 pm ET on ways to protect your environment from the inside threat.  We’ll talk about how you can not only improve your security posture, but also meet regulatory and statutory guidelines during audits and reviews.  Plus, you’ll also learn about forensics and tools you’ll need when a breach does occur to minimize the losses and downtime.

You can register here. I'm looking forward to a hearty discussion.

21Feb/11

Token Bloat, AD Bridge and Quest Privilege Manager

If you don't know what "token bloat" is, then I suggest you do some research.  Plenty has been written on it, and while I'll provide a brief description, others do a much better job of covering it.  The short version: when you log in with your Active Directory credentials, you get a Kerberos ticket (your identity - who you are) plus a PAC (Privilege Attribute Certificate - a list of all your security group memberships, by SID - what rights you have).  As a user gets more and more rights (through group membership), the PAC grows.  Eventually, it becomes big enough to affect logins and applications that use the AD Kerberos ticket (often called a token).  The effect is that things slow down, or just flat out break.
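To put rough numbers on "big enough to affect logins": Microsoft has published a rule-of-thumb formula for estimating token size (KB 327825).  The sketch below is my own rendering of it in Python - the function name and the example group counts are illustrative, and the formula simplifies some details (universal groups and SID history are counted differently depending on which domain they come from):

```python
def estimated_token_size(domain_local_groups: int, global_groups: int,
                         sid_history_entries: int = 0) -> int:
    """Estimate Kerberos access token size in bytes using Microsoft's
    rule of thumb: 1200 + 40d + 8s, where 'd' covers domain-local
    groups (plus SID history) and 's' covers global groups."""
    d = domain_local_groups + sid_history_entries
    s = global_groups
    return 1200 + 40 * d + 8 * s

# A user in 100 domain-local groups and 400 global groups:
print(estimated_token_size(100, 400))  # 8400
```

Against the historical default MaxTokenSize of 12,000 bytes, it doesn't take an absurd number of domain-local groups before logins start misbehaving.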

But how does this affect AD Bridge products, like Quest Authentication Services?  Unfortunately, in much the same way.  If you're a member of 10,000 security groups, it's going to take some amount of time to get through that list.  That doesn't change just because you've logged into a Unix, Linux or Mac machine.  At best, you can set a flag to ignore the PAC if all you want is authentication, but if you are a member of 10,000 groups, someone probably put you in those groups for a reason (note: 10,000 is hyperbole, and if you really are a member of 10,000 groups, you have some serious AD design problems).

And an increase in the number of groups is often a by-product of deploying an AD bridge solution.  We have other ways to alias out and re-use existing groups, and you can certainly use local groups, which are not in the PAC, but who wants that?  In some cases, you want those added groups.  You need those added groups.  A Unix sysadmin is not the same as a Windows administrator.  And an Oracle DBA often has different rights, and requires different groups, from your SQL Server admins, so simply generalizing your groups isn't enough, either.

But for customers that have QAS along with Quest Privilege Manager for Unix, there's a better way - one that provides a lot more flexibility and configuration options. Plus, you don't even have to have QAS.  You can do this with local accounts, LDAP, NIS, and any other users you have on Unix. I'll just stick with AD users because the topic of this post is 'token bloat', which is an AD-specific problem.

What is Quest Privilege Manager for Unix (QPM4U)?  It is a tool for Unix command control and I/O logging.  There are lots of other descriptions people use - "TiVo for your Unix machines" or "sudo on steroids" - though most fall short as a one-sentence pitch.  QPM4U allows very granular control over what rights users have on a particular machine, and that granularity includes options such as time of day and day of week, environment variables, and multiple group memberships. In fact, it can use any variable/attribute of the user, the machine and the environment.

And the last item I listed (multiple group memberships) is where things get interesting.  You can actually write a rule that says, "if a user is a member of OhioUsers, and a member of SalesTeam, then the user can access files in the /SalesDocs/Ohio folder."  This is interesting because, in the Windows world, most people would be added to the SalesTeam group and the OhioUsers group, and then a third group would get created, called 'OhioSalesTeam', just to grant access to a similar folder.  And even with sudo, the same thing would happen.  Sudo can only check for one group membership, and you're either in the group or not.

This is where QPM4U can let you use existing groups, and perhaps other attributes, to determine whether users can access resources or run certain commands.  In fact, you could simply query AD (check out the 'vastool search' command), find out the user's job title (Account Manager) and their state (Ohio), and make the determination that way.  Which means that no groups are needed at all.  This is an extreme example, and not one I'd recommend, but hopefully you see how it could help prevent token bloat.  In addition, you can start adding in other conditions, such as time of day (if the user is a DBA, and it's after 7 PM, then they can run the 'backup database' command), allowing for even more flexibility and control.
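To make the "no groups needed" idea concrete, here's a hypothetical sketch in Python (not QPM4U syntax): the decision is made purely from user attributes, the way a 'vastool search' lookup would return them.  The attribute names ('title', 'st') mirror common AD/LDAP attributes, and the function name is mine:

```python
def can_access_ohio_sales(user: dict) -> bool:
    """Attribute-based check: no purpose-built 'OhioSalesTeam' group,
    just the job title and state already stored on the user object."""
    return user.get("title") == "Account Manager" and user.get("st") == "Ohio"

print(can_access_ohio_sales({"title": "Account Manager", "st": "Ohio"}))  # True
print(can_access_ohio_sales({"title": "Engineer", "st": "Ohio"}))         # False
```

The same shape extends naturally to the other conditions mentioned above - a time-of-day rule is just one more boolean clause in the return statement.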

Now, I've been writing all this using pseudo code, and not showing you how simple this config would actually be to put in place.  So what would be needed in the Privilege Manager config file for that initial scenario? Well, thanks to my colleague, Paul Harper, it would look something like:

# check all the arguments to see if the file path is the protected one
c = 0;
protected_dir = '/SalesDocs/Ohio/';
prot_dir_len = strlen(protected_dir);

while (c < argc) {
   if ((strlen(argv[c]) >= prot_dir_len) && (strsub(argv[c], 0, prot_dir_len) == protected_dir)) {
      if (ingroup(user, "SalesTeam") && ingroup(user, "OhioUsers")) {
         accept;
      }
      else {
         reject;
      }
   }
   c = c + 1;
   # check other conditions here as needed
}

To be fair, this is a simplistic script with plenty of room for improvement, but it gives you an idea of the types of things that can be done with Privilege Manager - especially checking for multiple group memberships without having to create a brand-new group just to manage access to a resource.

There you go, Jeff. Hopefully this post helps you - give me a call if you've read this far down.  🙂

26Jan/11

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part I

[Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago.  That still amazes me.]

[Note 2: this is a very rough, stream of consciousness blog entry.  Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.]

The current administration has developed the Federal Data Center Consolidation Initiative (FDCCI), with every agency and department falling under scrutiny. It is mandated by the administration as a way to cut costs as well as secure the environment. This document does not seek to go into detail about the FDCCI, but to outline how Quest Software can help every agency and department with the overall initiative.

The focus of the initiative is on physically collapsing all data centers down to a much smaller number. Of course, this is not just an exercise in putting all the servers into a single room.  This is an opportunity to consolidate and update the environment, as well as potentially modernize key systems and platforms.  And this is where Quest can help.

At a high level, the entire consolidation will involve the following steps:

  1. An assessment of the current environment, determining the disposition of every item to be included or excluded from the consolidation
  2. As part of the assessment, it is good to determine and establish baselines for services
  3. Prep work to get platforms or systems ready to be moved to their new environment - this includes any procurement and training that needs to occur
  4. The actual movement itself, which may be as simple as putting the same server into a new location or as complicated as migrating to an entirely new system on new hardware, operating system, platform, etc
  5. Tuning and optimization of the systems to their new environment
  6. Post-mortem review and on-going monitoring and maintenance

Depending on the agency or department, the age and condition of the systems, the number of users and administrators involved and the number of data centers, the time for each step will vary.

In all of this, personnel will also be affected. Not only may IT staff have a new location to go to, but they will also need new skills and tools. And with this, it would be good practice to audit and adjust access controls to make sure additional rights are not unduly granted.

Over the next few weeks, as time permits, I will be exploring the different areas of this sort of project, and tying it back to solutions that Quest Software provides. The hope is to give a good, solid overview of the help available from Quest in making this move.  Many of the topics covered will also apply to other situations, and later on, I'll make the case that many of our solutions are "dual purpose" meaning they may be used by one set of users for a particular task, but that a different set of users may also benefit from the same tools in a different capacity.

First off, it would be helpful to review what Quest's goals are in providing the software that it writes. At a high level, Quest is a systems management company.  We do not make many of the platforms that people traditionally think of when they think of an enterprise software company: databases, web servers, operating systems, etc. For example, no one needs another database platform, at least not from us.  But organizations do need to get a better handle on their existing databases, database servers, and the systems that use them. And that's where we come in.

We focus on making your existing infrastructure easier to use. This is not to say all our tools are simple. Yes, we have "Simplicity At Work" as our tagline, but we do some pretty complicated things. And while the overall message does make sense to someone familiar with the problems we solve, these are not "Fisher-Price" tools. The common misconception is that because we have some products with the word 'wizard' in them, they must be easy. No - what it means is that we make whatever the wizard works against easier to use.

Lastly, it would probably be good to give you an idea of who I am and what I do. I am currently the CTO for Quest's Federal Public Sector group, which means I keep an eye on all things Federal, working with clients and partners to help them get the most out of Quest while also working internally to make sure our solutions align with my customers' needs. We have many other people with similar roles; however, I don't have a single area of focus.  We have over 150 products, and anywhere from 3-8 different focus areas (depending on who you talk to). So I try and stay current with everything we have to offer.

And though I've worked with Public Sector clients since I arrived at Quest over 5 years ago, I've also worked for Quest in the UK as well as on the commercial side in North America. In those instances, my remit was Identity and Access Management as well as our overall Windows Management solutions. Before all that, and before joining Quest, I was a developer, a DBA, a Director of IT, a university instructor and a whole host of other things.

That's enough for now, and should give you a good enough idea of where this blog is headed over the next few weeks. If you have any questions or comments, don't hesitate to write me (dimikagi -at- federalcto.com) or post them up below.

13Jan/11

Upcoming posts

I have several posts queued up that are quite detailed about how Quest can fit a data center consolidation strategy.  But in the meantime, I have this post on a related site until those are published:

http://www.idmwizard.com/2011/01/13/securing-usb-and-cd-drives-with-temporary-group-membership/

As the title suggests, I talk about how to block USB mass storage and CD-ROM drives on a desktop, complementing that with a Quest product called ActiveRoles Server to add and remove the machine from groups on a temporary or ad hoc basis.

10Jan/11

I think it’s finally working!

I've been wrestling with this site getting online, and I believe this will be my first successful post.  We'll see in a minute.

Copyright (C) 2010-2011 Dmitry Kagansky – All opinions expressed are those of the respective author and do not reflect the views of any affiliate, partner, employer or associate.