federalcto.com The Blog of a Federal CTO

9 Nov 2011

Meant to mention

I put up a post on 2-factor authentication here: http://www.idmwizard.com/2011/10/31/quest-on-2-factor-and-3-factor-authentication/ . It can be considered an addendum to the last post I had here.

Going forward, this blog is going to be more about my travels and observations in my job as Quest's Federal CTO, and the IDM Wizard site will be for the more technical, identity-focused activity. With any luck, that site may have a new contributor as well, which is why I'm looking to revive it.

13 Sep 2011

Smartcards, certificates & OS X: Does Lion Roar or Meow?

I spent some time the week before last dealing with smartcards, certificates, Macs and OS X Lion (10.7), specifically. My initial take on Lion was this: I can't believe Apple has taken this direction for the enterprise user.

Before going any further, I should come clean and say I love Apple products. As a consumer, they do what I need, and work as expected. The fact that I've not had to rebuild my wife's MacBook in its entire life (more than 3 years) is remarkable. That all changed with OS X Lion. I downloaded it, and was not impressed from the get-go. Just having to hunt down the DMG file so I would have a copy of the installer was irritating, and things continued from there. But I work with technology, need to keep up, and wanted to check out the buzz.

I also did some initial research, but articles like this one didn't help: http://www.macworld.com/article/161493/2011/08/lion_enterprise.html . This guy clearly doesn't run an "enterprise shop" and probably considers an office of 10-15 people to be representative of an enterprise. But I bit the bullet, upgraded my personal MacBook Pro, and have been wincing since. But that was as an end-user. On the enterprise side, things turned out to be worse. The AD Plug-in - broken. The Login Screen - completely screwed up. Apple broke everything when it came to Directory Services and the Login page. And to top it all off, smart card support has been deprecated! Really? This is enterprise-ready?

Before we go any further, here's a quick review of how smart cards worked in OS X prior to 10.7:

  1. A card is inserted into the reader
  2. The reader uses the tokend file (ships with the OS) as a driver to read the certificate off the card
  3. The login screen is re-drawn, the username is pulled off the card (to display on the screen), and the screen is changed to only display the PIN request field
  4. The user enters the PIN tied to the card
  5. The certificate (issued by a Certificate Authority aka CA) is confirmed to be valid
  6. The CA's certificate is also checked and confirmed to be valid, so its certificate (and any others up the chain) has to be in the OS X Keychain
  7. The user is allowed to log in

Apple no longer ships tokend files (step #2), and the default login window is no longer refreshed by a smartcard insert (though it does still flicker). We have since had a very helpful conversation with an Apple architect, and will be making changes to make it all work properly in Lion, but in the meantime, it's possible to get it all working with some elbow grease and lowered expectations.
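If you want to see whether your machine has any tokend bundles at all, a quick peek in Terminal will tell you. This is just a sketch, and it assumes the usual locations (Apple's bundles used to live under /System/Library/Security/tokend, and third-party ones go under /Library/Security/tokend):

# list any tokend bundles available to the OS - on a stock Lion install, expect little or nothing here
ls /System/Library/Security/tokend /Library/Security/tokend 2>/dev/null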

So . . . how did we do it? Well, we started with the QAS Smartcard guide that you can find with the standard QAS documentation. And there's a whole lot of good stuff there about smartcards and certs, but here's what I boiled it down to:

1. Go to http://smartcardservices.macosforge.org/ and download the latest package to get the tokend needed to read the specific cards you are using. Apple has open-sourced it all, and is relying on non-Apple folks to maintain it. If your card is not there, you will need to ask your smartcard vendor to give you a tokend file (basically, a little library that allows you to work against the specific card). We were lucky enough to be using CACs, so the tokend we needed was in the beta2 batch from Aug 19, 2011.

2. Once the tokend is installed and in place (and QAS is already up and running), run the following commands in Terminal to set up QAS to use the smartcard:

sudo /opt/quest/bin/vastool smartcard configure \
pkcs11 lib /usr/libexec/SmartCardServices/pkcs11/tokendPKCS11.so

# just a test with a valid smartcard to make sure it works
sudo /opt/quest/bin/vastool smartcard test \
library /usr/libexec/SmartCardServices/pkcs11/tokendPKCS11.so

sudo /opt/quest/bin/vastool smartcard configure macos

(note that tokendPKCS11.so is the name of the file we used - your mileage may vary)
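If your mileage does vary, you can list what the tokend package actually installed and plug that name into the commands above (assuming your package puts its libraries into the same SmartCardServices folder ours did):

# see which pkcs11 libraries are actually installed
ls /usr/libexec/SmartCardServices/pkcs11/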

3. The client I was working with had 3 CAs in their certificate chain, so all 3 CAs had to be imported into Keychain. What that means is that Keychain had to have a copy of every CA cert in the user's chain. We actually used a Windows host to export those certificates in .cer format (.crt would work as well) and used the QAS Files policy to get them over to the /etc/opt/quest/vas/ folder. We then imported them onto the Mac using the following commands:

sudo /usr/bin/security add-trusted-cert -k \
/Library/Keychains/System.keychain -r trustRoot -d /etc/opt/quest/vas/enterpriseCA.crt

sudo /usr/bin/security add-trusted-cert -k \
/Library/Keychains/System.keychain -r trustRoot -d /etc/opt/quest/vas/layer1.crt

sudo /usr/bin/security add-trusted-cert -k \
/Library/Keychains/System.keychain -r trustRoot -d /etc/opt/quest/vas/layer2.crt

We put these 3 commands into a script, deployed it with the GPO Script policy that ships with QAS, and set it to 'run once' after the machine is joined.
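For reference, here's roughly what that 'run once' script looked like. Treat it as a sketch: the folder and trust settings come straight from the commands above, the script assumes it runs with root privileges, and it trusts every .crt file in that folder, so only drop certs there that you actually want trusted.

#!/bin/sh
# import and trust every CA cert the QAS Files policy delivered to /etc/opt/quest/vas/
for CERT in /etc/opt/quest/vas/*.crt; do
    /usr/bin/security add-trusted-cert -d -r trustRoot \
        -k /Library/Keychains/System.keychain "$CERT"
done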

After that, we were able to log in, but not without some trial and error. It turns out that if we had our card in the reader, and everything configured before the login window was drawn, we had the correct username read off the card, and a single field (labelled 'Password') that allowed us to enter a PIN and log in. If we inserted the card after the login screen was already drawn with 2 fields (labelled 'Username' and 'Password'), we could not use the smart card to log in, but could still get in with the AD credentials, or a set of local credentials.

That was it. Most of it was what you'll find in the install guide, along with some additional configuration and troubleshooting sections. Also, don't think that smartcards can be bypassed with QAS. In the testing that we did, we left it open to allow username/password and local accounts to log in. But we didn't have to. QAS allows you to configure an entire host to require smart cards for login by editing vas.conf. Add the following directive:

require-smartcard = true

under the [vas_macos] section. You can set this option for multiple machines using the QAS Configuration group policy extension. In addition, you can enforce this on a user-by-user basis by setting the 'SmartCard Required For Interactive Login' option on each user using Active Directory Users and Computers (ADUC).
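Put together, the relevant bit of vas.conf ends up looking something like this (a minimal sketch; your real file will have plenty of other settings in it):

[vas_macos]
# require a smartcard and PIN for interactive logins on this host
require-smartcard = true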

(edited 2011-09-13 21:42 GMT to correct some formatting problems)

19 Aug 2011

VDI – nothing like watching the lightbulb go on!

I was at a conference for a Federal agency this week, and had a fantastic experience with a (prospective) customer. We were talking about this and that, but the conversation worked its way around to mobile computing. And while agencies are announcing "mobile initiatives" left and right, the reality is that everyone is still really concerned about what BYOD (Bring Your Own Device) means.

My friend, Joe Baguley, has been going on about "the consumerisation of IT" for a while (and he's English, so it's an "s" and not a "z" in consumerisation) as have others, but the Feds work differently. It's not purely about savings, and things are a bit more measured. And that's one of the reasons that the Blackberry continues to hold on, and the iPad has made no movement yet. When the OS cannot be locked down through a central server, and policies cannot be set on applications and data, the device has a slim chance of making it in the Federal space.

But Quest have an interesting solution to the problem, one that took 5 minutes to demonstrate and turned on the lightbulb for a senior-level manager at this agency. It was vWorkspace. He had his iPad with him (running a beta of iOS 5, in fact - good to know it looks like it works there, too!), and we started talking about how he can control data and access. Thankfully, I had a demo account on a set of servers that a colleague had up and running (Rob Mallicoat - many thanks!), and he had a network connection on the iPad. Within 5 minutes, I was able to download the vWorkspace iPad app, configure the settings, and voila! I was showing him Visio running on an iPad in no time.

And the thing is, I tell people about it time and again, but until you actually see it working, you don't realize how cool it is. Even the sales guy in the booth lit up when he saw how easy it was, and what it did. That got this manager interested, and proved one of the core problems with VDI: users need to touch it and see it for themselves. It's not enough to give them docs and white papers, and even web demos don't cut it.

But that's not enough; what's unique is how Quest does it. We don't care if you have some apps and desktops on VMware, and others on Hyper-V. You could even be running Terminal Services! We combine all of that, and give you a single interface. As a user, you don't know (and shouldn't care) how an app or a desktop is delivered. We even have a slick, "local VDI" option, thanks to MokaFive.

That's what got the manager really excited - he realized he didn't have to be tied to a single VDI option or vendor, and he can even provide multiple environments all through 1 app, delivered to his users' desktops. Plus, the fact that it can all be remote means he could even give out iPads, but keep all the data in his data centers.

So if you have an iPad or Android device, ping a Quest person, and see if they can get you access to a demo system. Or ping me if you're with a Federal agency. Because this is something you simply need to try, and not just watch or read about.

20 Jul 2011

Federal CloudCamp in DC

Ken Cline and I just finished up a presentation at FOSE for the Federal CloudCamp. The slides we used can be found here: http://www.federalcto.com/20110720-CloudCamp.pptx . I'm really pleased with how things have turned out so far. It's not exactly the CloudCamp I was expecting when I initially contacted Dave Nielsen to do one, but I definitely think it's helped the Federal community, as well as advanced the notion of Cloud Computing within the Federal government.

If you attended, thank you for taking the time.

If you didn't attend, please consider going to the next one. You can find out more about CloudCamp at http://www.cloudcamp.org and you can definitely contact me as well, as I'm sure to attend, if not present.

29 Jun 2011

Wait! I thought the Feds only cared about smartcards . . .

Monday, I posted about how Quest's tokens allow users to program their own seeds. That prompted some internal questions of, "Why do you even care? I thought you focused on the Federal space, and the Feds only cared about PAP, or CIV cards?"

Well, yes. For the most part, the US Federal government does focus on CAC (Common Access Card - used by the DOD) and PIV (Personal Identity Verification - mostly used by the civilian agencies) cards. And you'll also hear about PIV-I (PIV Interoperable) being involved. In the case of CAC and PIV, a user has to file an application, and some Federal agency needs to sponsor the individual to obtain such a card. This usually includes a background check, or some sort of formal review, before the card is issued. PIV-I tried to lower this barrier by allowing non-Federal organizations (think government contractors, state governments, first responders, etc) to issue interoperable smartcards that are trusted by Federal systems.

However, PIV-I has a lower "assurance level," and often involves the same (or similar) sort of background check, just by a different organization. Assurance levels are set by NIST and are detailed in their Special Publication (SP) 800-63. You'll see there are 4 assurance levels, and PIV-I only gets to level 2 (really, it gets to level 3, but with a less stringent background check, it can only be considered level 2.5 at best). CAC & PIV strive for level 4, if not level 3.

Ok. So we've established that smartcards are the main vehicle for 2-factor authentication in the Federal government, but I still haven't explained why tokens (RSA and other ones) crop up. And this is because a token is also an acceptable form of 2-factor authentication (read SP 800-63, and you'll see them mentioned as 'one-time passwords'). The "identity proofing" is still required, but tokens are actually a lot more flexible for several reasons.

1. Tokens can be assigned to any user (part 1). In the case of smartcards, they are usually assigned to individual people, while tokens can be shared. In fact, with our tokens, users can share tokens but continue to use their distinct credentials (username and password) with the token. Which means that multiple admins can share a token to access a common system, but you can determine which admin logged in to do something with the particular token. This lets you continue to have 2-factor authentication, but also a "check-in/check-out" system for the particular token, allowing more control over the physical token (perhaps locked away in a safe or vault).

1a. Tokens can be assigned to any user (part 2). This is really a corollary to 1, which is that the token can be assigned to service or privileged accounts. Put in the same sharing (check-in/check-out) model for a token, tie the password to a password vault product, and you have some pretty solid security around your privileged accounts such as oracle, root, Administrator, and other 'non-named' accounts.

2. Tokens are independent of environment. With smartcards, you have to have a reader attached to the user's console. No reader, or a malfunctioning reader, makes authentication a bit more difficult (read: not possible). There are also situations where PKI simply isn't used. Older applications or platforms that do not make use of certificates cannot be changed quickly or easily. With a token, along with a username and password/PIN, you can continue to get 2FA, even in a scenario where a reader isn't available or practical, or the app is expecting only a username and password. There is still some amount of work to be done, but it's often easier than tying an app into an entire PKI infrastructure.

2a. Tokens are independent of environment. There are some cases where the app or platform is capable of using PKI, but it is sitting in a location/area/network that simply cannot reach the PKI infrastructure to which the certificate on the smartcard is tied. Not every system is actually on the internet (unbelievable, I know), which means tokens can provide access without requiring a centrally managed CA (certificate authority) to be present.

3. Tokens can be assigned much more quickly and easily. And this is really the crux of why tokens come into play in the Federal space. Smartcards require a centrally issued certificate to be put onto the card. In some cases, there is no time for the requests to make their way through the system to get a certificate, publish it to a CA (certificate authority) and card, and get the card to the user. In other cases, the user simply will not get through a background check (or is unwilling to get one), but has to be given access to certain systems. Yes, there are times when the Federal government has to deal with "questionable people," but they might as well make sure it's the right person, so 2-factor is still needed.

3a. Tokens can be revoked much more quickly and easily. Because the token is usually just some bit of plastic, it's easier to revoke and know that it won't be used in other ways. Most smartcards take a while to issue because they are also printed as a security badge, meaning that even if the certificate on the card is revoked, the card may still be usable to get physical access to a building or location. However, it's unlikely that an agency will let you in because you have a black piece of plastic on your keychain.

So, with all those reasons, tokens are not going away. Smartcards will continue to dominate, but there will continue to be a need for 2-factor authentication (2FA) using one-time passwords (OTP) in the Federal space.

27 Jun 2011

The RSA Breach saga continues . . .

I've discussed the RSA breach before, but had a very interesting conversation last week with a customer that was in a dilemma as to what to do. RSA have said they would replace some of the tokens, depending on the situation (protecting intellectual property, consumer-oriented products, etc), and this customer was pretty certain they were going to get new tokens from RSA, but didn't know if they wanted them. Nor did they want any other "standard" token that other vendors could provide, because that new vendor could be breached as well.

What this client really wanted was a token where he could program his own seed into it. I mentioned that our software tokens actually allow for this out of the box, and when you "program" a new soft token, the seed is automatically generated, which means that Quest never knows what the seed record is for any software token that we provide. For an example of this, you can actually see a recording of it here where the seed is generated, and then here where the seed is put into our Desktop token (there is no audio on either recording).

However, this wasn't as interesting to him as programmable hardware tokens. He had concerns about keys getting compromised while being transferred to the end user, and wanted to send a pre-programmed hardware device. At first, I didn't think we had anything like that in our arsenal; however, it turns out that one of our latest token models (the Yubikey token), as well as some of the other ones, already allows for user programmability! In fact, if you look at the link for the Yubikey token, and then scroll down, you'll see that some are programmable. It took our Defender Product Manager (Stu Harrison, who blogs on and off here) to point this out to me, even though I should have remembered this from earlier on.

Obviously, each token does come with a default seed that Quest will know about, but if there is concern about a vendor having the "keys to the kingdom," a programmable token puts the onus back on the organization, and keeps Quest out of the limelight. I don't want to discount the fact that this will take more manpower, but if you don't want your vendors to have your seed records, reprogramming the tokens is the only secure way of doing it. It only makes sense: as a security professional, you should not have to rely on RSA, Quest or any other organization when you can minimize the number of people that have access to the token seed records.

29 Apr 2011

Amazon in the Federal space? Absolutely!

I work for Quest Software, and focus on making sure that we are helping our Federal customers. So when one of our account managers asked me to meet with Amazon, I was a little skeptical. Amazon? Really? I mean, I know the Feds have the "Cloud-First" initiative, and there are some things here and there, but can we really work with Amazon? What can we do with an organization that doesn't do anything with most of the platforms we support?

There's no Oracle, no Microsoft, PeopleSoft, SAP and so on.  They don't even offer email that we can migrate to or from.

Well, it was an hour and a half well spent, and I am absolutely jazzed about some of the things we discussed. It turns out they are very, very serious about Security (with a capital S). In the Fed space, that is the number one objection to anything Cloud related. They not only went through some of the certs and reviews they'd had (even a SAS 70 audit, which I think pretty highly of, having been involved in one years back), but also their overall architecture and philosophy. They are keenly aware of the federal requirements that are out there, and have made a substantial commitment to making sure that their federal customers are able to use their solutions.

Yes, they had an outage last week, but if you have concerns about these things, you should talk to Amazon about what can be done to prevent it. The outage was Amazon's fault, but there are also options (that obviously cost money) which could be implemented to avoid such a thing happening. You can find a lot more info on the outage here.

In any case, it turns out they have a lot of options, going all the way down to a VPN-secured environment, if that is your requirement. And because everything is web-based, and often accessible via web services, I think there are a few solutions that Quest have that could be used with Amazon's platforms. The conversation ranged from monitoring, to management and provisioning of resources, as well as integration with internal systems, and potentially even things like SSO and access controls. Lou J (the Quest Account Manager) even learned about Cloudbursting.

I hope to explore those options in the coming months, especially with some specific Federal customers in mind.

4 Apr 2011

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part III

Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago. That still amazes me.

Note 2: this is a very rough, stream of consciousness blog entry. Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Preparation and migration
--
Preparing for the move, beyond the simple assessment, seems a no-brainer. But a migration? Really? Right before a move? And I say "yes." First, let's be clear and qualify that "migration" (to me) means a cutover to another platform. This could be email, user directory, database, etc., and it may be to a new version of some current software, but it's definitely getting off the current version. The idea is to make the software, and the version of that software, as current as possible before you actually move to a new location/environment. The only thing this excludes is the move from physical to virtual. That comes later, and I'll explain why then.

There are several reasons for you to consider doing a migration before consolidating environments. First, and foremost, the migration is going to happen sooner or later, and aren't you better off doing the migration in a comfortable and stable environment instead of your new one? Plus, the migration may actually shake some things out and make the consolidation easier. For example, if you use QMM to migrate mailboxes from an old version of Exchange to 2010, or use QMM for AD to restructure your Active Directory environments, you may actually find that there are many users, mailboxes, groups and other objects that could be deleted/abandoned.

The same goes for databases. If you're running an old version of Oracle for your databases, it's time to cut over, and see what features and benefits you get. And we even have a tool that lets you mix versions to replicate data while you make this sort of move, so it's not a jarring, "all at once" process.  That tool, BTW, is Shareplex.  And it also lets you replicate across mixed versions of databases, which is pretty cool.

But why else should you do the migration before starting the actual move? Well, frankly, because support is easier. Sure, you can migrate a Windows 2003 server, or an Oracle 9i database into your new environment, but if there's a problem, what will the vendor tell your team? Most likely, they'll tell you to upgrade to the latest version.

It's not widely discussed, but the reality is that most software companies want you on the latest and greatest version of their software when you call support. It's usually the version their developers are currently using as the basis for the next release. It's the version that support is working with most, and one that they have set up locally to recreate your problems. One or two versions back is often OK, but if you're running something more than 2-3 years old, I think you're asking for trouble. Get to the latest versions however you can, because you don't want to consolidate and move old software around.

Another reason is your staff's personal development. I've run IT groups in the past, and the most common question was always, "when are we going to get to the latest version of X?" where X was some database, operating system or programming language. If you are the CIO at a federal agency, your staff knows that data center consolidation is coming, in some form or fashion. You want them ready and energized for the task. Letting them get to the latest version of whatever software they work with will excite them, and you'll have a happy crew moving into the consolidation.

Now, I did mention that cutting over to a virtual environment should be last. The reason is that this is really the same as a hardware move. No matter what hardware your environment is using, something is sure to change. And your software may react adversely. Plus, if you couple that with a version upgrade, you are changing a lot out from under your user base as well as IT staff. The idea is to minimize risk, and that just doesn't cut it.  So do the "migration" first, get it settled, and then cut over hardware (which can be P2P or P2V).

And if you're contemplating a P2V move, you should definitely check out vConverter.  Not only does it work with P2V, but also V2V (you may want to switch hypervisors, or try out multiple hypervisors with the same image), and even V2P in case you absolutely have to back out.  Or you may even want to switch hardware, using the move to virtual as a stepping stone.

Finally, if you upgrade, migrate and virtualize in a single move, how do you know where you got performance gains or losses? If you read my last post on this topic, you'll know I propose baselining before starting.  The only way to do that is to start with a known point, but then make incremental moves so you can collect more information on what impact each part of the upgrade, migration and consolidation has on your environment.

30 Mar 2011

This is why we have standards

Blah, blah, blah

RSA breached

Blah, blah, blah

The last week or so has been really interesting.  Yes, one of the hot topics is the RSA breach.  I'm not going to link to it - if you haven't heard by now, I'll wait until you google it, and then come back.

For me, the interesting part was that it was an APT (Advanced Persistent Threat).  It's a hot topic in the Fed space, but most on the commercial side have never heard of it (want to know more about APT?  Check out this 30 minute session I did a few weeks ago).  However, our internal discussions started going down the "are we vulnerable to this same attack with our Defender product?" path.  I replied with some glib comment saying, "No.  Duh!" and went on.  Not really, and it was much more pleasant, but it was definitely short on details as to why.  But the conversations continued.  Now, for someone on the outside, I probably would have given a more detailed answer; however, I thought it was obvious that being standards-based, as we are, we avoid this sort of attack simply by following the rules instead of creating our own.

Well, last night, the questions continued, so I put together much of what is below in an email to explain.  The gist of this is that Defender supports OATH, which means that we can use any OATH token to provide 2-factor authentication, and the algorithm used to calculate the next "random" value is a standard.  By using a standard, a breach of the code simply isn't an issue.  Why?  Because the code is already public.  There's nothing to hide, so there's nothing to steal.  And what that means is that if there's a problem, it is with the token seed records.
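If you want to see just how public it all is, here's a quick sketch using the open-source oathtool utility (my example, not a Quest or Defender tool), generating one-time passwords from the published RFC 4226 test seed:

# the RFC 4226 test seed ("12345678901234567890"), hex-encoded
SEED=3132333435363738393031323334353637383930

# event-based (HOTP) passwords for the first two counter values
oathtool --hotp --counter=0 $SEED
oathtool --hotp --counter=1 $SEED

# time-based (TOTP) password for the current 30-second window
oathtool --totp $SEED

Anyone who runs this gets exactly the values a token programmed with that seed would produce, which is the whole point: the secret is the seed, not the math.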

For those that don't know, one-time password tokens all operate on a pre-defined seed.  That seed is used to calculate the next password by the device.  And when I say device, I include software tokens, and any other "thing" that provides a 1-time password.  RSA does it, we do it, everyone does it.  That seed is unique to the device, and kept secret by the owner of the token. Here's where things become interesting.

If you steal the seed records, you only have 1/3 of the puzzle in getting to someone's login.  Not only do you need the seed (to calculate the next, unique password), but you also need to know the account it's tied to, as well as the 'something you know' part of 2-factor authentication, which is usually a PIN or a predefined password.  In the case of Defender, most customers use the AD password, which is usually better than a 4-digit PIN, but that's irrelevant for this conversation.  However, knowing someone has your seed record ought to make you, as a user, nervous.  If one of the two factors you rely on is compromised, then it's no longer 2-factor, now is it?

The next bit is a direct copy/paste from the email I wrote.  Vasco is one of our token suppliers, and the one that makes the Go tokens we most often use for demos.  So it's the one most people inside of Quest are familiar with, which is why I used them in the example below.

--

To add to this, let's say Vasco have a breach, and the attackers get both the source code from Vasco and all their token seed records, including those of our clients.  And the breach is so bad, they get our purchase orders (from Vasco) and maybe even see which tokens may have been drop-shipped to our clients on our behalf.  That's pretty bad.

But none of that means you need to rip out and replace Defender.  Our client can simply say, "We no longer trust Vasco tokens, and will switch to [insert name of OATH token vendor]."  The Defender server stays intact; nothing needs to be changed, upgraded or patched.  The customer simply gets a new set of tokens, loads them up, and gives the tokens to their users for self-service registration.

In the case of RSA, they are a tightly coupled system, where tokens, algorithms and authentication servers are all tied together.  If you don't trust RSA, and feel they've been compromised, then you need to throw out the whole lot.  In our case, Defender simply cannot be compromised in that way, as we don't do anything secretly or have proprietary algorithms.  OATH is published.  RADIUS is published.  If you trust those standards, then Defender is fine.  The same goes for AD - we store our data there, but it's a "pseudo-standard" in that it's ubiquitous, and there are no deep, dark secrets about it.

This is why I keep saying that our approach is more sound, and ultimately, more trustworthy.  Not because our code is better (it may be but RSA have some clever folks, too) but because our entire approach/methodology does more to protect the client from a breach such as this.

There's still a cost, and you have to throw out the tokens, but you don't need to learn a new toolset and plan a migration from one auth system to another.  Especially in a stressful time when your CISO is breathing down your neck, asking how much of a threat this is.

--

That was really it.  I don't have much more to add at this point.  Other than maybe that we have a really cool, new token from Yubico called a Yubikey.  You should ask for a demo, but it's something you have to see in person.  USB-based, but works on Windows, Macs, Unix, etc.  Very clever, that token.

28 Mar 2011

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part II

Note 1: the bulk of this blog post was done on an Apple iPad - I point this out not because of a fascination with the iPad, but because of the fact that such long documents were not readily possible from a mobile platform only a few years ago.  That still amazes me.

Note 2: this is a very rough, stream of consciousness blog entry.  Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Initial assessment and baselining
--
Let's dive right in and get started.

To begin, an assessment needs to be performed to determine all the things that will be part of the consolidation.  Presumably, this has already started as an initial plan is due to be submitted to the OMB for review and budgeting. However, everyone knows adjustments can and will be made. So I'd suggest you do an assessment assuming every item will be questioned.

There are lots of ways to survey what you have, but looking to Quest to help with that may not be something you thought to do.  Well, you should.  The reason is that we have a lot of tools, and they cover your entire environment: from the operating system, to the databases and file servers, all the way to app and web servers as well as desktops.  And while we're not in the inventory management business, we can certainly hold our own if you need a list.

"What kind of lists can you provide," you ask? For starters, we can get you users and desktops. Nowadays, most users are in Active Directory. And most of their desktops and laptops are joined to AD. So you could use something as simple as Quest Reporter to pull a list of those objects out of AD.  Following the 80/20 rule, that should give you a good ballpark of your end-user environment. Need something s little more accurate? Then you'll need to do more work but get more results. You can either go with something like Desktop Authority to get you a good idea of what is actually out at the desktop level.  Or, you can fall back to your AD event logs, and monitor login events over some time period with Quest Change Auditor for AD. In both cases, the products are sure to give you a lot more benefits beyond the initial assessment. And both Change Auditor and Reporter give you a good feel for your Windows server environment as well.

But the assessment is more than just a 'survey.' You cannot just make a nice clean inventory of everything you are responsible for, and leave it at that. It is critical to know -how- those systems are performing. In other words, you need to set a baseline, and you probably need to do it in 2 ways. The first way is through some measurements and metrics. Quest's Foglight platform is fantastic for end to end monitoring, and it can serve double duty by providing those initial statistics up and down your entire app stack.

Foglight can also provide those initial lists I mention above.  You need RAM, CPU and disk numbers off your servers? We can get those to you, and help with some capacity planning as well. And if you run Foglight long enough, you'll have some very good trending and usage data to use beyond the consolidation effort.

The second baseline to check is subjective: the users' perception of the current systems.  This wouldn't involve any Quest product; instead, simply put together a quick, 5-minute survey of what the users think of the apps they use. There are many free and paid sites out there that can run such a survey for you, but I'd really encourage you to get this initial feedback. And if it starts to look grim, and you're surprised by the results, check out Quest End User Monitor to walk through the apps and see what the users are complaining about.

That's really it on the baseline side.  We can help with that initial assessment as well as provide initial metrics for how your environment is functioning.  Can we provide complete coverage of your environment?  Probably not, but the tools you'd use from us would continue to provide value beyond the assessment, rather than being a throwaway once the assessment is complete. And wouldn't it be nice to be in a new environment but with a familiar toolset? I think your IT staff would say, "yes."

Copyright (C) 2010-2011 Dmitry Kagansky – All opinions expressed are those of the respective author and do not reflect the views of any affiliate, partner, employer or associate.