federalcto.com The Blog of a Federal CTO

20 Jul 2011

Federal CloudCamp in DC

Ken Cline and I just finished up a presentation at FOSE for the Federal CloudCamp. The slides we used can be found here: http://www.federalcto.com/20110720-CloudCamp.pptx . I'm really pleased with how things have turned out so far. It's not exactly the CloudCamp I was expecting when I initially contacted Dave Nielsen about doing one, but I definitely think it's helped the Federal community, as well as advanced the notion of Cloud Computing within the Federal government.

If you attended, thank you for taking the time.

If you didn't attend, please consider going to the next one. You can find out more about CloudCamp at http://www.cloudcamp.org and you can certainly contact me as well, as I'm sure to attend, if not present.

19 Jul 2011

The battle for online taxes – the devil is in the details

Let me start by saying that I work for Quest, and we have no formal partnerships with Amazon, nor do we compete in any way that I can tell. Personally, I happen to be a satisfied (retail) customer of Amazon, but have done very little with their IT/online services, beyond putting a bunch of MP3s in their Cloud Drive.

The purpose of this post is to get everyone thinking about the complexity of online tax collection. Everyone sees it as a black and white issue, and it's not. It's not just the retailers "protecting their customers," and it's not just a single state or county with a revenue shortfall that needs this money. There are a lot of nuances that people do not consider when making their arguments in either direction. Personally, I'm on the fence as to whether online businesses ought to collect local taxes, but I think my experience can help shed some additional light on the topic as a whole.
--------
I had a colleague on the State & Local Government side send me an article about how Amazon needs to just "man up" and help the states. The full title and link for the article is:
Why Amazon is winning online retail and should fold on this silly sales tax fight

One of the last paragraphs of the article states:

"Rather than fighting it out on a state-by-state basis, and yanking on the incomes of their affiliates while they’re doing it, Amazon needs to man-up and do what’s right."

And my response to the whole article was, "It's an interesting, if light, article." I went on to say that the author, whilst championing the states' case for collecting online taxes, completely overlooked the real problem with doing this. The problem is not just that consumers wouldn't like the 5-10% hike in prices. The problem is that it would drive up costs considerably, and take away any competitive edge Amazon has. David Gewirtz overlooks a key financial factor that is a huge obstacle for anyone collecting taxes on online sales: the cost of staffing and maintaining a "Sales and Use Tax" department. What's that, you say? Why would you need a "department?" Can't all this be automated?

Well, not quite. For a retailer of Amazon's size, you're talking about hiring dozens of tax analysts and clerks to handle this, because there are thousands of jurisdictions. And because you're talking about thousands of jurisdictions, with millions of different rules, codes and policies, often open to interpretation, you need humans. You would need to add more labor (and supporting systems) to deliver the same products and services. I know, as I was one of these humans at one point.

In a previous life, I worked for a company called Tax Partners as a Data Development Manager. Tax Partners has since been bought by Thomson, and it's impossible to find a nice, clean link on their site for what Tax Partners did, but what they did was manage outsourced Sales & Use Tax filings. These were often for companies that deal with consumers in the retail, telecom, automotive, travel and other sectors. These were Fortune 1000 companies that had a presence in hundreds or thousands of locations. And my job was to manage the development team that wrote systems to take each client's tax data and normalize it to fit into our system, which was then used by our internal tax analysts.

Now, here's where the details come in, and I'll use a single client as an example. Imagine you're a telecommunications company, and you provide cell phone service nationwide. You have to collect a ton of taxes, and all those taxes are a liability. You have to pass them on to the respective jurisdictions, so you really don't want to hang onto this money. The short-term interest sounds interesting (pun intended), but it's not nearly enough to justify having to deal with this money. So while you collect this money, it is only a burden to you, and you really don't want anything to do with it. I can tell you that one large US telecom was paying $80-95 million per month in taxes. However, that money had to be divided among 5,000-9,000 different tax returns! Just try to wrap your head around those numbers, on a monthly basis, for one client.

In some cases, the returns would total as little as $5 or $10, but the company was still required to submit them. And everything had to be accounted for, with the money divided properly among the various jurisdictions in a state, even though you collect a lump sum from your customers. Oh . . . and those returns . . . often, they were a specific form, written by some policy maker, that said the form, in that exact format, must be submitted on the 10th of the month after the close of business. And perhaps even in blue ink! Now spread that out over more than 9,000 jurisdictions. And if you don't file, you not only get hit with interest, but a potential penalty. And that penalty may be $500 for a $10 return.
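To make the scale concrete, here is some back-of-the-envelope arithmetic, using the rough figures above (illustrative only, not actual client data):

```python
# Rough scale of the filing burden, using the approximate figures above
# (illustrative only -- not actual client data).
monthly_tax = 90_000_000   # roughly $80-95M collected per month
returns = 7_000            # roughly 5,000-9,000 returns filed per month

print(f"average per return: ${monthly_tax / returns:,.0f}")  # ~$12,857

# The average is misleading: a handful of returns remit millions, while
# thousands remit $5-10. And a missed $10 return can still draw a $500
# penalty -- 50x the tax owed.
small_return, penalty = 10, 500
print(f"penalty-to-tax ratio on a small return: {penalty / small_return:.0f}x")
```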

So, while each jurisdiction wants its money, they're not exactly the easiest group to work with. It's one thing when you're a business with a physical presence in a location, and know what's going on in that location, but when you sell to people in places you've never heard of, it's rather hard to keep up with the rules and regulations each jurisdiction puts out. There's a cost to that, and it's not as easy as it is for a company in the UK, which simply adds the same amount of VAT to everything. My colleague added:

"The states believe there is a river of tax money flowing by and they aren’t dipping into the stream.  They’re very thirsty right now."

They may be thirsty, but they're not very accommodating. If every jurisdiction used a common form, accepted electronic feeds and transfers, and provided its rate information in an easy-to-use manner, it might be feasible. But the way things stand now, the cost (to a business) of collecting online taxes is very high, and there's one more thing. I mentioned before that if you get the taxes wrong, you could be facing a penalty. That penalty is often arbitrary, set by a judge in that very jurisdiction, which means the odds are stacked against you, even if you make a good-faith effort to help the states, as Mr. Gewirtz suggests Amazon do. At best, you can "meet expectations" at a considerable cost to you. At worst, you could get it wrong, and get hit with enough fines and penalties to put you under, all while trying to "help the states."

I do agree that the states and other jurisdictions need help, and that there is a revenue stream they could tap into. But it's not by applying 20th century rules and methods to a 21st century problem. I have my own ideas on what can help, but this post is long enough as it is. Thanks for taking the time to read it.

29 Jun 2011

Wait! I thought the Feds only cared about smartcards . . .

Monday, I posted about how Quest's tokens allow users to program their own seeds, which then prompted questions (internally) of, "Why do you even care? I thought you focused on the Federal space, and the Feds only cared about CAC or PIV cards?"

Well, yes. For the most part, the US Federal government does focus on CAC (Common Access Card, used by the DoD) and PIV (Personal Identity Verification, mostly used by the civilian agencies) cards. And you'll also hear about PIV-I (PIV Interoperable) being involved. In the case of CAC and PIV, a user has to file an application, and some Federal agency needs to sponsor the individual to obtain such a card. This usually includes a background check, or some sort of formal review, before the card is issued. PIV-I tries to lower this barrier by allowing non-Federal organizations (think government contractors, state governments, first responders, etc.) to issue interoperable smartcards that are trusted by Federal systems.

However, PIV-I has a lower "assurance level," and often involves the same (or a similar) sort of background check, just by a different organization. Assurance levels are set by NIST and are detailed in their Special Publication (SP) 800-63. You'll see there are 4 assurance levels, and PIV-I only gets to level 2 (really, it gets to level 3, but with a less stringent background check, it can only be considered level 2.5 at best). CAC and PIV strive for level 4, or at least level 3.

OK. So we've established that smartcards are the main vehicle for 2-factor authentication in the Federal government, but I still haven't explained why tokens (RSA and others) crop up. And this is because a token is also an acceptable form of 2-factor authentication (read SP 800-63 and you'll see them mentioned as 'one-time passwords'). The "identity proofing" is still required, but tokens are actually a lot more flexible, for several reasons.
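For those unfamiliar with how these tokens work under the hood, here is a minimal sketch of HOTP, the event-based one-time password algorithm from RFC 4226: the token and the server share a secret seed and a counter, and each independently derives the same short code. This is a generic illustration, not any particular vendor's implementation.

```python
import hmac
import hashlib
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared seed and counter (RFC 4226)."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"12345678901234567890"   # the shared secret: the "seed record"
print(hotp(seed, 0))             # RFC 4226 test vector: 755224
```

The token displays the code, the server computes the same code, and a match proves possession of the seed. This is also why seed records are so sensitive: anyone who holds the seed can compute the codes.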

1. Tokens can be assigned to any user (part 1). Smartcards are usually assigned to individual people, while tokens can be shared. In fact, with our tokens, users can share a token but continue to use their own distinct credentials (username and password) with it. This means multiple admins can share a token to access a common system, and you can still determine which admin logged in to do something with that particular token. You continue to have 2-factor authentication, but also get a "check-in/check-out" system for the token, allowing more control over the physical token (which can be locked away in a safe or vault).

1a. Tokens can be assigned to any user (part 2). This is really a corollary to 1: the token can be assigned to service or privileged accounts. Put the same sharing (check-in/check-out) model in place for a token, tie the password to a password vault product, and you have some pretty solid security around privileged accounts such as oracle, root, Administrator, and other 'non-named' accounts.

2. Tokens are independent of environment (part 1). With smartcards, you have to have a reader attached to the user's console. No reader, or a malfunctioning reader, makes authentication a bit more difficult (read: not possible). There are also situations where PKI simply isn't used; older applications or platforms that do not make use of certificates cannot be changed quickly or easily. With a token, along with a username and password/PIN, you can continue to get 2FA, even in a scenario where a reader isn't available or practical, or the app expects only a username and password. There is still some amount of work to be done, but it's often easier than tying an app into an entire PKI infrastructure.

2a. Tokens are independent of environment (part 2). There are some cases where the app or platform is capable of using PKI, but it sits in a location/area/network that simply cannot reach the PKI infrastructure to which the certificate on the smartcard is tied. Not every system is actually on the internet (unbelievable, I know), which means tokens can provide access without requiring a centrally managed CA (certificate authority) to be present.

3. Tokens can be assigned much more quickly and easily. And this is really the crux of why tokens come into play in the Federal space. Smartcards require a centrally issued certificate to be put onto the card. In some cases, there is no time for the requests to make their way through the system to get a certificate, publish it to a CA (certificate authority) and the card, and get the card to the user. In other cases, the user simply will not get through a background check (or is unwilling to undergo one), but has to be given access to certain systems. Yes, there are times when the Federal government has to deal with "questionable people," but they might as well make sure it's the right person, so 2-factor is still needed.

3a. Tokens can be revoked much more quickly and easily. Because the token is usually some bit of plastic, it's easier to revoke and know that it won't be used in other ways. Most smartcards take a while to issue because they are also printed as a security badge, meaning that even if the certificate on the card is revoked, the card may still be usable to get physical access to a building or location. However, it's unlikely that an agency will let you in because you have a black piece of plastic on your keychain.

So, for all those reasons, tokens are not going away. Smartcards will continue to dominate, but there will continue to be a need for 2-factor authentication (2FA) using one-time passwords (OTP) in the Federal space.

27 Jun 2011

The RSA Breach saga continues . . .

I've discussed the RSA breach before, but I had a very interesting conversation last week with a customer that was in a dilemma as to what to do. RSA has said it would replace some of the tokens, depending on the situation (protecting intellectual property, consumer-oriented products, etc.), and this customer was pretty certain they would get new tokens from RSA, but didn't know if they wanted them. Nor did they want any other "standard" token that another vendor could provide, because that new vendor could be breached as well.

What this client really wanted was a token he could program his own seed into. I mentioned that our software tokens actually allow for this out of the box: when you "program" a new soft token, the seed is automatically generated. This means that Quest never knows the seed record for any software token that we provide. For an example of this, you can actually see a recording of it here, where the seed is generated, and then here, where the seed is put into our Desktop token (there is no audio on either recording).
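Conceptually, self-generated seeds look something like the sketch below: the randomness comes from the user's own machine, so no vendor ever holds a copy. (This is just an illustration of the idea, not Quest's actual provisioning code.)

```python
import secrets

# Generate a 160-bit token seed locally, on the user's own machine.
seed = secrets.token_bytes(20)
print(seed.hex())   # enrolled only with your own authentication server

# Because the seed is generated (and stays) under your control, a breach
# at the token vendor cannot expose it -- unlike a vendor-held database
# of seed records.
```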

However, this wasn't as interesting to him as programmable hardware tokens. He had concerns about keys getting compromised while being transferred to the end user, and wanted to send a pre-programmed hardware device. At first, I didn't think we had anything like that in our arsenal; however, it turns out that one of our latest token models (the YubiKey token), as well as some of the other ones, already allows for user programmability! In fact, if you look at the link for the YubiKey token and then scroll down, you'll see that some are programmable. It took our Defender Product Manager (Stu Harrison, who blogs on and off here) to point this out to me, even though I should have remembered it from earlier on.

Obviously, each token does come with a default seed that Quest will know about, but if there is concern about a vendor having the "keys to the kingdom," a programmable token puts the onus back on the organization, and keeps Quest out of the limelight. I don't want to discount the fact that this takes more manpower, but if you don't want your vendors to have your seed records, reprogramming the tokens is the only secure way of doing it. It only makes sense: as a security professional, you should never rely on RSA, Quest or any other organization when you can minimize the number of people that have access to the token seed records.

8 Jun 2011

Best Buy Security Breach waiting to happen

Seems like there's one breach after another these days, and organizations are leaking data. You would think that a retailer would want to minimize this sort of thing to maintain customer relations. Apparently, not Best Buy.

I bought a $35 game on BestBuy.com, only to find out I had already purchased it earlier. So I took the online purchase, which had arrived at my door only 2-3 days earlier, back to the local Best Buy (the Mall of Georgia one). I handed over the game (unpackaged) and the packing slip, when the gal behind the return counter asked for my driver's license. I showed it to her, thinking she just wanted to verify me, but she started typing in the license number.

"Wait, what are you doing? You don't need my driver's license number," I say as I hide my license before she finished. "Sorry, but this is required. Would you like to speak to a manager?" is her reply. "Yes, please," I say.

Christie Bee, the manager, waddled over and just said, "I cannot process a return without a license." And there was no way to return something without some ID. I offered up the American Express it was bought on, but that was not good enough. I had other things to do, and didn't have time to argue with Christie about the silliness of all this, and how it violates their agreement with American Express, so I left in a huff.

About 10 minutes ago, I came home, filed a dispute with AmEx, and told them the store is unwilling to take the return. It looks like American Express is doing the right thing, and I have no doubt they'll resolve this for me.

But the bigger question is . . . why would Best Buy take on this risk? I understand the whole fraud issue, which is why they're doing this. But the minute they get hacked, every person's driver's license number, along with street address and other info, is going to get taken. They already have my Rewards Zone account, as well as the credit card the game was purchased on. And this is a $35 item from a long-time customer.

So, Best Buy, I pose these questions to you: Why on earth would you ask me to give up my identity to you like this? Do you really expect me to trust you with this information? And why would you want to store this information in the first place? Surely you can think of a better way to handle fraud without taking something so sensitive.

Until this changes, Best Buy won't be getting any more of my purchases.

23 May 2011

Cloud & Virtualization Survey

Quest Software Public Sector has released a survey performed by Norwich University about the state of Cloud & Virtualization within the public sector. Some of the results definitely surprised me, and you should check it out. We had a fantastic response rate, with close to 650 individuals, and a margin of error of less than 4%. I'll let the results speak for themselves.

You can find it here: http://www.quest.com/documents/landing.aspx?id=14279 .
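As a quick sanity check on that margin-of-error figure, the worst-case 95% margin of error for a sample of about 650 works out to just under 4% (a back-of-the-envelope calculation, assuming a simple random sample):

```python
import math

# Worst-case (p = 0.5) margin of error at 95% confidence for n ~= 650,
# assuming a simple random sample.
n, z, p = 650, 1.96, 0.5
moe = z * math.sqrt(p * (1 - p) / n)
print(f"{moe:.1%}")   # ~3.8%, consistent with the "less than 4%" figure
```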

6 May 2011

Technology on the battlefield and “The Golden Hour”

Government Computer News has just published an article on smartphones on the battlefield, and within the Defense sector specifically. That article can be read here: http://gcn.com/Articles/2011/05/03/Cyber-defense-handheld-encryption.aspx . It's a very good article, and one that highlights the challenges of using commercial products in a military or intelligence setting, where the stakes are much higher in some cases.

But it reminded me of a presentation I saw late last year about "The Golden Hour," from Air Commodore Tony Boyle (UK Defence Operations Board), on the use of technology to help get medical attention to wounded combatants much faster. The Golden Hour is a medical term originally coined by the military to describe the window of opportunity (usually 60 minutes) to save a life after severe trauma occurs. And while there is controversy about its validity (see the Wikipedia article on this here), there is absolutely no doubt that getting a wounded soldier proper medical care faster raises the survival odds.

The overall presentation was titled "Future Networks - Enabling Defence Interoperability and Interconnectivity," and he also got into the military doing more with COTS (Commercial Off The Shelf) systems, as well as looking at private industry for 'best practices' and practical savings. What follows is my Quest colleague Ian Davidson's write-up of the presentation.

He gave some insights into how technology is actually used in theatre, some of which may seem obvious but were nonetheless very interesting.

One "vignette" was regarding an incident with a Viking patrol in Lashkar Gah:

When the IED went off, a 9-line text message was automatically sent to multiple places, with differing results:

  • One text message went to a Hermes ISTAR platform, which automatically deployed to the location of the explosion to monitor the area immediately.
  • Another text message went to 904 squadron to deploy a Harrier, which arrived in the area within 6 minutes.
  • Another went to Casualty Ops, with the result that 9 doctors/surgeons were lined up waiting when the casualties arrived, having been automatically notified of blood type/full history/allergies, etc.

This combination of events led to the casualties being dealt with by medics well within "The Golden Hour," the period of 60 minutes after severe trauma in which casualties are most likely to survive. In fact, they were on the operating tables within 15 minutes of the device going off.

The whole point of this story was to illustrate how the military on joint operations depend on sharing information collaboratively in order to ensure the success of any given military operation (no pun intended).

"No secrets" between different military "departments" and deployments means that soldiers survive, based on a post-and-subscribe method.

Each operating division posts information that is likely to be identified as a serious threat or something requiring immediate attention, and others can subscribe to that information.

He then went on to discuss the different business models that have come about over the last 10 years, and how increasingly open systems on the internet allow people to share information sensibly, using an analogy about Amazon regarding systems training: "No one goes to school to learn how to order books from Amazon."

He also discussed ideas around redefining what needs to be secret and what doesn't, stating that (as an example) in the case of military personnel ordering clothing provisions, there is no real need for it to be "so" secure and locked down internally.

    Why not use M & S for shirts for example – the worst thing that can happen is that M & S get hacked by another foreign country and he potentially gets the wrong shirt size delivered !

(For the non-UK based folks, M & S is Marks & Spencer, a UK department store similar to Nordstrom.)

Overall, it was a great presentation, making the case for using technology to its fullest extent, and making sure people are comfortable with whatever model you set up.
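As an aside, the "post and subscribe" model Ian describes is essentially the classic publish/subscribe pattern. Here is a toy sketch of the idea (the topic names and handlers are purely illustrative):

```python
from collections import defaultdict

# Toy publish/subscribe: units register interest in a topic, and every
# post to that topic fans out to all subscribers at once.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def post(topic, event):
    for handler in subscribers[topic]:
        handler(event)

# Each unit subscribes to the incidents it cares about...
subscribe("ied_incident", lambda e: print(f"ISTAR: surveillance tasked to {e['location']}"))
subscribe("ied_incident", lambda e: print(f"Air squadron: fast jet scrambled to {e['location']}"))
subscribe("ied_incident", lambda e: print(f"Casualty Ops: surgical team alerted for {e['location']}"))

# ...and a single 9-line report fans out to all of them simultaneously.
post("ied_incident", {"location": "Lashkar Gah"})
```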

29 Apr 2011

Amazon in the Federal space? Absolutely!

I work for Quest Software, and focus on making sure that we are helping our Federal customers. So when one of our account managers asked me to meet with Amazon, I was a little skeptical. Amazon? Really? I mean, I know the Feds have the "Cloud First" initiative, and there are some things here and there, but can we really work with Amazon? What can we do with an organization that doesn't do anything with most of the platforms we support?

There's no Oracle, no Microsoft, PeopleSoft, SAP and so on. They don't even offer email that we can migrate to or from.

Well, it was an hour and a half well spent, and I am absolutely jazzed about some of the things we discussed. It turns out they are very, very serious about Security (with a capital S). In the Fed space, that is the number one objection to anything Cloud related. But they not only went through some of the certs and reviews they'd had (even a SAS 70 audit, which I think pretty highly of, having been involved in one years back), but also their overall architecture and philosophy. They are keenly aware of the federal requirements that are out there, and have made a substantial commitment to making sure that their federal customers are able to use their solutions.

Yes, they had an outage last week, but if you have concerns about these things, you should talk to Amazon about what can be done to prevent them. The outage was Amazon's fault, but there are also options (that obviously cost money) which could be implemented to avoid such a thing happening. You can find a lot more info on the outage here.

In any case, it turns out they have a lot of options, including options all the way down to a VPN-secured environment, if that is your requirement. And because everything is web-based, and often accessible via web services, I think there are a few solutions that Quest has that could be used with Amazon's platforms. The conversation ranged from monitoring, to management and provisioning of resources, to integration with internal systems, and potentially even things like SSO and access controls. Lou J (the Quest Account Manager) even learned about cloudbursting.

I hope to explore those options in the coming months, especially with some specific Federal customers in mind.

20 Apr 2011

Yet another ‘what is the cloud’ definition

Last week, I was having lunch with a business associate. He does a lot of work with many IT vendors and the Federal government, and hears "we want help with the cloud" on a regular basis. However, he was still trying to wrap his head around what "the cloud" was. The problem is that everyone has their own definition, and (of course) it's tailored to whatever it is they do, even if it means jamming a square peg into a round hole. Everyone is claiming they have a cloud solution (even if it's a "wolf in sheep's clothing" scenario), and if you go to someone like NIST, their definition can make your head spin.

So here is what I told him. And I did warn him that this was very, very simplified, but I think it was enough to get him to start understanding why "Cloud" is different from what has been done before.

In my mind, cloud computing is the old outsourced hosting, or ASP (Application Service Provider), model, but with a twist. The twist is the self-service piece. The idea is that the user, who is somewhat tech savvy and knows what end result they want, is able to get what they need without ever picking up a phone, sending an email, or opening a service desk ticket. The user requests a service, app, or virtual machine; one is provided rather quickly; and the user is billed (or their department charged). When the user is done with the resource, it is either taken away (on a schedule) or the user initiates the release, again with no other human involvement.
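In practice, that self-service loop is just an API call. Here is a hedged sketch of what a request might look like against a hypothetical provisioning endpoint (the URL and fields are made up for illustration; every real cloud has its own API):

```python
import json
import urllib.request

# A user's self-service request: no phone call, no ticket, no human.
request_body = {
    "type": "virtual_machine",
    "cpus": 2,
    "memory_gb": 4,
    "billing_code": "DEPT-1234",   # the department that gets charged
}

req = urllib.request.Request(
    "https://cloud.example.gov/api/v1/provision",   # hypothetical endpoint
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# resource = json.load(urllib.request.urlopen(req))  # VM details, minutes later
# A matching DELETE call (or a scheduled reaper) releases the resource when
# the user is done -- again, with no human in the loop.
```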

That's it. Yes, I'm sure the "cloud faithful" will howl that there is much more to it than that. And they would be right, but the folks my business associate and I work with are not comfortable with the idea of the public cloud, nor are they at the point of discussing things like 'cloud-bursting' and 'micro-clouds,' but they do know managed services. And this allows them to make the connection between something that is known and comfortable and something very convoluted and over-hyped.

4 Apr 2011

The Federal CIO’s guide to partnering with Quest Software for Data Center Consolidation, Part III

Note 1: the bulk of this blog post was written on an Apple iPad. I point this out not because of a fascination with the iPad, but because such long documents were not readily possible from a mobile platform only a few years ago. That still amazes me.

Note 2: this is a very rough, stream-of-consciousness blog entry. Grammar, spelling and other writing errors should be ignored. If you want a nice, clean "white paper" type of document, please contact me offline, and I'll get you something cold and clinical.

------------------
Preparation and migration
--
Preparing for the move, beyond the simple assessment, seems a no-brainer. But a migration? Really? Right before a move? I say "yes." First, let's be clear and qualify that "migration" (to me) means a cutover to another platform. This could be email, user directory, database, etc., and it may be to a new version of your current software, but it's definitely getting off the current version. The idea is to make the software, and the version of that software, as current as possible before you actually move to a new location/environment. The only thing this excludes is the move from physical to virtual. That comes later, and I'll explain why then.

There are several reasons for you to consider doing a migration before consolidating environments. First and foremost, the migration is going to happen sooner or later, and aren't you better off doing it in a comfortable and stable environment instead of your new one? Plus, the migration may actually shake some things out and make the consolidation easier. For example, if you use QMM to migrate mailboxes from an old version of Exchange to 2010, or use QMM for AD to restructure your Active Directory environments, you may actually find that there are many users, mailboxes, groups and other objects that can be deleted or abandoned.

The same goes for databases. If you're running an old version of Oracle for your databases, it's time to cut over and see what features and benefits you get. And we even have a tool that lets you replicate data across mixed versions while you make this sort of move, so it's not a jarring, "all at once" process. That tool, BTW, is SharePlex, and replicating across mixed database versions like that is pretty cool.

But why else should you do the migration before starting the actual move? Well, frankly, because support is easier. Sure, you can migrate a Windows 2003 server or an Oracle 9i database into your new environment, but if there's a problem, what will the vendor tell your team? Most likely, they'll tell you to upgrade to the latest version.

It's not widely discussed, but the reality is that most software companies want you on the latest and greatest version of their software when you call support. It's usually the version their developers are currently using as the basis for the next release. It's the version that support is working with most, and the one that they have set up locally to recreate your problems. Being one or two versions behind is often OK, but if you're running something more than 2-3 years old, I think you're asking for trouble. Get to the latest versions however you can, because you don't want to consolidate and move old software around.

Another reason is your staff's personal development. I've run IT groups in the past, and the most common question was always, "When are we going to get to the latest version of X?" where X was some database, operating system or programming language. If you are the CIO at a federal agency, your staff knows that data center consolidation is coming, in some form or fashion. You want them ready and energized for the task. Letting them get to the latest version of whatever software they work with will excite them, and you'll have a happy crew moving into the consolidation.

Now, I did mention that cutting over to a virtual environment should come last. The reason is that this is really the same as a hardware move. No matter what hardware your environment is using, something is sure to change, and your software may react adversely. Plus, if you couple that with a version upgrade, you are changing a lot out from under your user base as well as your IT staff. The idea is to minimize risk, and that just doesn't cut it. So do the "migration" first, get it settled, and then cut over the hardware (which can be P2P or P2V).

And if you're contemplating a P2V move, you should definitely check out vConverter. Not only does it work for P2V, but also V2V (you may want to switch hypervisors, or try out multiple hypervisors with the same VM), and even V2P, in case you absolutely have to back out. Or you may even want to switch hardware, using the move to virtual as a stepping stone.

Finally, if you upgrade, migrate and virtualize in a single move, how do you know where you got performance gains or losses? If you read my last post on this topic, you'll know I propose baselining before starting. The only way to do that is to start with a known point, and then make incremental moves, so you can collect more information on what impact each part of the upgrade, migration and consolidation has on your environment.
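A minimal sketch of that "baseline, then change one thing at a time" approach might look like this (the workload script is a placeholder; you would substitute a benchmark representative of your own environment):

```python
import subprocess
import time

def benchmark(label: str, cmd: list) -> None:
    """Run the same representative workload and record how long it takes."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    print(f"{label}: {time.perf_counter() - start:.1f}s")

# One measurement per step, so every gain or loss can be attributed:
benchmark("baseline (old version, physical)", ["./run_workload.sh"])
# ... upgrade/migrate the software, then:
# benchmark("after migration (new version, physical)", ["./run_workload.sh"])
# ... then P2V, then:
# benchmark("after P2V (new version, virtual)", ["./run_workload.sh"])
```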

Copyright (C) 2010-2011 Dmitry Kagansky – All opinions expressed are those of the respective author and do not reflect the views of any affiliate, partner, employer or associate.