Sunday, June 6, 2010

Cloud Computing: an inside view


Nowadays the word "Cloud" is very well known and one of the most hyped & overloaded terms in the recent history of IT: just enter "Cloud Computing" in your search engine of choice and be prepared to navigate a huge result set.

Cloud Computing is mainly a new deployment model.

Let's say you are the solution architect of an enterprise, and you are in the process of setting up a new capability for your company. As usual, the big alternatives are building the solution yourself, buying it as a service if available, or any of the intermediate approaches which combine the two. If you decide to build even just a little piece of the solution, you are implicitly stepping up for running it too: making sure your datacenter is up to the task (and beefing it up if it's not), installing, updating, handling downtimes and security patches, walking the tightrope between reacting to spikes in workload and keeping costs at a reasonable level, keeping an eye on health indicators, making sure that integration with other solutions runs smoothly... business as usual. There are many cases in which the above is just a symptom of the tight grip you want to keep on your system: the more aspects you want to control, the higher the overhead associated with it, and there are often good reasons for having full control. OTOH there are a great many cases in which the above is *really* an artifact of how IT works today (habitual readers, I II & III: think of the attributes store that appeared necessary in the pre-token era but was just an artifact of using pure credentials and not identities), and you'd gladly forsake control over some details if it meant easing the administrative burden. If you belong to the second category, cloud computing is for you.

Imagine that a vendor comes to you and offers to host components of your solution on its datacenter. With "host" I don't mean just giving you a slot in their racks, nor just a virtual directory on their web servers. I mean hosting your tables in their store and performing queries for you, hosting service endpoints with ESB-like capabilities, running workflows & long-running processes... all things you'd do on your own application servers in your datacenter, with some important differences:

  • all the basic IT chores (patching, air conditioning...) are being taken care of
  • workload handling is not an issue: the vendor accommodates many customers, hence it operates a datacenter much bigger than you'd ever dream of having in-house, and its architecture will necessarily be designed for scale. With some luck, your vendor may even offer dynamic workload handling (spawning more instances automatically as demand increases)
  • costs are likely to be proportional to actual utilization, which finally frees you (somewhat) from the nightmare of capacity planning
  • advanced capabilities are inherited "for free" by the sheer fact of being hosted in the vendor's infrastructure. If you want name resolution, advanced messaging capabilities like pub/sub, discoverability and the like, chances are you just need to check an option in the contract as opposed to setting up an entire ESB yourself
  • ...and many others

This is huge. Think of a startup with a great idea but not the financial prowess to afford its own datacenter: the pay-as-you-use model is ideal. And even for bigger players the advantages are obvious; just imagine how easy it is to set up a test environment and dismantle it after your proof of concept is done.

You'll find no shortage of hype about this (the idea IS exciting and important, after all), but I'd urge you to resist the temptation of being carried away and burning all the old toys. Even Mr. "IT doesn't matter" Nicholas Carr envisions a future where traditional and cloud approaches are used together:

“…larger companies…can be expected to pursue a hybrid approach for many years, supplying some hardware and software requirements themselves and purchasing others over the grid. One of the key challenges for corporate IT departments, in fact, lies in making the right decisions about what to hold on to and what to let go.” from "The Big Switch", Nicholas Carr.

Let me reiterate my initial point: Cloud Computing is mainly a new deployment model. Another arrow in the quiver of companies and solution architects, one that will work well for slaying certain kinds of monsters.

What about the relationship between Cloud Computing and S+S & SaaS? The answer depends on which team you play for. If you are an ISV, the Cloud is a great way of hosting & offering your services. It saves you many of the headaches you'd otherwise have to deal with yourself, the dynamic workload handling is great, and so on. I am sure you'll hear a lot about its deep implications on architecture & business models.

If you are an enterprise, for you it is probably a bigger shift. You can think of it as enabling an S+S model where you are both the client and the service provider: you reap the benefits of the S+S approach (IT savings etc.) and you keep control over the services themselves. Control, as we know, cuts both ways: hence you get back some of the responsibilities (especially at the design level) that you deferred to others. If the service being built is your core business, where you bring IP and expertise, that is probably a good thing; if it is a general "utility" service, already covered by a specialized ISV you could use right away, probably not so much.

"But Vittorio, isn't this blog supposed to be about identity?". Right, I forgot: read on :-)

Enterprise Identities and Cloud

Let's say the vendor convinces you to move some of your services "in the cloud"; you pick one of the services with the most erratic CPU utilization pattern and you deploy it in the cloud. Great! In the figure below you can see your new situation, with a "hole" where the service now in the cloud used to be.

[Figure: the on-premises deployment, with a "hole" where the service now running in the cloud used to be]

The service now in the cloud is part of a LoB application, which features a sophisticated authorization policy. This is one of the many benefits you reap from running a great directory (the red pyramid). But hey, wait a second! What happens when one of your employees calls the service in the cloud? The service is no longer under the jurisdiction of your directory, hence the Kerberos token that states your employee's affiliation with the Managers group is gibberish: how is the cloud infrastructure supposed to handle authorization the way you originally envisioned?
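To make the dependency concrete, here is a minimal sketch of the kind of check the on-premises service might be doing; every name in it (the groups, the function) is hypothetical and purely illustrative. It assumes the caller's group list is derived from the Kerberos token issued by the corporate directory, which is exactly what stops being available once the service moves outside the directory's reach.

```python
# Hypothetical on-premises authorization check. All names are invented for
# illustration; the group list is assumed to come from the Kerberos token
# issued by the corporate directory.

APPROVAL_GROUPS = {"Managers", "Finance-Approvers"}

def can_approve_purchase(caller_groups: set[str]) -> bool:
    """True only if the caller belongs to one of the approval groups.

    On-premises this works because the directory vouches for the group list.
    Once the service runs in the cloud, no Kerberos ticket from the corporate
    domain reaches it, so there is nothing meaningful to evaluate here.
    """
    return bool(APPROVAL_GROUPS & caller_groups)

# On-premises: groups extracted from the employee's Kerberos token.
print(can_approve_purchase({"Employees", "Managers"}))  # True
# In the cloud: the directory's tokens don't travel, so the check collapses.
print(can_approve_purchase(set()))                      # False, for the wrong reason
```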

Now that I think of it, we have issues also in the opposite direction. As part of the LoB application, the service in the cloud will likely invoke other services:

[Figure: the service now in the cloud calling back to other services of the LoB application still deployed on-premises]

Unfortunately it cannot do so with the blessing of the directory of your company, since it is hosted elsewhere: as a result, its siblings still deployed within your boundaries won't be able to authenticate the call.

After the initial shock, we soon realize there's no reason to panic. We know how to handle the situation: this is exactly like talking with a partner, so we can set up a federation with the cloud provider. It is a bit unusual, after all it is our very own service code we are dealing with, but it can be done. In fact, this federation has another unusual aspect: while a classic partnership translates and mediates between two organizations, here one of the parties is pretty much an empty shell. The cloud provider has no users, hierarchies & roles of its own; it is simply an environment designed for running others' code. It has a "corporate" identity, sure, but it has no claims of its own that need to be translated into ours (and vice versa). Maybe we can solve the situation with something simpler than a full-fledged federation in the classic sense of the word: in fact, I believe that a simple R-STS can save the day. Consider the picture below:

[Figure: the directory STS added on top of the on-premises pyramid, and the R-STS on the cloud provider side, with the trust relationship between the two]

On top of the directory pyramid I added an STS, which can give employees portable identities in the form of interoperable tokens. On the cloud side I added an R-STS, which fronts the multitenant application that takes care of authenticating the calls of the companies that subscribed to the cloud hosting offering. The scroll in the top right corner represents the configuration for our enterprise: you may notice that it contains a small copy of our directory STS, which symbolizes the fact that our cloud infrastructure (read: our R-STS) will trust tokens from our directory (read: it will accept those tokens and will issue transformed tokens in return).

How does that help? Simple. It is reasonable to assume that a service hosted in the cloud infrastructure will be configured to accept tokens issued by the cloud R-STS; and with the trust relationship just described, we just ensured that our employees can obtain such a token.
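If it helps to see the flow end to end, here is a minimal sketch of that trust chain. It is wildly simplified (real STSes exchange signed SAML/WS-Trust tokens, not Python dictionaries), and every class, issuer name and claim in it is hypothetical.

```python
# A toy model of: employee -> directory STS -> cloud R-STS -> hosted service.
from dataclasses import dataclass, field

@dataclass
class Token:
    issuer: str
    subject: str
    claims: dict = field(default_factory=dict)

class DirectorySTS:
    """The STS sitting on top of the corporate directory."""
    name = "corp-directory-sts"

    def issue(self, employee: str, groups: list[str]) -> Token:
        return Token(issuer=self.name, subject=employee, claims={"groups": groups})

class CloudRSTS:
    """The cloud provider's R-STS: it trusts the tenant's directory STS and,
    in exchange for its tokens, issues tokens the hosted services accept."""
    name = "cloud-r-sts"
    trusted_issuers = {"corp-directory-sts"}

    def exchange(self, incoming: Token) -> Token:
        if incoming.issuer not in self.trusted_issuers:
            raise PermissionError("token issuer is not trusted by this tenant")
        # Repackage (or transform) the incoming claims into a token of its own.
        return Token(issuer=self.name, subject=incoming.subject,
                     claims=dict(incoming.claims))

class HostedService:
    """The service moved to the cloud: it only accepts tokens from the R-STS."""
    def handle_call(self, token: Token) -> str:
        assert token.issuer == CloudRSTS.name, "only R-STS tokens are accepted"
        return f"hello {token.subject}, claims: {token.claims}"

# The employee obtains a directory token, exchanges it at the R-STS,
# and calls the cloud-hosted service with the resulting token.
corp_token = DirectorySTS().issue("alice", ["Managers"])
cloud_token = CloudRSTS().exchange(corp_token)
print(HostedService().handle_call(cloud_token))
```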

The above covers beautifully the authentication part: we've yet to deal with authorization, though. If your services are already claim-aware, you are OK: the R-STS can simply repackage the claims it receives from your directory STS (the approach would not work if you needed tokens from third parties, since you'd want to verify their signature on the original token, but that's another story). If your service is not claim-aware, you can still use the solution above and handle authorization. If the R-STS allows you to execute your own claim mapping rules when transforming tokens coming from your directory STS, the R-STS will basically issue a token which contains your authorization directives: at this point it is enough that the service hosting infrastructure is intelligent enough to enforce such directives. This would be a great point to place a link to my post about attribute vs authorization claims, but unfortunately I haven't finished it yet.
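Just to make the idea of claim mapping rules tangible, here is a toy version of that transformation step. The rule table and claim names are made up, and in practice the rules would live in the cloud provider's policy engine rather than in code.

```python
# Hypothetical mapping rules: an incoming attribute claim from the directory
# STS maps to an outgoing authorization directive the hosting infrastructure
# can enforce. Claim types and values are purely illustrative.
MAPPING_RULES = {
    ("group", "Managers"):  ("action", "approve_purchase"),
    ("group", "Employees"): ("action", "read_catalog"),
}

def transform(incoming_claims: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Apply the tenant's mapping rules; claims with no rule are dropped."""
    return [MAPPING_RULES[c] for c in incoming_claims if c in MAPPING_RULES]

# The R-STS receives attribute claims from the directory STS...
print(transform([("group", "Managers"), ("group", "Chess-Club")]))
# ...and issues authorization directives: [('action', 'approve_purchase')]
```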

Let me summarize here: the cloud provider can handle authentication of incoming requests to the service it hosts with an R-STS. You can easily manage access control by having that R-STS trust you, and by having a say in the claim transformation rules applied by the R-STS. If you recognized in this what I described in this other post, congratulations: you are on the right track.

I'll leave the mirror case as an exercise for the reader: just apply the same logic. If you are in the mood for some out-of-the-box thinking, you may even try to imagine how easy it would be to handle the case of the service hosted in the cloud calling home IF the services still hosted at home had a network-addressable presence in the same cloud... but I told you the solution already ;-)

ISVs, Identities and Cloud

Fine, enterprises can launch their services in orbit and yet make them available to their employees still on the ground. What about ISVs? I guess that some would say they have an even better deal with the cloud when it comes to access control & management.

Earlier we saw how the enterprise twisted the behavior of the cloud R-STS to achieve this unusual eigenauthorization, gaining extra advantages (especially if in doing so it discovered the power of claims) but being substantially motivated by protecting its investment during the move to a hybrid model. For the ISV, instead, the R-STS is the ultimate decoupler & the perfect trust broker: it can take care of the onboarding, authentication and integration of the ISV's customers while standardizing the credentials that the ISV service itself needs to understand.

Let's consider an S+S ISV who offers a CRM solution (how original of me) from its own datacenter. Apart from the usual IT problems we already mentioned for the enterprise, the ISV needs to worry about authenticating "foreign" users. The easier it is to integrate the service with customers' IT environments, the lower the entry bar and the higher the probability of doing business: that basically means that the more customers you want to onboard, the more swivel-chair integration you should expect to do. You have your authentication criteria and application roles (or claims if you're advanced), and you have to map those to whatever your customer has; somebody will have directories and hierarchies that will map well, while smaller shops may simply collapse roles into explicit accounts (user A can do X, user B can do Y). Now consider for a moment the picture below: it represents the service we've seen before, this time from the perspective of external consumption.

[Figure: the cloud-hosted service, seen this time from the perspective of external consumption by the ISV's customers]

Also in this case, authenticating somebody is simply a matter of telling the R-STS that it's OK to issue a token for them; and for the mapping and authorization, again we can take advantage of the policy engine of the cloud provider. The ISV does not need to worry about handling credentials anymore; it becomes a matter of administering the authorization rules, which is more a business problem than a technical one (how does the ISV explain to a potential customer the meaning of the various claims/roles, so that the prospect can make informed decisions about what maps to what? Here we do have a need for claim mapping across orgs). In fact, one could even imagine that if a prospect of the ISV is already registered for other reasons with the cloud provider (for example because it is already a customer of another ISV that hosts its services on the same cloud provider), onboarding should be a breeze.
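To picture what "onboarding as administration" could look like from the ISV's side, here is a hypothetical sketch in which each customer entry pairs a trusted issuer with the claim mapping rules agreed with that customer. All names (contoso, fabrikam, the rules, the roles) are invented for illustration; nothing here is the cloud provider's actual policy format.

```python
# Per-customer configuration held by the cloud R-STS / policy engine:
# a trusted issuer plus the mapping rules agreed with that customer.
CUSTOMERS = {
    "contoso": {
        "trusted_issuer": "contoso-directory-sts",
        "rules": {("group", "Sales"): ("crm_role", "account_manager")},
    },
    # Onboarding Fabrikam is a configuration change, not a code change:
    "fabrikam": {
        "trusted_issuer": "fabrikam-sts",
        "rules": {("account", "user-a"): ("crm_role", "admin")},
    },
}

def authorize(customer: str, issuer: str, claims: list[tuple[str, str]]):
    """Check the issuer against the customer's config and map the claims
    into the roles the CRM service actually understands."""
    cfg = CUSTOMERS[customer]
    if issuer != cfg["trusted_issuer"]:
        raise PermissionError("unknown issuer for this customer")
    return [cfg["rules"][c] for c in claims if c in cfg["rules"]]

print(authorize("contoso", "contoso-directory-sts", [("group", "Sales")]))
# [('crm_role', 'account_manager')]
```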

In fact, the picture above suggests an interesting twist: if it is so easy to make a service available to others once it is hosted in the cloud, perhaps in some cases enterprises will end up recovering costs and fighting inefficiencies by acting as ISVs. Think of situations in which you have excess capacity, like a wonderfully automated warehouse that ends up being empty most of the time because you got the sizing wrong. You could "rent" warehouse services (receiving, stocking, retrieving, packaging, shipping) simply by taking the services that already front the warehouse function in the cloud and opening them to third parties with a few policy changes. I know I said that already, but it's *huge*.

