Sunday, June 6, 2010

Cloud Computing: an inside view


Nowadays the word "Cloud" is very well known and one of the most hyped & overloaded terms in the recent history of IT: just enter "Cloud Computing" in your search engine of choice and be prepared to navigate a huge result set.

Cloud Computing is mainly a new deployment model.

Let's say you are the solution architect of an enterprise, and you are in the process of setting up a new capability for your company. As usual, the big alternatives are building the solution yourself, buying it as a service if available, or any of the intermediate approaches which combine the two. If you decide to build even just a little piece of the solution, you are implicitly stepping up for running it too: making sure your datacenter is up to the task (and beefing it up if it's not), installing, updating, handling downtimes and security patches, walking the tightrope between reacting to spikes in workload and keeping costs at a reasonable level, keeping an eye on health indicators, making sure that integration with other solutions runs smoothly... business as usual.

There are many cases in which the above is just a symptom of the tight grip you want to keep on your system: the more aspects you want to control, the higher the overhead associated with it, and there are often good reasons for having full control. OTOH there are a great many cases in which the above is *really* an artifact of how IT works today (habitual readers, I II & III: think of the attribute store that appeared necessary in the pre-token era but was just an artifact of using pure credentials and not identities), and you'd gladly forsake control of some details if it meant easing the administrative burden. If you belong to the second category, cloud computing is for you.

Imagine that a vendor comes to you and offers to host components of your solution in its datacenter. With "host" I don't mean just giving you a slot in their racks, nor just a virtual directory in their web servers. I mean hosting your tables in their store and performing queries for you, hosting service endpoints with ESB-like capabilities, running workflows & long-running processes... all things you'd do on your own application servers in your datacenter, with some important differences:

  • all the basic IT chores (patching, air conditioning...) are being taken care of
  • workload handling is not an issue: the vendor accommodates many customers, hence it operates a datacenter much bigger than you'd ever dream of having in-house, and its architecture will necessarily be designed for scale. With some luck, your vendor may even offer dynamic workload handling (spawning more instances automatically as demand increases)
  • costs are likely to be proportional to actual utilization, which finally frees you (somewhat) from the nightmare of capacity planning
  • advanced capabilities are inherited "for free" by the sheer fact of being hosted in the vendor's infrastructure. If you want name resolution, advanced messaging capabilities like pub/sub, discoverability and similar, chances are you just need to check an option in the contract as opposed to setting up an entire ESB yourself
  • ...and many others
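The cost point in the list above is easy to see with a back-of-the-envelope calculation. Here is a quick sketch; every rate and instance count below is invented purely for illustration:

```python
# Back-of-the-envelope comparison of fixed capacity vs. pay-per-use.
# All rates and workload numbers are hypothetical, for illustration only.

HOURS_PER_MONTH = 720

def fixed_cost(peak_load_instances, cost_per_instance_hour=0.50):
    """On-premises: you must provision for the PEAK load, running 24/7."""
    return peak_load_instances * cost_per_instance_hour * HOURS_PER_MONTH

def cloud_cost(hourly_instance_counts, cost_per_instance_hour=0.50):
    """Pay-per-use: you pay only for the instances actually running each hour."""
    return sum(hourly_instance_counts) * cost_per_instance_hour

# A spiky workload: 2 instances most of the day, 10 during a 3-hour peak.
day = [2] * 21 + [10] * 3
month = day * 30

print(fixed_cost(peak_load_instances=10))  # provisioned for peak: 3600.0
print(cloud_cost(month))                   # pay for actual usage: 1080.0
```

The bigger the gap between average and peak load, the bigger the saving; and, crucially, getting the capacity planning wrong no longer means buying hardware you don't need.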

This is huge. Think of a startup with a great idea but not the financial prowess to afford its own datacenter: the pay-as-you-use model is ideal. And even for bigger players the advantages are obvious: just imagine how easy it is to set up a test environment and dismantle it after your proof of concept is done.

You'll find no shortage of hype about this, the idea IS exciting and important after all, but I'd urge you to resist the temptation of being carried away and burning all the old toys. Even Mr. "IT doesn't matter" Nicholas Carr envisions a future where traditional and cloud approaches are used together:

“…larger companies…can be expected to pursue a hybrid approach for many years, supplying some hardware and software requirements themselves and purchasing others over the grid. One of the key challenges for corporate IT departments, in fact, lies in making the right decisions about what to hold on to and what to let go.” from "The Big Switch", Nicholas Carr.

Let me reiterate my initial point: Cloud Computing is mainly a new deployment model. Another arrow in the quiver of companies and solution architects, one that will work well for slaying certain kinds of monsters.

What about the relationship between Cloud Computing and S+S & SaaS? The answer depends on which team you play for. If you are an ISV, the Cloud is a great way of hosting & offering your services. It saves you many of the headaches you'd otherwise have to deal with yourself, the dynamic workload handling is great, and so on. I am sure you'll hear a lot about its deep implications for architecture & business models.

If you are an enterprise, for you it is probably a bigger shift. You can think of it as enabling an S+S model where you are both the client and the service provider: you reap the benefits of the S+S approach (IT savings etc.) and you keep control of the services themselves. Control, as we know, cuts both ways: hence you get back some of the responsibilities (especially at the design level) that you deferred to others. If the service being built is your core business, where you bring IP and expertise, that is probably a good thing; if it is a generic "utility" service, already covered by a specialized ISV you could use right away, probably not so much.

"But Vittorio, isn't this blog supposed to be about identity?". Right, I forgot: read on :-)

Enterprise Identities and Cloud

Let's say the vendor convinces you to move some of your services "in the cloud"; you pick one of the services with the most erratic CPU utilization pattern and you deploy it in the cloud. Great! In the figure below you can see your new situation, with a "hole" where the service now in the cloud used to be.

[image: your datacenter, with a "hole" where the service now in the cloud used to be]

The service now in the cloud is part of a LoB application, which features a sophisticated authorization policy. This is one of the many benefits you reap from running a great directory (the red pyramid). But hey, wait a second! What happens when one of your employees calls the service in the cloud? The service is no longer under the jurisdiction of your directory, hence the Kerberos token that states your employee's affiliation with the Managers group is gibberish: how is the cloud infrastructure supposed to handle authorization the way you originally envisioned?

Now that I think of it, we have issues also in the opposite direction. As part of the LoB application, the service in the cloud will likely invoke other services:

[image: the service in the cloud invoking other services of the LoB application, still on-premises]

Unfortunately it cannot do so with the blessing of the directory of your company, since it is hosted elsewhere: as a result, its siblings still deployed within your boundaries won't be able to authenticate the call.

After the initial shock, we soon realize there's no reason to panic. We know how to handle the situation, this is exactly like talking with a partner: we can set up a federation with the cloud provider. It is a bit unusual, after all it is our very own service code we are dealing with, but it can be done. In fact, this federation has another unusual aspect: while a classic partnership translates and mediates between two organizations, here one of the parties is pretty much an empty shell. The cloud provider has no users, hierarchies & roles of its own, it is simply an environment designed for running others' code. It has a "corporate" identity, sure, but it has no claims of its own that need to be translated into ours (and vice versa). Maybe we can solve the situation with something simpler than a full-fledged federation in the classic sense of the word: in fact, I believe that a simple R-STS can save the day. Consider the picture below:

[image: an STS on top of the directory pyramid, an R-STS on the cloud side, and the trust configuration scroll]

On top of the directory pyramid I added an STS, which can give employees portable identities in the form of interoperable tokens. On the cloud side I added an R-STS, which sits in front of the multitenant application and takes care of authenticating the calls of the companies that subscribed to the cloud hosting offering. The scroll in the top right corner represents the configuration for our enterprise: you may notice that it contains a small copy of our directory STS, which symbolizes the fact that our cloud infrastructure (read: our R-STS) will trust tokens from our directory (read: it will accept those tokens and issue transformed tokens in return).

How does that help? Simple. It is reasonable to assume that a service hosted in the cloud infrastructure will be configured to accept tokens issued by the cloud R-STS; and with the trust relationship just described, we just ensured that our employees can obtain such a token.

The above covers the authentication part beautifully; we've yet to deal with authorization, though. If your services are already claims-aware, you are OK: the R-STS can simply repackage the claims it receives from your directory STS (the approach would not work if you needed tokens from third parties, since you'd want to verify their signature on the original token, but that's another story). If your service is not claims-aware, you can still use the solution above and handle authorization. If the R-STS allows you to execute your own claim mapping rules when transforming tokens coming from your directory STS, the R-STS will basically issue a token which contains your authorization directives: at that point it is enough that the service hosting infrastructure is intelligent enough to enforce such directives. This would be a great point to place a link to my post about attribute vs authorization claims, but unfortunately I haven't finished it yet.
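To make the claim-transformation step concrete, here is a toy sketch of the mapping an R-STS performs. Every issuer name and rule below is invented; a real R-STS would of course also validate signatures, lifetimes and audiences on the incoming token:

```python
# Toy sketch of the R-STS claim transformation step.
# Issuer URNs and mapping rules are hypothetical, for illustration only.

TRUSTED_ISSUERS = {"urn:contoso:directory-sts"}

# Rules configured by the enterprise: input (attribute) claims from the
# directory STS become authorization claims the hosted service understands.
MAPPING_RULES = {
    ("group", "Managers"): ("action", "approve-orders"),
    ("group", "Employees"): ("action", "read-orders"),
}

def transform(token):
    """Accept a token from a trusted issuer, return a transformed token."""
    if token["issuer"] not in TRUSTED_ISSUERS:
        raise PermissionError("untrusted issuer: " + token["issuer"])
    output_claims = [
        MAPPING_RULES[claim]
        for claim in token["claims"]
        if claim in MAPPING_RULES
    ]
    return {"issuer": "urn:cloud:r-sts", "claims": output_claims}

incoming = {
    "issuer": "urn:contoso:directory-sts",
    "claims": [("group", "Managers"), ("name", "alice")],
}
print(transform(incoming))
# {'issuer': 'urn:cloud:r-sts', 'claims': [('action', 'approve-orders')]}
```

Note how the output token carries authorization directives ("approve-orders") rather than raw directory attributes: the hosted service only ever needs to understand tokens from the one issuer it trusts, the R-STS.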

Let me summarize: the cloud provider can handle authentication of incoming requests to the services it hosts with an R-STS. You can easily manage access control by having that R-STS trust you, and by having a say in the claim transformation rules it applies. If you recognized in this what I described in this other post, congratulations: you are on the right track.

I'll leave the mirror case as an exercise for the reader: just apply the same logic. If you are in the mood for some out-of-the-box thinking, you may even try to imagine how easy it would be to handle the case of the service hosted in the cloud calling home IF the services still hosted at home had a network-addressable presence in the same cloud... but I told you the solution already ;-)

ISVs, Identities and Cloud

Fine, enterprises can launch their services in orbit and yet make them available to their employees still on the ground. What about ISVs? I guess that some would say they have an even better deal with the cloud when it comes to access control & management.

Earlier we saw how the enterprise twisted the behavior of the cloud R-STS to achieve this unusual eigenauthorization, gaining extra advantages (especially if, in doing so, it discovered the power of claims) but being substantially motivated by protecting its investment during the move to a hybrid model. For the ISV, instead, the R-STS is the ultimate decoupler & the perfect trust broker: it can take care of the onboarding, authentication and integration of the ISV's customers while standardizing the credentials that the ISV service itself needs to understand.

Let's consider an S+S ISV who offers a CRM solution (how original of me) from its own datacenter. Apart from the usual IT problems we already mentioned for the enterprise, the ISV needs to worry about authenticating "foreign" users. The easier it is to integrate the service with customers' IT environments, the lower the entry bar and the higher the probability of doing business: that basically means that the more customers you want to onboard, the more swivel-chair integration you should expect to do. You have your authentication criteria and application roles (or claims, if you're advanced), and you have to map those to whatever your customer has; somebody will have directories and hierarchies that map well, while smaller shops may simply collapse roles into explicit accounts (user A can do X, user B can do Y). Now consider for a moment the picture below: it represents the service we've seen before, this time from the perspective of external consumption.

[image: the cloud-hosted service, this time consumed by external customers through the R-STS]

Also in this case, authenticating somebody is simply a matter of telling the R-STS that it's OK to issue a token for them; and for the mapping and authorization, again we can take advantage of the policy engine of the cloud provider. The ISV does not need to worry about handling credentials anymore; it becomes a matter of administering the authorization rules, which is more a business problem than a technical one (how does the ISV explain to a potential customer the meaning of the various claims/roles, so that the prospect can make informed decisions about what maps where? Here we do have a need for claim mapping across orgs). In fact, one could even imagine that if a prospect of the ISV is already registered for other reasons with the cloud provider (for example because it is already a customer of another ISV that hosts its services on the same cloud provider), onboarding should be a breeze.
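One way to picture the onboarding step: each customer is just another entry in the R-STS policy store. The sketch below is entirely hypothetical (tenant names, issuer URNs and CRM roles are invented), but it shows why adding a customer is configuration, not code:

```python
# Hypothetical multitenant policy store for the cloud R-STS:
# onboarding a new ISV customer just adds its issuer and mapping rules.

policy_store = {}

def onboard(tenant, issuer, rules):
    """Register a customer: which STS to trust, and how its claims map."""
    policy_store[tenant] = {"issuer": issuer, "rules": rules}

def authorize(tenant, issuer, claim):
    """Map a customer's claim to a CRM role, if the issuer is trusted."""
    policy = policy_store.get(tenant)
    if policy is None or policy["issuer"] != issuer:
        return None  # unknown tenant, or token from an untrusted issuer
    return policy["rules"].get(claim)

# A big customer maps directory groups to CRM roles...
onboard("fabrikam", "urn:fabrikam:sts", {"Sales": "crm-editor"})
# ...while a small shop collapses roles into explicit accounts.
onboard("tailspin", "urn:tailspin:sts", {"user-a": "crm-admin"})

print(authorize("fabrikam", "urn:fabrikam:sts", "Sales"))  # crm-editor
print(authorize("fabrikam", "urn:evil:sts", "Sales"))      # None
```

The ISV's service never sees the customers' heterogeneous credentials; it only consumes the standardized roles coming out of the policy engine.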

In fact, the picture above suggests an interesting twist: if once a service is hosted in the cloud it is so easy to make it available to others, perhaps in some cases enterprises will end up recovering costs and fighting inefficiencies by acting as ISVs. Think of situations in which you have excess capacity, say a wonderfully automated warehouse that ends up being empty most of the time because you got the sizing wrong. You could "rent" warehouse services (receiving, stocking, retrieving, packaging, shipping) by taking the services that front the warehouse function, already in the cloud, and opening them to third parties just by changing some policies. I know I said that already, but it's *huge*.


Friday, April 16, 2010

Implementing MVC Design Pattern

Here you will learn about MVC pattern implementation in .NET.

in reference to: Implementing MVC Design Pattern in .NET

Thursday, November 15, 2007

Web Server



Good interviewers always have a hidden agenda behind their questions. Questions are more or less designed around the specific position you are interested in; however, you should expect some common questions irrespective of the job description.



What is a web server?



A web server is a software program which serves web pages to web users (browsers). A web server delivers requested web pages to users who enter a URL in a web browser. Every computer on the Internet that contains a web site must run a web server program. The computer on which a web server program runs is also usually called a "web server". So, the term "web server" is used for both the server program and the computer on which the server program runs.
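You can see the "program that serves pages" definition in action with a few lines of code. The sketch below uses Python's standard library (real production servers like IIS or Apache are far more capable, of course); it serves the same tiny page to every request:

```python
# A minimal web server: it listens on a port, accepts an HTTP request,
# and sends back an HTML page. Standard library only; for illustration.
from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import urllib.request

class HelloHandler(BaseHTTPRequestHandler):
    """Serves the same tiny HTML page for every GET request."""
    def do_GET(self):
        body = b"<html><body>Hello from a web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

# Port 0 asks the OS for any free port; the socket starts listening here.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
worker = threading.Thread(target=server.handle_request)  # serve ONE request
worker.start()

# Play the part of the browser: request the page and read the HTML back.
url = "http://127.0.0.1:%d/" % server.server_address[1]
page = urllib.request.urlopen(url).read()
worker.join()
server.server_close()
print(page.decode())
```

The "browser" half of the script is just `urllib.request`; point a real browser at the printed address and you would see the same page.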



Characteristics of web servers:



A web server computer is just like any other computer. The basic characteristics of web servers are:



- It is always connected to the internet so that clients can access the web pages it hosts.



- It always has an application called a 'web server' running.

In short, a 'web server' is a computer which is connected to the internet/intranet and runs a piece of software called a 'web server'. The web server program is always running on that computer. When a user tries to access a website hosted by the web server, it is actually the web server program which delivers the web page the client asks for. All web sites on the internet are hosted on web servers sitting in different parts of the world.



Is a web server hardware or software?



From the above definition, you may have ended up confused: is a web server hardware or software? Mostly, "web server" refers to the software program which serves client requests. But as we mentioned earlier in this chapter, the computer on which the web server program runs is also called a "web server".





Now that you are reading this page, have you ever wondered how the page was made available to your browser? Your answer would be: "I typed in the URL http://www.faqs-for-all.blogspot.com/, clicked on some link, and landed on this page." But what happened behind the scenes to bring you to this page and let you read this line of text? Let's see what actually happens behind the scenes. The first thing you did was type the URL http://www.faqs-for-all.blogspot.com/ in the address bar of your browser and press the return key.

We can break this URL into two parts:

1. The protocol we are going to use to connect to the server (http)
2. The server name (www.faqs-for-all.blogspot.com)

The browser breaks up the URL into these parts and then tries to communicate with the server by looking up the server name. Actually, a server is identified by an IP address, and the alias for the IP address is maintained in a DNS server (or naming server). The browser queries these naming servers, identifies the IP address of the requested server, contacts the site and fetches the HTML for the web page. Finally, it displays the HTML content in the browser.
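You can reproduce the first part of that breakdown directly; here is a small sketch using Python's standard library (the DNS lookup is left commented out since it needs network access):

```python
# The same URL breakdown the browser performs, via the standard library.
from urllib.parse import urlparse

url = "http://www.faqs-for-all.blogspot.com/"
parts = urlparse(url)

print(parts.scheme)    # the protocol: 'http'
print(parts.hostname)  # the server name: 'www.faqs-for-all.blogspot.com'

# The browser would then ask a DNS (naming) server for the IP address:
#   import socket
#   ip = socket.gethostbyname(parts.hostname)  # needs network access
```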


Where is my web server?

When you try to access a web site, you don't really need to know where the web server is located. The web server may be located in another city or country, but all you need to do is type the URL of the web site you want to access into a web browser. The browser will resolve that URL and locate the web server. Once the web server is located, the browser requests the specific web page from the web server program running on the server. The web server program processes your request and sends the resulting web page back to your browser. It is the responsibility of your browser to format and display the web page to you.


How many web servers are needed for a web site?

Typically, only one web server is required for a web site. But large web sites like Yahoo, Google, MSN etc. have millions of visitors every minute. One computer cannot process such a huge number of requests. So they have hundreds of servers deployed in different parts of the world so that they can provide a faster response.
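A toy way to picture spreading requests across many servers is simple round-robin rotation (real sites use DNS round-robin and dedicated load balancers; the addresses below are made up):

```python
# Toy round-robin distribution of requests across several servers.
# The server addresses are hypothetical, for illustration only.
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rotation = cycle(servers)

# Each incoming request goes to the next server in the rotation:
assigned = [next(rotation) for _ in range(6)]
print(assigned)
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2', '10.0.0.3']
```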


How many websites can be hosted in one server?

A web server can host hundreds of web sites. Most of the small web sites on the internet are hosted on shared web servers. There are several web hosting companies who offer shared web hosting. If you buy shared web hosting from such a company, they will host your web site on their web server along with several other web sites, for a fee.
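Shared hosting works because the server inspects the HTTP Host header of each request and picks the matching site, a scheme known as name-based virtual hosting. A toy sketch (site names and content are invented):

```python
# Toy sketch of name-based virtual hosting: many sites, one server.
# The server routes each request by its HTTP Host header.
# Site names and pages below are hypothetical.

sites = {
    "www.alice-shop.example": "<h1>Alice's shop</h1>",
    "www.bob-blog.example": "<h1>Bob's blog</h1>",
}

def handle_request(host_header):
    """Return the page for the requested site, or a 404 if unknown."""
    page = sites.get(host_header)
    if page is None:
        return "404 Not Found"
    return page

print(handle_request("www.bob-blog.example"))  # <h1>Bob's blog</h1>
print(handle_request("www.unknown.example"))   # 404 Not Found
```

Both IIS (host header bindings) and Apache (virtual hosts) implement this same idea in their configuration.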

Examples of web server applications

1. IIS

2. Apache


Acronyms in .NET


ADO - ActiveX Data Object - Microsoft ActiveX Data Objects (ADO) is a collection of Component Object Model objects for accessing different types of data sources.

AJAX - Asynchronous JavaScript and XML - Ajax is a web development technique used for creating interactive web pages with fast data rendering by enabling partial postbacks on a web page (meaning a section of the web page is rendered again, instead of the complete web page). This is achieved using JavaScript, XML, JSON (JavaScript Object Notation) and the XMLHttpRequest object in JavaScript.

ASP - Active Server Pages - Microsoft's server-side script engine for creating dynamic web pages.

C# - C Sharp - Microsoft Visual C# is an object oriented programming language based on the .NET Framework. It includes features of powerful languages like C++, Java, Delphi and Visual Basic.

CAO - Client Activated Object - Objects created on the server upon the client's request. This is used in Remoting.

CCW - COM Callable Wrapper - This component is used when a .NET component needs to be used in COM.

CIL - Common Intermediate Language - It is actually a low-level, human-readable language implementation of the CLI. All .NET-aware languages compile source code to an intermediate language called the Common Intermediate Language using a language-specific compiler.

CLI - Common Language Infrastructure - This is a subset of CLR and base class libraries that Microsoft has submitted to ECMA so that a third-party vendor can build a .NET runtime on another platform.

CLR - Common Language Runtime - It is the main runtime machine of the Microsoft .NET Framework. It includes the implementation of the CLI. The CLR runs code in the form of bytecode, termed MSIL in .NET.

CLS - Common Language Specification - A type that is CLS compliant may be used across any .NET language. The CLS is a set of rules that defines language standards for a .NET language and the types declared in it. While declaring a new type, if we apply the [CLSCompliant] attribute, the type is forced to conform to the rules of the CLS.

COFF - Common Object File Format - It is a specification format for executables.

COM - Component Object Model - reusable software components. The tribe of COM components includes COM+, Distributed COM (DCOM) and ActiveX® Controls.

CSC.exe - C Sharp Compiler utility

CTS - Common Type System - It is at the core of .NET Framework's cross-language integration, type safety, and high-performance code execution. It defines a common set of types that can be used with many different language syntaxes. Each language (C#, VB.NET, Managed C++, and so on) is free to define any syntax it wishes, but if that language is built on the CLR, it will use at least some of the types defined by the CTS.

DBMS - Database Management System - a software application used for management of databases.

DISCO - Discovery of Web Services - A web service has one or more DISCO files that contain information on how to access its WSDL.

DLL - Dynamic Link Library - a shared reusable library, that exposes an interface of usable methods within it.

DOM - Document Object Model - is a language-independent technology that permits scripts to dynamically update the contents of a document (a web page is also a document).

ECMA - European Computer Manufacturers Association - Is an international organisation for computer standards.

GC - Garbage Collector - an automatic memory management system through which objects that are not referenced are cleared up from the memory.

GDI - Graphical Device Interface - is a component in Windows based systems, that performs the activity of representing graphical objects and outputting them to output devices.

GAC - Global Assembly Cache - Is a central repository of reusable libraries in the .NET environment.

GUI - Graphical User Interface - a type of computer interface through which users may interact with the computer, using different types of input & output devices, via a graphical interface.

GUID - Globally Unique Identifier - is a unique reference number used in applications to refer an object.

HTTP - Hyper Text Transfer Protocol - is a communication protocol used to transfer information in the internet. HTTP is a request-response protocol between servers and clients.

IDE - Integrated Development Environment - a development environment with a source code editor, a compiler (or interpreter), debugging tools, a designer, a solution explorer, a property window, an object explorer etc.

IDL - Interface Definition Language - is a language for defining software components interface.

ILDASM - Intermediate Language Disassembler - The contents of an assembly may be viewed using the ILDASM utility, which comes with the .NET SDK or Visual Studio .NET. The ildasm.exe tool may also be used from the command line.

IIS - Internet Information Server - Is a server that provides services to websites and even hosts websites.

IL - Intermediate Language - is the compiled form of .NET language source code. When .NET source code is compiled by the language-specific compiler (say we compile C# code using csc.exe), it is compiled to a .NET binary, which is platform independent, and is called Intermediate Language code. The .NET binary also contains metadata.

JIT - Just In Time (Jitter) - a technology for boosting the runtime performance of a system. It converts code at runtime from one format into another, such as IL into native machine code. Note that JIT compilation is processor specific: if a processor is x86 based, the JIT compilation will target that type of processor.

MBR - MarshalByReference - The caller receives a proxy to the remote object.

MBV - MarshalByValue - The caller receives a copy of the object in its own application domain.

MDI - Multiple Document Interface - An interface where multiple child windows reside under a single parent window.

MSIL - Microsoft Intermediate Language - now called CIL.

Orcas - Codename for Visual Studio 2008

PE - Portable Executable - an exe format file that is portable.

RAD - Rapid Application Development

RCW - Runtime Callable Wrapper - This component is used when a .NET component needs to use a COM component.

SAX - Simple API for XML - It is a serial access parser API for XML. The parser is event driven and the event gets triggered when an XML feature is encountered.

SDK - Software Development Kit

SMTP - Simple Mail Transfer Protocol - a text based protocol for sending mails.

SN.exe - Strong Name Utility - a tool to make strong named assemblies.

SQL - Structured Query Language - a language for management of data in a relational structure.

SOAP - Simple Object Access Protocol - a protocol used for exchange of xml based messages across networks.

TCP - Transmission Control Protocol - data exchange protocol across networks using streamed sockets.

UI - User Interface

URI - Uniform Resource Identifier

URL - Uniform Resource Locator

UDDI - Universal Description, Discovery and Integration - a platform-independent registry where businesses can publish and discover web services over the internet.

WAP - Wireless Application Protocol - a protocol that enables access to the internet from mobile phones and PDAs.

WC - Windows CardSpace - Part of the .NET 3.0 framework, it enables users to securely store a person's digital identities, and provides a unified interface for choosing the identity to use for a particular transaction, like logging in to a website.

WCF - Windows Communication Foundation - Part of .NET 3.0 framework, that enables communication between applications across machines.

WF - Windows Workflow Foundation - Part of .NET 3.0 framework, used for defining, execution and management of reusable workflows.

WKO - Well Known Object - These are MBR types whose lifetime is controlled by the server's application domain.

WPF - Windows Presentation Foundation - Part of .NET 3.0 framework, is the graphical subsystem of the .NET 3.0 framework.

WSDL - Web Services Description Language - is an XML based language for describing web services.

WML - Wireless Markup Language - is a content format for those devices that use Wireless Application Protocol.

VB.NET - Visual Basic .NET - a .NET based language. It is the .NET successor to VB6, one of the most widely used languages in the world.

VBC.exe - VB.NET Compiler

VES - Virtual Execution System - It provides the environment for execution of managed code. It provides direct support for a set of built in data types, defines a hypothetical machine with an associated machine model and state, a set of control flow constructs, and an exception handling model. To a large extent, the purpose of the VES is to provide the support required to execute the Common Intermediate Language instruction set.

VS - Visual Studio

VSS - Visual SourceSafe - A source control tool by Microsoft, used to maintain source code versions and security.

VSTS - Visual Studio Team System - an extended version of Visual Studio .NET. It offers a set of collaboration and development tools for the software development process.

XML - Extensible Markup Language - is a general purpose well formed markup language.

CLR - CTS - CLS

The .NET Framework provides a runtime environment called the Common Language Runtime or CLR (similar to the Java Virtual Machine or JVM in Java), which handles the execution of code and provides useful services for the implementation of the program.

The Common Language Runtime is the underpinning of the .NET Framework. CLR takes care of code management at program execution and provides various beneficial services such as memory management, thread management, security management, code verification, compilation, and other system services. The managed code that targets CLR benefits from useful features such as cross-language integration, cross-language exception handling, versioning, enhanced security, deployment support, and debugging.

Common Type System (CTS) describes how types are declared, used and managed in the runtime and facilitates cross-language integration, type safety, and high performance code execution.

The Common Language Specification (CLS) is an agreement among language designers and class library designers to use a common subset of basic language features that all languages have to follow.

CLR Execution Model:


Latest Resources:

Common Language Runtime Overview - a good introduction to the CLR.

The Common Language Infrastructure (CLI) - the Shared Source CLI provides developers with the source code for a working CLI implementation.

Common Language Runtime (CLR) Fundamentals - to understand the fundamental concepts of programming in the Common Language Runtime (CLR) environment.

About the Common Language Runtime (CLR) - this article provides fine, clear points about the CLR.