IT strategies are at a pivotal point in their evolution, one that could change the way people think about the delivery of services. IT organizations, whether they realize it or not, have always delivered services. Most IT professionals have been involved in some way with the ITIL and ITSM methodologies. These are terrific tools for the development, creation, implementation, and management of services. However, to take the next step in our evolution, we need to alter our concept of what a service is and what it is made of. This means breaking a service down to the quantum level.
According to ITILv1, a service is “A set of related functions provided by IT systems in support of one or more business areas, which in turn may be made up of software, hardware and communications facilities, perceived by customer as a coherent and self-contained entity.” I use the ITILv1 definition instead of the definitions from versions 2 and 3 because those later definitions are more abstract. Everything is abstract today, which is a good thing, but you’ll see in the next section why quantum service theory is a better model.
WHAT IS QUANTUM SERVICE THEORY?
From the ITIL definitions, a service is usually made up of different components, which could be people, processes, and technologies. Here is where the ideas diverge. In the current view, a service is made up of things related to the service. But these items are not just things: they are services themselves, and in turn are also made up of one or more services. If you put an e-mail service in a super-collider and split it, what would result? A group of lower-level services. Not people, processes, or technologies, but more services. Services are also objects (not to be confused with programming objects, although service objects have a lot in common with them, as we will see). Objects have attributes, dependencies, and methods.
The theory has the following principles:
- Everything within IT is a service. Everything, from what the end-user sees all the way to the power going to the servers.
- Every service is an object that contains attributes and methods.
- Every service object has a set of basic attributes, including but not limited to:
  - Service owner
  - SLA for fault response
  - Dependencies – we could go deeper into how this service uses these other services, but we will stay at this level for this discussion
  - Expected level of performance
  - Expected nominal behaviors – in essence, how we monitor this service
- Every service is broken down into lower level services.
- The breakdown continues until there are no more parts that we can assign the above attributes to.
- Combine like services into shared services. Examples include compute, network, and storage, but this also means moving up the layers to the middleware and combining web services and databases. Move as far up the abstraction layers as you can, until you can no longer combine services.
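The principles above can be sketched in code. What follows is a minimal illustration, not a prescribed implementation; the attribute names and the example services (an e-mail service built from hypothetical smtp, network, and storage services) are assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """A service object carrying the basic attributes listed above."""
    name: str
    owner: str                  # service owner
    fault_sla_hours: float      # SLA for fault response
    expected_performance: str   # expected level of performance
    nominal_behavior: str       # expected nominal behaviors (how we monitor it)
    dependencies: list["Service"] = field(default_factory=list)

def decompose(service: Service) -> list[Service]:
    """Break a service down into lower-level services until no part
    remains to which the attributes above can be assigned."""
    if not service.dependencies:
        return [service]        # quantum level reached: no further breakdown
    parts = []
    for dep in service.dependencies:
        parts.extend(decompose(dep))
    return parts

# The e-mail service "split in the super-collider": what comes out
# is not people, processes, or technologies, but more services.
network = Service("network", "net-team", 2, "1 Gb/s", "link monitoring")
storage = Service("storage", "infra-team", 4, "10k IOPS", "latency alerts")
smtp = Service("smtp-relay", "msg-team", 8, "500 msg/s", "queue depth",
               dependencies=[network])
email = Service("e-mail", "msg-team", 8, "99.9% uptime", "synthetic probes",
                dependencies=[smtp, storage])

print([s.name for s in decompose(email)])  # → ['network', 'storage']
```

The recursion stops exactly where the theory says the breakdown stops: at services with no lower-level dependencies left to split.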
The main difference in my theory is this: ITIL views a service as a group of things that make up the service, and only that upper service has attributes. I believe a service is a group of services, each of which is an object with attributes, and that services need to be broken down all the way to the quantum level.
WHY QUANTUM SERVICE THEORY?
The benefits of quantum service theory (QST) include services that are easier to understand, great value in troubleshooting, and operations gaining a much deeper understanding of a high-level service from the new service documents, to name only a few. These are the basic, inherent benefits of this methodology; however, there are others that become very interesting when you consider the broader, bigger picture.
There is a growing trend: Everything as a Service (XaaS). Its definition differs depending on a person’s background. A developer will look at XaaS and say, “I can break my applications down into service components, which can then be rendered as reusable, shared application services.” From the infrastructure viewpoint, examples of XaaS would be compute, storage, archiving, backups, security, authentication, monitoring, telecom, and so on. All of these can be services and can be rendered on premises or off premises.
You can see that there is a relationship between QST and XaaS. QST is XaaS taken down to the lowest possible part that still provides a usable resource. XaaS is an opportunistic concept: you create a service out of something common and reusable, and make it agile and easy to integrate with. QST requires that you treat everything we have and do as a service. There is also better transitivity of service levels and qualities. In QST, the lower-level services inherit the quality and availability characteristics of the services above them. When performing architectural functions, this saves a large amount of time in determining where we can place an application and whether an existing shared service can handle the new application or service. XaaS might suggest moving a service like a shared drive to an external provider; QST would suggest moving all end-user activities that can be moved to a third party, or consolidating them.
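The transitivity of service levels can be made concrete. Here is a small sketch, under the assumption that availability is expressed as a single percentage figure: a shared service inherits the strictest requirement of every service above it, so deciding whether it can handle a new consumer becomes a simple comparison.

```python
def can_host(shared_availability: float, consumer_requirements: list[float]) -> bool:
    """A shared service inherits the availability requirements of the
    services above it, so it must meet the strictest one (hypothetical
    numeric model using availability percentages)."""
    return shared_availability >= max(consumer_requirements)

# A shared database service offering 99.95% availability:
print(can_host(99.95, [99.9, 99.5]))  # True: existing consumers are satisfied
print(can_host(99.95, [99.99]))       # False: a 99.99% consumer cannot be placed here
```

The same pattern applies to any inherited attribute (throughput, fault-response SLA, and so on): compare the shared service’s attribute against the strictest requirement flowing down from above.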
The next concept that comes from QST is the idea of what I define as mobility boundaries. A mobility boundary is a grouping of services that, when grouped as such, can be moved from one location to another. The location can be another data center or even a cloud provider. When you apply QST and group the services correctly, mobility boundaries are easy to identify, since everything is a service and you have identified all the dependencies. If the correct attributes are collected and identified, the mobility boundaries can be made more complex. For example, an application might use a shared database. Because of the service object attributes, you have identified the performance levels the database service must provide. You could place the database service in a separate mobility boundary if, when it is moved, the performance rendered to the applications using it is within spec. This means you can have web servers in the cloud and database servers on premises. That was a simple example, but what if you want to consolidate databases to a central site to further enable the use of big data? Using the principles of the theory, this can be accomplished safely. You can also use this concept to move service groupings to other locations temporarily while maintenance is occurring.
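The mobility-boundary check can also be sketched. This is an illustrative model, not a prescribed one: the service names, the latency requirements on each dependency, and the assumed cost of a cross-location hop are all hypothetical numbers chosen for the example.

```python
# Each dependency (consumer, provider) carries the maximum latency the
# consumer tolerates, taken from the consumer's service object attributes.
deps = {
    ("web", "database"): 20,   # web tier tolerates up to 20 ms to the database
    ("web", "auth"): 50,
    ("batch", "database"): 10, # batch jobs need the database very close by
}
CROSS_BOUNDARY_LATENCY_MS = 15  # assumed cost of a cloud <-> on-premises hop

def is_valid_boundary(boundary: set[str]) -> bool:
    """A grouping is a mobility boundary if every dependency that crosses
    it still meets the consumer's required performance after the move."""
    for (consumer, provider), max_latency in deps.items():
        crosses = (consumer in boundary) != (provider in boundary)
        if crosses and CROSS_BOUNDARY_LATENCY_MS > max_latency:
            return False
    return True

# Moving only the web tier to the cloud keeps every crossing within spec:
print(is_valid_boundary({"web"}))    # True: 15 ms satisfies both 20 ms and 50 ms
# Moving only the batch service breaks its 10 ms database requirement:
print(is_valid_boundary({"batch"}))  # False
```

This is the simple web-and-database example from above expressed as a check: because QST captures dependencies and performance attributes for every service, validating a candidate boundary reduces to inspecting the dependencies that cross it.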
What was presented in this article is just a theory. Its purpose, like that of all theories, is to generate conversation and motivate research in the area. Further research should be done to solidify the proposed methodology or to integrate it into enterprise architecture frameworks. The introduction of the virtualized data center (VDC) revolutionized data center architectures everywhere in the world. The next evolution is to look at your enterprise and not just think of services that fit nicely into a service catalog (service catalogs are a must, by the way), but to view everything within the data center, and even the organization, as a grouping of smaller services. This will enable you to take advantage of new external services and also provide much more agility within your EA designs. In a future issue, we’ll discuss the quantum theory of business services and how the two theories tie together.