Getting Greater Business Value from a Web Service
Thinking of bigger and wider opportunities


Okay, so you've developed what you believe is a useful piece of code and exposed it to the world as a Web service. All comers may now rejoice in the warmth and glow of your artful coding. But who are the people and what are the business systems benefiting from your Web service? Are you reaching all the possible consumers of the service? What more would you have to do to make it as useful as possible to an even greater audience? And, to ask a simple but important question, do you even know exactly who's on the other end using the service?

The truth is, there's only so far you can go to make certain that the people you want to have access to your service are the ones actually using it. Thus begins the careful balancing act between concepts like functionality and performance...and the increasingly important issue of security. Wondering where this is going? Keep reading.

Moving Beyond Tic-Tac-Toe
Microsoft's .NET vision of XML Web services, together with the powerful set of tools MS provided for Web services deployment and use, helped create a new paradigm with tremendous impact. The ability to provide services between systems in the same way we use Web pages to deliver services creates value, as it gives us the opportunity to realize the promise of distributed computing among disparate platforms.

The current use of Web services focuses more on solving real business problems and less on performing the trivial functions initially used to illustrate the concept. (Although personally I still think playing tic-tac-toe via SOAP requests is a great way to experiment with new technology!) Businesses adopt Web services technology to generate revenue and lower costs. In doing so, they deal with the challenge of exposing their data as a service to a particular audience. When fulfilling that service, they decide what degree of security is needed so only that audience (and hopefully it's a paying one) has access to it.

Multiple Audiences, Multiple Security Levels
If you're reading this article you've probably already written a Web service or two, and you've probably also read plenty of white papers and articles on authentication and authorization options in Web services. The examples such articles use frequently imply that a Web service either has a particular level of security or it does not. This in turn suggests that a Web service has only a single audience, for whom that level of security is sufficient.

As developers, we want our code to be as useful as possible to the greatest possible number of users. Why? Well, job security may be one reason. More often it's hubris (noted as one of the three virtues of a truly great programmer - along with laziness and impatience - by Larry Wall and others in Programming Perl, a book now standard in the personal library of most developers). We love the idea that something we created is out there helping to make the world a better place.

But for us to keep playing with a new technology like XML Web services, somebody has to be footing the bill. You got it, business users! And what do they want? They want the greatest possible revenue for the least possible investment. Unfortunately, business users don't often believe that first-person shooter games played over the company network embody the greatest revenue opportunities.

So you need an approach to developing Web services that gives your code the greatest possible use and produces the greatest possible revenue. This means moving away from the idea of services that only meet the needs of a singular group of stakeholders. And this in turn means getting away from the idea that a service is either secured or it isn't. We need to think bigger and more broadly about what a Web service can do for the business.

Different Strokes for Different Folks
I was up late last night trying to think of some good examples to explain this approach. Okay, that's a lie. I was supposed to be up late working on this article but instead I was online checking the stats of my fantasy hockey team. It's annoying, because it takes forever to go through all the box scores and then calculate in my head how many points I've earned using our league's scoring system. Of course I could have just waited until the next morning when the fantasy site grabs the box scores from a major online news provider and updates the official league scoring. But that requires waiting and I hate waiting (impatience being another of the virtues of a programmer).

It then occurred to me there's an organization out there that controls the origination of that statistical data and publishes that data to subscribing sources. Who's interested in that data? Sports sites, portal/content providers, newspapers, and of course, fantasy hockey sites - to name just a few. Each one of these data consumers varies in its needs as to the timeliness of the data. Accordingly, each attaches a particular value to the data. The sports Web sites may want the data immediately with extensive detail, while the weekly newspaper in a small town may prefer consolidated information weekly.

The important point to consider here is that there's an initial owner of that data who then publishes it out to multiple consumers (and don't forget to consider both the potential internal and external consumers of the service). Honestly, I've no idea how this happens today, but the idea illustrates my point well. Data already exists that can be delivered to multiple stakeholders either for profit or, in other instances, as a cost-cutting measure.

The goal then is to create Web services that provide different levels of service or data to those specific consumers. Perhaps various rates are charged depending on the service. Perhaps consumers even get the lowest level of service free. Accomplishing this raises a number of security issues - such as limiting access according to the level of service purchased - while still providing sufficient functionality and performance.

Figure 1 illustrates the initial model for this approach. If the hockey stats example doesn't seem relevant enough to you and your company, then let's consider other possibilities. How about a manufacturer that updates its production schedule every few hours based on actual orders received through trading partner Web sites and other sales order entry systems (see Figure 2)? While supply chain partners would benefit in receiving hourly updates on materials needed for production, other stakeholders - such as company managers and executives - may need only weekly roll-ups of the data, and this information could be supplied to them through personal portals on the intranet. Still other employees may be interested only in monthly roll-ups, just to give them an idea of how the company is doing. Additionally, investors or analysts may be allowed access only to quarterly roll-ups on a delayed basis. Get the idea? It's all about thinking bigger, in terms of all the different people benefiting from the service.

So how do you develop and implement a service to answer the broad range of needs of all stakeholders? The answer is not much different from the way you're already developing Web services. This article is aimed more at discussing the overall approach to planning and using Web services, and how business should think of them, than it is about a change in the way we actually develop.

A Working Example
For discussion purposes, let's walk through an example of how we might do this. Let's go back to the hockey game because, well, I like hockey.

We completed the first steps. We identified the data under our control that is useful and valuable to a number of consumers. Then, we identified the possible consumers and determined levels of value they might attach to the data and the probable details desired. And then we considered the degree of security required.

For our example, let's assume that, as games are played, the data is entered using an application at the stadium location and then submitted to a central data store that we control. Using Visual Studio .NET, we create a Visual Basic COM object that queries the data store every 60 seconds and formats the data into an XML document (including trimmed-down versions at the various delayed levels). We expose this as a Web service named Statistics Service, made available through an ISAPI listener. The Microsoft development environment makes it easy for us to create test clients to be certain the service works as planned.
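To make the idea of per-level documents concrete, here's a minimal sketch of how the formatting step might look. The function and field names (build_stats_doc, rows, the "live"/"daily"/"weekly" levels) are illustrative assumptions, not the article's actual implementation, and Python stands in for the Visual Basic COM object:

```python
# Hypothetical sketch: format game rows as an XML statistics document,
# trimming detail for the delayed (non-premium) access levels.
import xml.etree.ElementTree as ET

ACCESS_LEVELS = ["live", "daily", "weekly"]  # most to least detailed

def build_stats_doc(rows, level="live"):
    """Return an XML document; only the premium feed carries full detail."""
    root = ET.Element("statistics", level=level)
    for row in rows:
        game = ET.SubElement(root, "game", id=str(row["id"]))
        ET.SubElement(game, "score").text = row["score"]
        if level == "live":
            # Full play-by-play detail is reserved for the premium feed.
            ET.SubElement(game, "detail").text = row["detail"]
    return ET.tostring(root, encoding="unicode")
```

A scheduled job could call this once per level every 60 seconds and cache the results, so each request just fetches the prebuilt document for its tier.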

Authorization and Authentication
On receiving a request for the document, we identify the principal making the request, then determine the authorization level. Finally, we answer the request with the XML document the subscriber is authorized to receive. If the subscriber is one of the sports Web sites requesting up-to-the-minute information, we want to corroborate the subscriber, since a premium is paid for this data. But if the requestor is permitted access only to the most recent weekly roll-up, then corroboration isn't critical. In fact, if a request comes in and we can't identify the principal or for some reason authentication or authorization fails, we simply send the weekly roll-up - offering the lowest level of service as a default.
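The policy just described - identify the principal, look up the authorization level, and degrade gracefully to the weekly roll-up - can be sketched in a few lines. The subscriber keys and document names below are hypothetical stand-ins for illustration:

```python
# Hypothetical sketch of the request-handling policy: unknown or failed
# credentials default to the lowest level of service rather than an error.
SUBSCRIBERS = {  # illustrative subscriber database: key -> access level
    "sports-site-key": "live",
    "daily-paper-key": "daily",
}

def answer_request(hashedkey, documents):
    """Return the XML document the caller is authorized to receive."""
    # Anyone we can't identify gets the weekly roll-up as a default.
    level = SUBSCRIBERS.get(hashedkey, "weekly")
    return documents[level]

documents = {"live": "<live/>", "daily": "<daily/>", "weekly": "<weekly/>"}
assert answer_request("sports-site-key", documents) == "<live/>"
assert answer_request("unknown-key", documents) == "<weekly/>"  # graceful default
```

The design choice worth noting is that failure is never a dead end: the cheapest document doubles as the error path, so the service always returns something useful.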

Let's start with the first piece - identifying the principal. The Web service should be designed to handle several security levels accordingly. Since this service will be available over the Internet we won't know the specific IP addresses of all of the consumers, so we want to leverage the authentication features of the protocol used to exchange messages, i.e. HTTP.

Microsoft Internet Information Server 5.0 (IIS) supports all of the various authentication mechanisms for HTTP shown in Table 1 (which is borrowed from MSDN).

Regardless of the security mechanism chosen at each level, we'll create Windows 2000 accounts for each consumer allowed to access the Web service and configure ACLs appropriately on the resources that make up the service. As a default, we also create an anonymous guest account that will have access to the lowest level of service.

For a nonpaying user there's no need for authentication - we'll allow a standard or guest login. Paying users will have their own unique usernames and passwords. For daily and up-to-the-minute users there should be a more secure method that discourages hackers. This data is usually possible to obtain by alternative methods (e.g., one could view it free on a subscribing sports site, or watch all the games on home satellite and write really, really fast). However, since some subscribers are paying a relative premium to have this data fed directly into their systems on request, it should be just difficult enough to make hacking it, or hijacking the session, not worth the trouble.

A reasonable mechanism in this case is to use hashing to transmit client credentials. Usernames and passwords should not be transported as plain text, so SSL would be used. But using SSL for document transmission would be detrimental to performance. One option to consider is splitting Statistics Service into two services, Stats Service (occurring over HTTP) and Logon Service (which precedes Stats Service for paying customers and will require SSL). Additional options for security mechanisms based on your specific business requirements can be found in the Global XML Web Services Specifications - which offer a security language for Web services. The specifications describe enhancements to SOAP messaging providing three capabilities: credential exchange, message integrity, and message confidentiality. These three mechanisms can be used independently or in combination to accommodate a wide variety of security models and encryption technologies.
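The original service generated its logon key with the Windows Cryptography APIs; as a minimal stand-in, here's a sketch using an HMAC over the username and an expiry timestamp. The secret, lifetime, and key format are all assumptions for illustration:

```python
# A minimal sketch of a short-lived hashed logon key: an HMAC binds the
# username to an expiry time, so the server can verify and age out keys
# without storing session state. Not the article's implementation.
import hashlib
import hmac
import time

SERVER_SECRET = b"replace-with-a-real-secret"  # illustrative only
KEY_LIFETIME_SECONDS = 3600                    # key expires in an hour

def make_hashedkey(username, now=None):
    expires = int(now if now is not None else time.time()) + KEY_LIFETIME_SECONDS
    mac = hmac.new(SERVER_SECRET, f"{username}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    # The expiry travels with the key so the server can reject stale keys.
    return f"{username}:{expires}:{mac}"

def verify_hashedkey(key, now=None):
    """Return the username if the key is authentic and fresh, else None."""
    username, expires, mac = key.rsplit(":", 2)
    expected = hmac.new(SERVER_SECRET, f"{username}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    fresh = int(expires) > int(now if now is not None else time.time())
    return username if hmac.compare_digest(mac, expected) and fresh else None
```

Because the key is self-verifying, Stats Service can check it without a round-trip back to Logon Service on every request.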

Logon Service
In our example the client application begins by sending a logon request over SSL to the Logon Service that contains two parameters (username, password). The request is dispatched to a Logon COM object that accepts the username and password as input. They are authenticated against the known user accounts. The COM object then generates a hashed logon key (hashedkey) which is passed back to the client application and will expire in minutes or hours. The key is generated using Windows Cryptography APIs. The hashedkey becomes the first parameter of each request to Stats Service. The hashedkey will be used as an identifier to map to each user's authorization level according to their corresponding ACL so each one can then be served the corresponding XML document. If the username/password combination sent to Logon Service was the guest ID, then a hashedkey is still returned (always a standard and static hashedkey for guest). When the subsequent request comes to Stats Service, it's mapped to the guest access level. The Stats COM object will only grant access to the weekly roll-up of the statistics and dispatch the stripped-down XML document back to the client.

Statistics Service also needs to maintain some level of service in the event that authentication and authorization fail. This is handled at two levels. The first potential failure is that Logon Service fails to find the username and password supplied. If the Logon COM object can't locate a non-guest username in the list of known users, it will by default return the guest hashedkey to the client. The client application needs to detect this if it's a paying customer for whom limited "guest" information is not acceptable or even useful. Two parameters go back to the client application - hashedkey and accesslevel - and the client application then determines whether or not to place the subsequent request to Stats Service based on its needs. The next consideration is if Logon Service fails altogether. In this instance, the client receives no hashedkey at all, so it submits the generic, static hashedkey that permanently maps to guest-level authorization. (Someone wanting only guest access would never even need to use Logon Service, but we'll anticipate the possibility nonetheless.)
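The client-side half of that failure handling can be sketched as follows. The static guest key, the level names, and the helper are hypothetical; the logic mirrors the two failure cases above:

```python
# Hypothetical sketch of the client's decision: inspect the accesslevel
# returned by Logon Service and decide whether calling Stats Service is
# still worthwhile; if Logon Service is down entirely, fall back to the
# published static guest key.
GUEST_HASHEDKEY = "static-guest-key"  # published alongside the service

def choose_stats_key(logon_response, minimum_level,
                     levels=("weekly", "daily", "live")):
    """Return the key to send to Stats Service, or None to skip the call."""
    if logon_response is None:
        # Logon Service failed altogether: use the static guest key.
        return GUEST_HASHEDKEY
    hashedkey, accesslevel = logon_response
    # A paying client demoted to guest may not want the data at all.
    if levels.index(accesslevel) < levels.index(minimum_level):
        return None
    return hashedkey

assert choose_stats_key(None, "weekly") == GUEST_HASHEDKEY
assert choose_stats_key(("abc123", "live"), "daily") == "abc123"
assert choose_stats_key(("guestkey", "weekly"), "daily") is None
```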

The description of these in and out parameters, as well as the static guest key, would be published through UDDI in the WSDL for the services. For a public service you may choose to publish it at http://uddi.org, but for internal or more private services the Microsoft platform makes this part even simpler since Windows.NET server (when released) will have UDDI built in.

So Logon Service has the following signature:

Logon([in]username, [in]password, [out]hashedkey, [out]accesslevel)

And Stats Service:

Stats([in]hashedkey, [out]statsdoc)
There's nothing groundbreaking about the coding methods here. What's different is the business approach to delivering the service of a single coding effort to the greatest possible number of consumers, despite varying needs surrounded by varying security levels.

There are further security measures surrounding the COM objects that you can take advantage of through the Common Language Runtime (CLR). However, this article isn't about security; you can find out more about that by studying it separately. Even in a more common scenario like the manufacturing example, the goal is still to allow code written once to meet the needs of various stakeholders and, as the default, to always offer the lowest common denominator of service. For highly secure data, a variation of this approach lets you check for a known IP address and/or certificate from a given partner before publishing the requested XML document to them. Without that check, the service still offers the rolled-up, delayed data.
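That "known partner" variation can be sketched quickly. The address range and function names are assumptions for illustration, using a documentation example network:

```python
# Hypothetical sketch: serve the full document only to requests from a
# known partner network; everyone else gets the rolled-up, delayed data.
import ipaddress

PARTNER_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example range

def select_document(client_ip, full_doc, delayed_doc):
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in PARTNER_NETWORKS):
        return full_doc      # trusted partner gets the full feed
    return delayed_doc       # default: rolled-up delayed data

assert select_document("203.0.113.7", "FULL", "DELAYED") == "FULL"
assert select_document("198.51.100.1", "FULL", "DELAYED") == "DELAYED"
```

In practice this check would sit in front of the same tiered-document lookup described earlier, so an unrecognized caller still receives a useful, if delayed, response.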

It's a Web service for the masses, or at least for a whole lot more people than I might have tried to target previously. Now if only there was a Web service that could have accurately predicted that my star goalie would be out several weeks with a twisted ankle.

About the Author

Paul Hernacki is a technical evangelist for Extreme Logic and is the bridge between the technical and business audiences. He articulates Extreme Logic’s solutions and explains how the Microsoft .NET platform delivers business value.
