By Jason Bloomberg
February 22, 2014 04:00 PM EST
Whether you're a Cloud Computing aficionado, an enterprise integration specialist, or an IT executive, it's hard to have a conversation today without mention of REST. Representational State Transfer, or REST to the cognoscenti, is an architectural style that treats distributed computing problems as though they were Web problems. On the Web you have browsers chatting via HTTP to Web servers, and those servers can work whatever magic they need to in order to serve up the full wealth we've all come to expect from the World Wide Web. Take those basic Web patterns, extend them to general distributed computing problems (including Cloud and legacy integration), and voila! You have REST.
However, while REST has achieved substantial success in simplifying software interfaces and thus facilitating many forms of integration, it is still inherently inflexible. What works well for humans using browsers often doesn't apply to arbitrary software clients. And most fundamentally, REST does not address the most difficult distributed computing challenge of all: how to deal with dynamic business context.
Freeing Ourselves from REST's Four Constraints
The majority of RESTafarians, as the aforementioned cognoscenti have so eloquently dubbed themselves, treat REST as an Application Programming Interface (API) style. After all, REST does call for a uniform interface, a requirement that in one fell swoop addresses many of the knottier problems of Web Services and other, more tightly coupled API styles that came before. Compared to the complexities of Web Service operations or object-oriented remote method calls, REST's uniform interface is the essence of simplicity. Furthermore, there's no question that REST's uniform interface requirement is at the heart of what analysts like to refer to as the API Economy.
But there is more to REST than a uniform interface. In fact, REST isn't an API style at all. It's an architectural style. As an architectural style, REST consists of a set of constraints on software architecture. In other words, feel free to follow what architectural rules you like, but if you want to follow REST you must comply with the following RESTful constraints:
- Separation of resources from representations
- Manipulation of resources by representations
- Self-descriptive messages
- Hypermedia as the engine of application state, or HATEOAS
Let's take a quick tour of these constraints to put them in plain language. In so doing, we'll also see why REST fails to adequately address the problem of dynamic business context.
The first constraint is essentially the encapsulation requirement. Resources are abstractions of capabilities on a server, while the representations are what the resources provide to the client. For example, a resource might be a PHP script running on a Web server, and the representation might be the HTML file it returns when a browser makes a GET request of it. But the browser never, ever gets the PHP itself; it only sees the HTML. The PHP is forever hidden from view.
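To make this concrete, here's a minimal sketch in Python using the Flask framework in place of the PHP example above; the route and order data are hypothetical. The function body stands in for the hidden resource implementation, and only the HTML representation it returns ever crosses the wire.

```python
# A minimal sketch: the resource's implementation stays on the server;
# the client only ever receives the representation it emits.
from flask import Flask

app = Flask(__name__)

@app.route("/orders/42", methods=["GET"])
def order_resource():
    # Hidden server-side logic: database lookups, business rules, etc.
    order = {"id": 42, "status": "shipped"}
    # Only this HTML representation reaches the browser.
    return f"<html><body>Order {order['id']}: {order['status']}</body></html>"

if __name__ == "__main__":
    app.run()
```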
The second constraint calls for the uniform interface. The only way that clients are able to interact with resources is by following hyperlinks in representations - in other words, making GET, POST, PUT, or DELETE requests to the URI of the resource, assuming we're using HTTP as our transport protocol, which we usually are.
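To illustrate the uniform interface, here's a rough Python sketch using the requests library; the URI and payloads are hypothetical. No matter what the resource does internally, every interaction reduces to one of a handful of verbs against its URI.

```python
# A sketch of the uniform interface: four verbs, one resource URI.
import requests

base = "https://api.example.com/orders"           # hypothetical resource collection

requests.get(f"{base}/42")                        # retrieve a representation
requests.post(base, json={"item": "widget"})      # create a new resource
requests.put(f"{base}/42", json={"qty": 3})       # replace the resource's state
requests.delete(f"{base}/42")                     # remove the resource
```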
The third constraint, self-descriptive messages, is actually quite straightforward: all the data and metadata the resource needs to process a request must be contained in that request, and correspondingly, the resource must include in its representation all the metadata the client will need to understand that representation. In other words, REST requires that there be no out-of-band metadata: information pertinent to the interaction that doesn't actually appear in the interaction. Furthermore, the interaction must be stateless: the resource isn't expected to keep track of any information pertinent to any particular client.
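Here's a hedged sketch of a self-descriptive, stateless request in Python; the URI and payload are made up. Everything the resource needs to process the message travels in the message, and nothing presumes a server-side session.

```python
# A sketch of a self-descriptive message: data plus all necessary metadata
# ride together in the request; the server keeps no per-client state.
import requests

response = requests.put(
    "https://api.example.com/orders/42",      # hypothetical resource URI
    data='{"qty": 3}',
    headers={
        "Content-Type": "application/json",   # how to parse the body
        "Accept": "application/json",         # what representation to return
    },
)
```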
The problem with this third constraint, of course, is that out-of-band metadata is very handy in many situations. Take security-related metadata, for example. REST calls for all such metadata to be in every request, which led to the development of the OAuth (Open Authorization) standard. Yes, OAuth is quite powerful and Web friendly. Yes, OAuth is making inroads into the enterprise. But do you really want to restrict the security protocols for all of your interactions to OAuth and nothing but OAuth? Probably not.
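For illustration, an OAuth-protected request carries its security metadata in-band on every single call, something like this Python sketch (the token and URI are placeholders):

```python
# Security metadata travels in-band as a bearer token, keeping the request
# self-descriptive. The token would come from a prior OAuth handshake;
# "<access-token>" is a placeholder, not a real credential.
import requests

response = requests.get(
    "https://api.example.com/orders/42",
    headers={"Authorization": "Bearer <access-token>"},
)
```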
If you have a more complex interaction than a simple request, then the ban on out-of-band metadata becomes increasingly impractical. For example, let's say you're trying to support a complex business process by building a composite application. You're trying to follow REST so you're composing resources. But then you find you need to somehow deal with a range of policies, business rules, or other out-of-band metadata that impact the behavior of your composite application for certain users but not others. REST alone simply doesn't deal well with such complexities.
And then there's the fourth constraint: the dreaded HATEOAS. REST separates state information into two types: resource state and application state. Resource state is shared or persisted state information on the server, while application state is specific to the individual client, who negotiates the application (think abstracted Web site) by following hyperlinks. The HATEOAS constraint hammers home the fact that the point of REST is building distributed hypermedia systems, where the client is responsible for running hypermedia-based applications. In other words, the hypermedia contain the business context for the interactions between client and server.
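A minimal HATEOAS sketch in Python, assuming a hypothetical JSON representation with an _links section: the client advances the application by following the hyperlinks the server offers rather than by hardcoding URIs.

```python
# Hypermedia as the engine of application state: the next legal action is
# discovered in the representation itself. Link names and URIs are hypothetical.
import requests

rep = requests.get("https://api.example.com/orders/42").json()
# The representation might look like:
# {"status": "unpaid",
#  "_links": {"payment": {"href": "https://api.example.com/orders/42/payment"}}}

if "payment" in rep.get("_links", {}):
    # Application state advances by following the server-supplied hyperlink.
    requests.post(rep["_links"]["payment"]["href"], json={"card": "<token>"})
```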
Hypermedia drive the Web, of course, but once you start breaking down REST's notions of client and server, the power of hypermedia starts to wane. After all, enterprises often want to build or leverage business applications that offer more than a simple Web site, especially when there's a shared business context across nodes, and those nodes are more than just clients and servers. Hypermedia - and REST - simply weren't built for such complex, dynamic situations.
The Devil in the Details
There's a very good reason why REST eschews out-of-band metadata and shared business context beyond the scope of hypermedia: both of these requirements are inherently dynamic, and furthermore, depend upon multiple actors - actors who may change over time. By constraining the architecture to avoid such complexities, REST provides a useful set of simplifications that have provided unquestionable value throughout the API economy.
The challenges in the section above, however, go well beyond REST. The problems we're discussing have plagued software interfaces in general, from the earliest screen-scraping programs to object-oriented APIs to Web Services to today's RESTful APIs. A software interface is, at its core, an agreement between the people building the provider and consumer endpoints that the interface will behave in a particular way. Loose coupling, after all, relies upon an interface contract that fixes the behavior of the API so that the parties involved can make various decisions about their software under the covers without breaking the interaction. But woe to those who dare to change the contract, or who want to consider metadata the contract knows nothing about!
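To see why the contract is so rigid, consider this minimal Python sketch; the types and fields are hypothetical. The contract enumerates exactly what the parties may exchange, so any policy or metadata it doesn't name simply has nowhere to go.

```python
# A sketch of an interface contract: both sides are free to change their
# internals, but neither can pass metadata the contract doesn't define.
from dataclasses import dataclass

@dataclass
class OrderRequest:      # the agreed contract: these fields and no others
    item: str
    qty: int

def place_order(req: OrderRequest) -> str:
    # Provider internals can change freely behind this signature...
    return f"ordered {req.qty} x {req.item}"

# ...but a tenant-specific policy or new security token has no slot here
# short of changing, and thus breaking, the contract.
```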
Out-of-band metadata and business context outside hypermedia applications are by definition exterior to the contract, and thus aren't amenable to any distributed computing architectural style that relies too heavily on static APIs. Therein lies the essential challenge of the API. To those analysts trumpeting the API Economy I say: the API Economy has nearly run its course. We've solved as many problems as we're going to solve with contracted software interfaces. But the business stakeholders still aren't happy. After all, it's their context - the business context - that APIs (whether RESTful or not) are so woefully unable to deal with. It's time for another approach.
The EnterpriseWeb Take
I'm not saying that we don't need APIs, of course, or that REST doesn't serve a useful purpose. I am declaring, however, that something critically important is missing from this picture. APIs are far too static to address issues of dynamic business context. We tried to address these issues with the SOA intermediary (typically an ESB), where the intermediary executed policy-based routing and transformation rules to abstract a set of inflexible Service interfaces, thus providing the illusion of flexibility, much as a flip book provides the illusion of motion. But even the most successful implementers of SOA were still unable to deal with most out-of-band metadata - and dynamic business context? That nut no one has been able to crack.
What we really need is an entirely different kind of intermediary. A smarter intermediary that knows how to deal with all types of metadata and furthermore, can resolve the more difficult challenge of business context - in real time, where the business lives. In the next issue of Loosely-Coupled I'll discuss how such a smarter intermediary might actually work. And naturally, if you want to see one in action, drop us a line.
Image credit: lin440315