Craig Burton

Logs, Links, Life and Lexicon: and Code


The Compuserve of Things

June 19th, 2014 · Daily Thesis

My good friend and mentor Phil Windley recently published “The Compuserve of Things”. As usual the information is well thought out and clearly articulated. It is so good I wanted to reiterate portions. Here is the summary:

 

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980s, or will we learn the lessons of the Internet and build a true Internet of Things?

This also reminds me very much of the saying Doc Searls gave me:

Freedom of choice does not equate to choice of captor

In the way the Internet is used today, we have become numb to the process of being herded into silos of captivity. The first step to remedying this is awareness.

A Real Open Internet of Things

If we were really building the Internet of Things, with all that that term implies, there’d be open, decentralized, heterarchical systems at its core, just like the Internet itself. There aren’t. Sure, we’re using TCP/IP and HTTP, but we’re doing it in a way that is closed, centralized, and hierarchical with only a minimal nod to interoperability using APIs.

When we say the Internet is “open,” we’re using that as a key word for the three key concepts that underlie the Internet:

  1. Decentralization
  2. Heterarchy (what some call peer-to-peer connectivity)
  3. Interoperability
I really like these concepts. It all begins with awareness.



APIs.json–API Metadata

May 21st, 2014 · Daily Thesis, The API Economy

Kin Lane–known as the API Evangelist–and Steven Willmott, CEO of 3scale, announced a new API format known as APIs.json that defines a simple but elegant way to track and discover API definitions. They also introduced APIs.io, a search engine for APIs that use the format.

The format is licensed under the API Commons agreement. Nice. And the source to the search engine is published and licensed under the API Commons as well. Double nice.

An excerpt from the spec:

Web APIs have quickly become an important part of Web infrastructure. However, the amount of machine readable meta-data about the location and capabilities of these APIs remains extremely limited.

A number of machine readable formats for describing Web API Interfaces have emerged. Unfortunately however adoption of these formats is not widespread and further, where they are used it is in turn not clear where to look for these definition files themselves. This document takes no position on which of these formats is best, but aims to solve the problem of where to find the descriptions in the first place.

In other words, it means to provide a mechanism to bootstrap meta-data discovery for APIs.

The objective of the format defined in this document is to provide a lightweight means for individuals and organizations to document the location of their APIs, the associated descriptions, human and machine readable specification and ancillary information (such as licensing, maintainers and so forth).
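To make the idea concrete, here is a minimal sketch in Python of what an APIs.json-style index might look like for a hypothetical provider. The field names follow my reading of the spec and all URLs are placeholders, so treat it as illustrative rather than authoritative.

```python
import json

# A minimal, hypothetical APIs.json-style index. Field names reflect my reading
# of the APIs.json spec and should be checked against the published format;
# all URLs are placeholders.
api_index = {
    "name": "Example Corp",
    "description": "Machine-readable index of the APIs we offer",
    "url": "https://example.com/apis.json",  # where this index file itself lives
    "apis": [
        {
            "name": "Example Orders API",
            "description": "Create and track orders",
            "humanURL": "https://example.com/docs/orders",   # docs for people
            "baseURL": "https://api.example.com/v1/orders",  # root URL for machines
            "properties": [
                # pointers to machine-readable interface definitions
                {"type": "Swagger", "url": "https://example.com/orders.swagger.json"}
            ],
        }
    ],
}

# Publish this at /apis.json so crawlers such as APIs.io can discover the APIs.
print(json.dumps(api_index, indent=2))
```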

This is such a good idea. I hope it gets widely adopted. The current methods for tracking APIs are poor and cumbersome at best. Let me know what you think.


From Nightmare to Wetdream

May 19th, 2014 · Daily Thesis

On the 9th of May, the worst-case scenario was handed down by the court of appeals: Oracle was awarded the right to copyright an API. The implications are staggering. I won’t go into it as it has been talked about by everyone everywhere.

The curious thing is–a week later, on the 16th of May–Apple and Motorola (Google) announced that they are dropping all outstanding lawsuits against each other and will collaborate on patent reform.

Now this week there are rumors that Apple and Samsung have opened out-of-court negotiations to drop their respective litigation.

Right away my knee-jerk reaction is “did the Oracle v. Google stupidity bring everyone else to their senses?”

Nah. Stuff like this never happens in just a week. There are lawyers involved. 

But I can pretend.


Axiom #5: Organizations Must Consume Core Competences of others through APIs

May 16th, 2014 · Daily Thesis, Identity, The API Economy

This is a joint post by myself and 3Scale’s Steven Willmott (njyx on twitter). The original axiom approach was dreamt up by me and we’ve iterated on it together since.

The Five Axioms of the API Economy

This blog post is the fifth in a series of blog posts outlining the axioms of the API economy – and covers the fifth and last axiom. However, it won’t be the last post in the series – once we’ve covered all the axioms, we’ll go on to discuss their consequences as well.

The five axioms we’re covering are as follows, in order:

  1. Everything and everyone will be API enabled.

  2. APIs are core to every cloud, social and mobile computing strategy.

  3. APIs are an economic imperative.

  4. Organizations must provide their core competence through APIs.

  5. Organizations must consume core competences of others through APIs.

Axiom #5: Organizations Must Consume Core Competences of others through APIs

The fifth and final axiom is the converse of the fourth. Just as there is value in exposing APIs for others to consume, there is value in consuming the APIs of others. So much value, in fact, that failing to do so is very likely a critical mistake. Examples of this have already been given with Axiom #2 – for social, cloud and mobile (the former two in particular), the amount of functionality now available via API to simply use “out of the box” is staggering.

More specifically – the effort required to rebuild the features of many PAAS, IAAS and SAAS systems to the same level of functionality and consistency is beyond the capacity of all but the largest companies. Moreover, replicating some services is simply impossible – while it is feasible to write the code for a microblogging service, Twitter is Twitter because it commands a global audience, not because of the functionality it provides.
The richness of functionality now available “by API” extends into multiple different realms:

  • Data: impressive data catalogues of many types – many unique.

  • Communications: SMS, Telephony, Instant Messaging, Video, Email.

  • Processing: IAAS and PAAS platforms, or specialized services like Animoto.

  • Transactions: Payment APIs, Subscription Billing and others.

With such a rich set of tools available, developing systems has become significantly easier than it was 5–10 years ago. While choices still need to be made between providers, and about which elements should be developed in-house, it is clear that many of these services can be of very significant value to an organization.
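As a sketch of what this looks like in practice, the snippet below delegates a non-core function (sending a shipping notification by SMS) to an external provider’s API instead of building messaging infrastructure in-house. The endpoint, payload fields and credentials are hypothetical placeholders, not any real provider’s API.

```python
import requests

# Hypothetical example of Axiom #5: consume another organization's core
# competence (messaging) via its API rather than rebuilding it in-house.
# The endpoint, fields and key below are placeholders.
SMS_API = "https://api.sms-provider.example/v1/messages"
API_KEY = "replace-with-a-real-key"

def notify_customer(phone: str, order_id: str) -> bool:
    """Send an order-shipped notice; returns True if the provider accepted it."""
    resp = requests.post(
        SMS_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"to": phone, "body": f"Your order {order_id} has shipped."},
        timeout=10,
    )
    return resp.ok  # the in-house code stays focused on the core product

if __name__ == "__main__":
    notify_customer("+15555550100", "A-1234")
```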

Any work an organization does that is not focused on its core competence is arguably overhead. In other words, if a comparable competitor were to use external APIs to deliver some important but non-core, non-differentiating functionality, and this worked at least as well as an in-house build, that competitor would have freed up time to move further ahead on its core product. The opportunity cost of building non-differentiating services in-house is great.

It is therefore an imperative for organizations to:

  • Identify what their own core competence / differentiation is and ensure that the majority of effort is targeted towards this.

  • Where possible, consume the APIs of others to bring in functionality that is non-core.

There is a final, subtle angle to this axiom – it suggests consuming the core competences of others, not any competence. This means that if an API is chosen for use, it should ideally be something which is clearly and obviously linked to the core business of the organization providing it. This ensures that:

  • The provider is likely to remain committed to the API.

  • The provider derives value (economic or otherwise) from third-party use of the API and hence has an incentive to continue supporting it.

These are all very strong indicators that the API will be maintained and improved over time. They also mean that in many cases (though not all) there are likely to be competing companies providing similar APIs – offering a switching path in case the chosen provider fails.

If an API is a secondary, side business for an API provider, or does not seem to be core, it may be a higher-risk integration – in general, assurances from the API provider should be sought.

Summary

Axiom #5 makes it clear that an API strategy includes not only publishing APIs for other constituencies to participate with, but also planning to integrate the core competences of those constituencies into your own applications, services and infrastructure. Make sure your organization understands the dual nature of API publishing and API consuming.

The Five Axioms are statements about the current state of APIs and where we may be headed. In themselves they already provide some insights into how organizations should consider acting – but this is where we’re headed next. Next week we’ll dive into what the consequences of these axioms might be for organizations and individuals.


The Five Axioms of the API Economy, Axiom #4—Organizations must provide core competence through APIs

May 9th, 2014 · feature, Identity, The API Economy

This is a joint post by myself and 3scale’s Steven Willmott (njyx on twitter). The original axiom approach was dreamt up by me and we’ve iterated together since.

The Five Axioms of the API Economy

This blog post is the fourth in a series of five blog posts outlining the axioms of the API economy. The post follows on from the first three axioms posted (also see an intro to the series in the first axiom). The five axioms we’re covering are as follows, in order:

  1. Everything and everyone will be API enabled.
  2. APIs are core to every cloud, social and mobile computing strategy.
  3. APIs are an economic imperative.
  4. Organizations must provide their core competence through APIs.
  5. Organizations must consume core competences of others through APIs.

Axiom #4 addresses the issue of what an organization should choose to API-enable first, and why.

Axiom #4: Organizations Must Provide Core Competence through APIs

The fourth axiom contains two core elements. When an Organization chooses to provide APIs:

  • Those APIs should provide substantial value to their target audience.
  • Those APIs should cover the Organization’s core competency.

In other words, an organization should provide APIs that some other individuals or organizations (customers, partners or the world at large) can consume and find useful. These APIs should generally be related to the organization’s core business – and not a small side detail.

While these are separate, it’s easiest to understand them together. As already stated in the previous axiom, APIs have the power to generate significant economic value but:

  • This value is only unlocked if a particular API set has a user-base. These users may be customers, employees, third party partners or the world at large – but without a user base an API is a wasted investment.
  • An API is only as valuable as the data or functionality to which it provides access. So if an organization delivers a particular core competence (e.g. shipping, book retail, flight reservations, car hire), it stands to reason that the most valuable APIs this organization could provide link to and drive volume to exactly these core competencies.


Figure 1—Core Competence to Value Matrix

Figure 1 illustrates the relationship between providing value to the target audience and using the provider’s core competence when designing and delivering an API suite. Obviously the provider should go for the “Sweet Spot” and develop APIs that focus on the provider’s core competence and deliver high value to the target audience. Anything less will meet with undesirable results.

In a theoretical example, a car hire firm may consider two APIs: 1) a data API providing historical trend analysis on US driving patterns, 2) transaction capability to book hire cars from the company’s fleet.
Both of these APIs are interesting and potentially valuable to someone (and both are worth considering). However, if the organization needs to choose between the two projects, the second is clearly more valuable both to itself and to its customers and partners. This is because:

  • Transactions on the API drive the hire company’s core business.
  • Transactions on the API help partners and customers carry out their key business with the organization in a new way.
  • The company can safely say it is likely to continue supporting the API if it succeeds, since it will be contributing to core business metrics.

For the first API (the data API), however, while it could certainly be valuable, it represents a deviation from core business. The car hire company would likely not be able to back the API with as much resource. Further, the API would need a new, separate business model. Though this could certainly be welcome if revenues were significant, they would likely need to be very significant compared to the company’s core business. Worse, if the API gets significant traction, costs could rise – potentially forcing a shutdown. In other words, this data API is a much more fragile proposition both for the provider and for potential users of the API.
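To ground the “sweet spot” choice, here is a toy sketch of what the transactional booking API might look like. It is purely hypothetical and in-memory (the endpoint, fields and use of Flask are my own choices for illustration); a real service would sit on top of the company’s actual booking system.

```python
from flask import Flask, jsonify, request

# Toy sketch of the hypothetical car hire firm's "sweet spot" API: a
# transactional endpoint that books a car from the fleet (core competence).
app = Flask(__name__)
FLEET = {"compact-42": {"model": "Example Compact", "available": True}}

@app.route("/v1/bookings", methods=["POST"])
def create_booking():
    req = request.get_json()
    car = FLEET.get(req.get("car_id"))
    if car is None or not car["available"]:
        return jsonify({"error": "car not available"}), 409
    car["available"] = False  # the transaction drives the core business
    return jsonify({"booking_id": "b-001", "car_id": req["car_id"]}), 201

if __name__ == "__main__":
    app.run(port=8080)
```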

In order to reinforce this point, consider some examples of companies that operate APIs which directly drive their core business:

  • The Walgreens API: prescription filling and photo printing
  • Nike+: social network enabled sports clothing
  • Twitter: read and write tweets
  • Getty Images API: stock photo search in real time

In each of these cases, APIs form a channel directly onto the company’s core business and, if successful, will drive more business (which would likely grow over time). This channel delivers a very clear advantage over their competitors, since potential partners and customers now have increasing reasons to partner with Walgreens, Nike, Twitter and Getty Images specifically.

It could even be argued that over time, an API could become the single most important business channel for many companies, since mobile, partners, and product integrations could all tie into a single channel.

Interestingly, it has taken a while to learn this lesson. Many of the early APIs created by large organizations were offshoots or experiments that were deliberately not related to the organization’s core business. In other words, organizations wanted to explore the impact of APIs but not affect their core businesses – and, for the reasons outlined in the car hire example above, this approach often leads to failure of the API program.

Summary

Becoming an API provider is driven by one key realization: ensuring an organization’s core competence is available in the simplest form possible for others to integrate into their own systems such that they can become valuable suppliers, customers and partners.

Providers should enable API access to their core competence while delivering value to the target audience. Focus on the “Sweet Spot.”

Note: This is not intended to be an argument that every organization should have a fully public open developer program. In many cases that goal wouldn’t meet business objectives nor would it provide value. Instead the axiom states that becoming a provider to some kind of audience – typically a subset or superset of an organization’s existing customer base, or a new one they wish to address – is the important focus. A fully public open API is simply the mechanism to reach a particular type of customer base.

Up next is Axiom #5: Organizations must consume core competencies of others through APIs.


The Five Axioms of the API Economy, Axiom #3 – APIs are an Economic Imperative

May 2nd, 2014 · feature, Identity, Innovation, The API Economy

This is a joint post by myself  and 3scale’s Steven Willmott (njyx on twitter). The original axiom approach was dreamt up by me and we’ve iterated together since.

The Five Axioms of the API Economy

This blog post is the third in a series of five blog posts outlining the axioms of the API economy. The post follows on from the first and second axioms posted (also see an intro to the series in the first axiom). The five axioms we’re covering are as follows, in order:

  1. Everything and everyone will be API enabled.
  2. APIs are core to every cloud, social and mobile computing strategy.
  3. APIs are an economic imperative.
  4. Organizations must provide their core competence through APIs.
  5. Organizations must consume core competences of others through APIs.

Axiom #3 addresses what kind of impact APIs are likely to have. Seen from the outside, APIs may often look like unimportant add-ons, but there is a strong and clear argument that economic value is at the core of APIs.

Axiom #3: APIs are an Economic Imperative

As is clear from the first two axioms, APIs provide significant value in a wide range of scenarios: making new types of products and services possible, reducing integration costs or speeding up existing processes. However, it may still appear that the beneficiaries of APIs are likely to be primarily digital or technology companies such as large social media companies or media organizations. It is also difficult to see past the technical nature of APIs to their business value, since much of the current debate around APIs is inherently about implementation details, technical architecture and so on.

However, APIs are, at their core, not a technical device. Instead, they are a means of delivering or providing access to a service or a product. In other words, the precise technology involved may vary but the essential nature of an API is to provide access to something of value. This in turn means that APIs intrinsically provide economic value. For example:

  • Twitter’s API provides the ability to send a tweet and have it visible to the whole of Twitter’s user base: clear value.
  • Salesforce’s API provides the ability to synchronize customer data with third party tools – making those tools and Salesforce itself more useful: clear value.
  • The City of New York’s 311 API provides the means to report problems to the city managers so they can be addressed: clear value.
  • Twilio’s Telephony API allows the sending of an SMS to any phone number in the world in one line of code: clear value.
  • United States Postal Service (USPS) API. The U.S. Postal Service provides a suite of USPS Web Tools that customers may integrate into their own websites to validate or find mailing addresses, track and confirm mail delivery, calculate shipping rates, and create domestic or international shipping labels: clear value.

Interestingly, only in the case of one of these APIs is the actual API invocation charged for (Twilio charges per SMS sent), yet they all still provide value – often to the provider of the API as well as the user.
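The Twilio example is easy to show concretely. Below is a sketch using Twilio’s Python helper library; the credentials and phone numbers are placeholders, and the method names reflect the library as I remember it, so verify against Twilio’s current documentation before relying on it.

```python
from twilio.rest import Client

# Sketch of the "SMS in one line of code" example using Twilio's Python helper.
# Credentials and numbers are placeholders.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

message = client.messages.create(
    body="Your table is ready.",
    from_="+15005550006",  # a Twilio-provisioned number
    to="+15555550100",
)
print(message.sid)  # Twilio charges per message sent: the API call is the product
```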

For digital-native companies such as Netflix, this value is already very clear: the Netflix API has evolved tremendously since its launch and provides the key meta-data and navigation flow that underpins Netflix players deployed on over 1,000 different types of devices (see http://www.slideshare.net/danieljacobson/maintaining-the-front-door-to-netflix-the-netflix-api). Without the API, Netflix players would not be able to nimbly navigate the video content Netflix delivers or deliver a custom experience on each device. But beyond inherently digital businesses, many more large organizations are deriving clear value from APIs:

  • General Motors: GM, via OnStar, provides a rich set of APIs to control the systems in some of its vehicles. The APIs provide both in-vehicle and remote information and control – enabling third parties to provide new applications and experiences to GM customers.
  • Walgreens: The Walgreens API Program enables partners large and small to integrate and both print photos and file prescriptions – important and valuable services for both the developers writing the apps and for Walgreens itself.
  • Johnson Controls: JCI’s Panoptix division, amongst others, employs APIs to provide access to data and control systems from its in-building installations. This in turn creates a marketplace for new applications compatible with their systems.

To complete the Axiom however, we also need to discuss whether or not APIs are an economic imperative. This is equivalent to asking –

“Yes, but how important is this additional value? Can I live without it? How do I compare it to other valuable initiatives I have going (opportunity cost)?”

Not every industry sector, nor every player in every sector, is in the same situation, so the answer to this question may vary. However, there are several reasons to believe that for most organizations there is a clear imperative to deploy APIs:

  • The value created is likely to have direct impact (being able to do new things) and indirect structural impact (making the organization more agile in the future).

  • The value of APIs is often on the top and bottom line: on the revenue side – APIs enable new products and services or drive more volume for existing products/services, on the cost side – integration costs often drop dramatically when API-driven.
  • Many API strategies enable creation of a partner or customer ecosystem – reinforcing the value of products/services with third party additions. This type of effect is often strongly biased to the first few movers in a space – enabling them to grow proportionally faster than their competitors.

Even in industry sectors where no significant players yet make significant use of APIs, the economic case is clear – once one or more players come out with API offerings, they will force transformation amongst all the remaining players.

What is without doubt is that APIs are already starting to have a significant top- and bottom-line effect on business. It has already become an imperative to leverage their power – just as cloud, social and mobile have become imperatives in almost every sector.

Summary

Proper use of APIs provides clear value. It is economically imperative that organizations integrate well-designed API strategies into their planning and development processes. Doing so delivers value to any organization’s top and bottom line.

Axiom #4 will be coming next week.


The API Economy Axioms: Axiom #2: APIs are core to any cloud, social and mobile computing strategy

April 25th, 2014 · feature, lexicon, The API Economy

This is a joint post by myself and 3scale’s Steven Willmott (njyx on twitter). The original axiom approach was dreamt up by me, and Steve and I have iterated on it together since.

The Five Axioms of the API Economy

This blog post is the second in a series of five blog posts outlining the axioms of the API economy. The post follows on from the first Axiom posted here (also see an intro to the series). The five axioms we’re covering are as follows, in order:

  1. Everything and everyone will be API enabled.
  2. APIs are core to every cloud, social and mobile computing strategy.
  3. APIs are an economic imperative.
  4. Organizations must provide their core competence through APIs.
  5. Organizations must consume core competences of others through APIs.

Axiom #2  focuses on the fact that APIs are an integral part of what are arguably the three major forces currently transforming the Web and IT  – The Computing Trifecta—Mobile, Social and Cloud Computing. These transformations have been underway for a while but they are combining increasingly strongly and the effects are still getting deeper.

Axiom #2: APIs are core to every cloud, social and mobile computing strategy

Early Web systems were single destinations acting as self-contained silos within which a browsing user could act – consuming information, uploading data or authorizing transactions. Many current systems still function this way. However, while this metaphor functioned credibly for a “human powered” Web, it provides no support for the software-to-software interactions that are increasingly occurring between devices and web services.

As web interactions become more automated, higher velocity and more fragmented (many smaller transactions like sending a tweet – versus large ones  like downloading and browsing an entire web page) software-to-software interactions are a clear requirement for success.

Humans in the loop simply cannot keep up with the velocity and accuracy required. Nowhere is this better exemplified than in the implementation of the three largest information technology challenges faced by organizations today: cloud, social and mobile computing.

APIs in the computing trifecta — cloud, social and mobile computing.

Cloud Computing is one of those terms that has been used so much that its meaning is often obscured or even completely misunderstood. It is generally accepted that cloud computing can be thought of as a stack of service classes. The three classes of services are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Figure 1 illustrates the idea of cloud computing as a stack of service classes.


Figure 1—Cloud Computing Service Classes Source: Burtonian

Here is how these service classes are viewed:

  • SaaS—applications designed for end users, delivered over the web
  • PaaS—tools and services designed to make coding and deploying SaaS applications quick and efficient.
  • IaaS—hardware and software that powers it all—servers, storage, networks, and operating systems

While a diagram showing clear-cut differences between these classes is useful for illustration purposes, in reality the lines between these service classes are getting blurry and will continue to blur in the future.

However, the distinction is still useful in understanding how cloud computing and its services classes relate to APIs. In their earliest incarnations, SaaS variants of cloud started with hosted email, blogging and other tools but quickly progressed to key enterprise applications such as CRMs, HR, accounting and other functions. Many of the earliest SaaS apps began life as a browser based alternative to desktop software – and hence human interaction with the application through the browser was their key use.

Modern SaaS apps have evolved way beyond this and providing the means for software integration with a SaaS app for customers and/or partners is table stakes in almost every sector. Further, for PaaS and IaaS layers, the means to integrate with the platform/infrastructure via APIs is in many cases the key driving factor in adoption decisions:

  • SAAS: APIs are crucial for adoption since they enable customers of the service to carry out bulk operations, integrate tightly with other SAAS applications and internal processes, and feel an extra level of comfort with respect to platform lock-in. Almost every major SAAS offering now has at least customer-facing APIs to permit easy integration; many also have partner ecosystem APIs to enable third parties to develop add-ons and modules that benefit their customers.
  • PAAS: Platforms provide compute, storage, messaging and other services and essentially act as hosted APIs onto which developers can layer their code. APIs for PAAS essentially fall into two categories – those available to the code running on the provided PAAS servers themselves, and those available to push/pull data into and out of the service. This access may be for other parts of the application that are not hosted on the PAAS, mobile applications calling the PAAS, or for bulk control, management or monitoring operations.
  • IAAS: Last but not least, as with PAAS services, APIs play a key role in remote access to the infrastructure being provided. In this case most IAAS providers are providing raw compute power, storage and networking as a resource – hence code which is to run on the IAAS often uses the operating system primitives available (Linux, Windows etc.) directly, rather than some higher-level, more abstract API as in the PAAS case. However, the external APIs for bulk operations, control, management and monitoring remain critical.

In each of these cases, while it is still technically possible to use a cloud hosted service in a way in which everything about the application is hosted within that single cloud service and requires no external integration, this mode of operation is insufficient for almost all significant use cases.

Trends in the market also show a strong correlation between the strength of a cloud service provider’s APIs and their relative success in the market.
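As a small illustration of the “bulk operations, control, management and monitoring” role these APIs play at the IAAS layer, here is a sketch using AWS’s boto3 SDK to inventory running instances. It assumes AWS credentials are already configured in the environment, and the region and filter values are purely illustrative.

```python
import boto3

# Sketch: using an IAAS provider's management API (AWS EC2 via boto3) for a
# bulk monitoring task. Assumes credentials are configured; values are illustrative.
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        # the same API family also starts, stops and resizes instances
        print(instance["InstanceId"], instance["InstanceType"])
```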

Social and APIs

Social media has clearly had an enormous impact on both consumer Internet usage and enterprise applications. Facebook now counts more than 1.2 billion users, Twitter 230M, Instagram 15M, and the popular messaging app WhatsApp was just proposed to be acquired by Facebook for $19B. On the enterprise side, social integration with mainstream networks, as well as “social” features in enterprise products, are now table stakes for most organizations.

Diverse examples of enterprise social products include:

  • In-enterprise social network products such as Salesforce Chatter, Yammer and others.
  • Code collaboration products such as Github and Atlassian Bitbucket.
  • Collaboration tools such as Wikis and Task Managers.

Social logins such as Facebook Connect, Twitter Auth, Google Login and Github Auth are being used with increasing frequency to manage work-related identities for enterprise SAAS products, media access and almost everywhere that a user login is required.
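Under the hood, these social logins are typically standard OAuth2 authorization-code flows. Here is a hedged sketch using the requests-oauthlib library against GitHub’s OAuth endpoints; the client ID, secret and callback URL are placeholders you would register with GitHub, and the endpoint URLs should be verified against GitHub’s documentation.

```python
from requests_oauthlib import OAuth2Session

# Sketch of a "Github Auth" style social login via the OAuth2 authorization-code
# flow. Client credentials and callback are placeholders; verify the endpoints
# against GitHub's docs before use.
CLIENT_ID, CLIENT_SECRET = "your-client-id", "your-client-secret"
AUTHORIZE_URL = "https://github.com/login/oauth/authorize"
TOKEN_URL = "https://github.com/login/oauth/access_token"

oauth = OAuth2Session(CLIENT_ID, redirect_uri="https://yourapp.example/callback")

# 1. Send the user to GitHub to approve access.
authorization_url, state = oauth.authorization_url(AUTHORIZE_URL)
print("Visit:", authorization_url)

# 2. GitHub redirects back with a code; exchange it for an access token.
redirect_response = input("Paste the full callback URL here: ")
oauth.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                  authorization_response=redirect_response)

# 3. The token now identifies the user for the application's own API calls.
print(oauth.get("https://api.github.com/user").json()["login"])
```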

A third dimension of social is around the gigantic amounts of data that are produced by social media applications. Companies such as GNIP and DataSift have sprung up to process the real-time streams produced by large social networks, other companies such as Marketwired’s Sysomos, Radian6 (now part of Salesforce), and Klout (now part of Lithium) analyze these data flows and provide value added aggregate information on top.

As user behavior continues to reinforce the growth and importance of social media and the value of enterprise social increases, companies and individuals are compelled to include social integration options in their own workflows and products. Such integration or addition requires the use of APIs – either by integrating APIs provided by third parties or (in the case of products with a social dimension) providing a new API.

At their core, social systems provide a range of key functions:

  • Messaging & Notifications
  • Media Sharing
  • Collaboration
  • Search and Monitoring
  • Data Aggregation

Each of these functions can be an end in itself (a discrete task carried out by an individual) or part of a larger workflow:

  • Tweeting out the result of a process, or arrival of a new piece of content or an opinion or comment.
  • Pushing out a Facebook post when an Instagram photo is published.
  • Updating a Github work ticket when a software integration test passes.
  • Regularly searching the same set of keywords for mentions of a company’s brand.
  • Tracking statistics of certain behaviors across different social networks.

As a social strategy evolves it invariably requires processes to be established and made repeatable. Hence APIs become central as these small individual actions are strung together into a larger flow.
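One of the workflow items above – updating a GitHub ticket when an integration test passes – is easy to sketch. The repository, issue number and token below are placeholders; the endpoint follows GitHub’s REST API for issue comments, but treat the details as an assumption to verify.

```python
import requests

# Sketch: wiring one collaboration action into an automated flow by commenting
# on a GitHub issue when tests pass. Repo, issue number and token are placeholders.
GITHUB_TOKEN = "replace-with-a-token"
REPO = "example-org/example-repo"
ISSUE = 42

def report_test_result(passed: bool) -> None:
    body = "Integration tests passed." if passed else "Integration tests FAILED."
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues/{ISSUE}/comments",
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        json={"body": body},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    report_test_result(passed=True)
```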

In a similar way, any new product with its own social features (e.g. a new code collaboration tool) will quickly run into the need for integration with the other tools in workflows that are already established. As a result, an API becomes an indispensable part of the product. Conversely, shipping a product with pre-existing integrations to the APIs of other tools in the workflow is a major added value for the product.

Mobile Computing and APIs

It goes without saying that mobile is a huge driver of API adoption. However, it may not be 100% clear why this is so for an individual company – after all, it is rare that an organization sets out to build an API when its focus is putting an application in a user’s hands. Yet APIs are at the heart of such projects, since they provide the server-side synchronization point with which applications communicate.

Going back to 2008 and the first generation of iPhone apps – and even further to early feature phone, BlackBerry and Palm Pilot apps – the vast majority of apps were software that ran in a self-contained manner on the mobile device. In other words, the app’s value was solely in the software executing on the device.

From 2009 onward, however, and to this day, a rapid transformation has taken place: almost all meaningful apps are composed not only of software executing on the device but also of server-side services that provide supporting functions – everything from backup to live data to ecommerce transactions. This server connection requires an API to receive and respond to traffic.

Further, with the rise of Android and the proliferation of the number of device types that can run mobile apps, there are often many versions of a mobile app that are available at any one time – all of which need to be able to access server side components.

A modern mobile strategy is therefore increasingly like the one shown in Figure 2, with an API driven core and a wide variety of different clients calling the API.


Figure 2–Mobile API Strategy      Source: 3Scale Inc.

These trends in APIs are likely to be at the heart of very many of the major strategic IT initiatives most companies have planned.

Axiom number two details just how strategic APIs are for the computing trifecta — mobile, social and cloud computing.

Summary

As with the first axiom, the truth of this axiom may seem obvious – but this is part of the point. APIs are often “in the picture” of transformative strategies in mobile, social and cloud, but rarely mentioned. Every organization is evolving strategies to deal with the Trifecta—cloud, social and mobile computing—and these come in varying shapes and sizes. However, it is critical that they be viewed from an API-centric perspective, since APIs are the key glue which makes each of them stick. Axiom #3 will be up next week.


The Five Axioms of the API Economy, Starting with Axiom #1— everything and everyone will be API-enabled

April 17th, 2014 · Apps, feature, lexicon, The API Economy

This is a joint post with me and Steven Willmott (njyx on twitter) at 3Scale. The original axiom approach was created by me and then iterated with Steven. We will post a new axiom each week until all five are available.

The API Economy is a phenomenon that is starting to be covered widely in technology circles and spreading well beyond, with many companies now investing in API powered business strategies. There are also a number of definitions of the term API Economy that are useful (including here and here).  As the term catches hold however, it makes sense to step back and reflect what the API Economy actually is and what effect it might have on organizations, individuals and the Internet/Web as a whole.

To do this, we’ve tried to describe the nature of the API Economy in the form of five axioms we believe are true and a number of conclusions that can be drawn from these axioms. We’ll be publishing the axioms and our conclusions in a series of posts here to try to make them more digestible. The thoughts here are somewhat raw so we’re hoping for feedback and debate. The axioms and conclusions are a result of discussions between Craig Burton, Steven Willmott and others.

While producing a set of Axioms may seem a rather theoretical exercise, we believe it is a useful endeavor since when talking about something as complex as the interactions between many new types of service providers and consumers, solid foundations are needed. The API Economy is already turning out to be clearly different from the human powered Web Economy we see today – transactions are happening faster and at greater scale, but they are distributed differently.

We begin with an overview of the API Economy, name the five Axioms and finish with the details of the first Axiom. The remaining Axioms and what we can derive from them will be posted here in the next few weeks.

The API Economy

As software and technology become ubiquitous in today’s business processes, the means by which organizations, partners and customers interface with this software and technology is becoming a critical differentiator in the market place. The Application Programming Interfaces (APIs) that form these interfaces are the means to fuel internal innovation and collaboration, reach new customers, extend products and create vibrant partner ecosystems. The resulting shift in the way business works is giving rise to huge new opportunities for both individual organizations and global commerce as a whole – this is being dubbed the API Economy.

The computing trifecta of mobile, social, and cloud computing is changing the landscape of computing and business at an unprecedented speed and scope. This combination is also having a huge impact on the nature of data and control flows between servers, resources, devices, products, partners and customers. Many previous notions of how an organization might set up its internal systems or deliver services / products to its customers are being revolutionized.

The means by which these new data and control flows occur is through the APIs that enable an organization to define how data passes between their internal systems, to and from their partners and to / from their customers. Whilst high tech media and eCommerce companies began the deployment of this technology, it is rapidly spreading to all sectors of the economy from healthcare to farming equipment, manufacturing, sports clothing and much more.

Furthermore, it has become increasingly obvious that the APIs deployed by one company do not just affect its own pre-existing internal teams, customers and partners. Very often APIs open the door to large new product, partner and customer opportunities. When deployed, these APIs become part of a wider ecosystem of available building blocks for other organizations to build new exciting services on top of – many with, in turn, their own APIs.

As a result of these shifts, it is imperative that organizations understand the emerging API Economy, its implications and what actions should be taken to embrace its rapid emergence.

Before covering the first axiom let’s review all five.

The five axioms are as follows:

  1. Everything and everyone will be API enabled
  2. APIs are core to every cloud, social and mobile computing strategy
  3. APIs are an economic imperative
  4. Organizations must provide their core competence through APIs
  5. Organizations must consume core competences of others through APIs

This is the first blog post in the series. It will be followed by four more blog posts covering axioms #2, #3, #4 and #5.

Axiom #1: Everything and Everyone will be API-enabled

In other words, every crafted object and structure, and every individual person – as well as many natural objects, structures and beings – will become addressable by API and make use of APIs. Not only that, but the infrastructure used to create and manage APIs for everything and everyone will also be API enabled.

On the surface this looks like an impossibly bold claim – why would this occur, and how? On closer examination, however, it becomes obvious that this is the inevitable outcome of current strong technology trends.

There are three forces driving the “api-ification” of everything:

  1. The emergence of wireless mobile devices as the dominant form of interface technology for most computing systems.
  2. The explosion in connected device numbers, configurations, types and form factors – from large to incredibly tiny – driven by the Internet of Things.
  3. The extension and adaptation of integration technologies previously only used in large enterprises into more open forms that permit cross department and cross organizational systems integration at a fraction of the costs of previous technologies.

The first driving force means that networked communication at distance without wires has become ubiquitous in almost all modern environments – in other words devices can communicate and act at distance near seamlessly. The second force has made it possible for many computing form factors to be deployed in these wireless environments – some in the form of large objects such as televisions or cars, some in powerful small mobile devices such as phones, others as tiny passive sensors or identifiers. The last force is the increasing ease of software-to-software communication between systems operating on these diverse devices and/or on remote servers many miles away.

Devices tie into the identities of individuals and objects – sometimes even just to physical spaces – and are able to transmit and receive data. This data in turn permits not only information capture, but also control of actions to be taken (for example, a thermostat automatically changing a temperature setting based on settings inferred on a cloud server from data captured the previous day, or on the proximity to home of the owner’s smartphone).

Each compute enabled device, each software application running on such a device or a remote server, and each networked environment provides interfaces to the outside world that can be identified and accessed.
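As a toy illustration of such an interface, the sketch below shows the thermostat example as a tiny HTTP API through which remote software can read and set the target temperature. The endpoints, payloads and use of Flask are hypothetical choices for illustration only.

```python
from flask import Flask, jsonify, request

# Toy sketch of a device-side (or cloud-proxy) API for the thermostat example.
# Endpoints and payloads are hypothetical.
app = Flask(__name__)
state = {"target_c": 20.0}

@app.route("/v1/target", methods=["GET"])
def read_target():
    return jsonify(state)

@app.route("/v1/target", methods=["PUT"])
def set_target():
    # e.g. a cloud service inferring settings from yesterday's data, or the
    # owner's phone approaching home, PUTs a new value here
    state["target_c"] = float(request.get_json()["target_c"])
    return jsonify(state)

if __name__ == "__main__":
    app.run(port=8080)
```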

Mobile and Wireless Computing

Data about mobile and wireless growth abounds. For example, an article in Forbes by Cheryl Snapp Conner shows what is happening in the mobile segment of the market.

  1. There are 6.8 billion people on the planet at present. 4 billion own mobile phones. (But only 3.5 billion use a toothbrush. Oy!) Source: 60SecondMarketer.com.
  2. Twenty five percent of Americans use only mobile devices to access the Internet.  (Source: GoMoNews.com)
  3. There are 5x as many cellphones in the world as PCs. (Source: ImpigoMobile)
  4. 82 percent of U.S. adults own a cellphone. (Source: Pew Reports 2010)
  5. There are 271 million mobile subscribers in the U.S. alone.

And what about growth? Gartner recently published these numbers.

World Wide Devices Shipments by Segment (Thousands of Units)

Device Type                       2012        2013        2014
PC (Desk-based and Notebook)      341,273     305,178     289,239
Ultramobile                       9,787       20,301      39,824
Tablet                            120,203     201,825     276,178
Mobile Phone                      1,746,177   1,821,193   1,901,188
Total                             2,217,440   2,348,497   2,506,429

Table 1–Mobile Device Growth Projections Source: Gartner

These numbers are staggering: 2.4 billion devices in 2013 and 2.5 billion in 2014.

In terms of raw volume of wirelessly networked devices, we are clearly well past the tipping point of adoption and on the road to near-ubiquitous deployment of such technology.

A Myriad of Devices

High mobile computing growth is one of the most important factors driving growth in both wireless networking and devices. As the data indicates, the mobile device is rapidly becoming the preferred entry point to information and the Internet. In today’s world, information rarely remains static; it is in a state of constant change.

People use myriads of mobile apps to keep track of and manage all sorts of relevant information for their day-to-day routines. This information includes location, intention, scheduling, contacts, meetings, business, context, shopping and relationships, to name but a few. Because this information is always changing, people use mobile apps on mobile devices to track and manage the changes.

However, smartphones are only the tip of the iceberg in terms of what computing devices are being distributed in the world. Consider the number of devices that will be a part of the Internet of Things. These numbers do not include the devices shown in Table 1 but are “things” that will be connected to the Internet. These devices have form factors of everything from lightweight plastic fitness trackers such as the Fitbit, to low-power Bluetooth beacons such as the Estimote. Many everyday objects now come with sensors and networking capability embedded. Sensor density is also rapidly increasing, with compact gadgets such as the wireless Quirky sensor now packing humidity, light, sound and motion sensors into a package for under $50.

In addition, there is the new hobbyist and maker community — driven by the Raspberry Pi, Arduino and other high-function, affordable microcontroller devices — adding yet more connected endpoints.

Cisco predicts that by 2020 there will be 50 billion devices connected to the Internet of Things.

This is a huge number of potential compute enabled endpoints.


Figure 1—Number of Connected Objects

Software-to-Software Communication

Since its emergence and mass adoption, interactions on the World Wide Web have been fundamentally human driven. Human users browse information, upload content, download and browse data, and click controls in order to generate effects: both virtual (such as a video playing) and physical (such as a purchased book being shipped from a warehouse). This setup enabled a huge diversity of applications: on the basis of a few simple rules and patterns, a vast number of different services could be built, and human users were flexible enough to understand the purpose of each site or application and act accordingly to operate it.

Today however, human driven Web Activity is rapidly being caught up by software-to-software activity: traffic between systems with no human in the loop and automated transfer of data and control actions. This is occurring because:

  • Mobile devices are increasingly acting autonomously and pre-emptively to fetch data from and deliver data to remote systems.
  • Many new device form factors include no keyboards, screens or other means of data input/output – hence their onboard software is only able to communicate with other specialized software on other devices or on remote servers.
  • Many modern applications require high rates of highly accurate transaction flow – with millions of similar calls per day which must be carried out stably and efficiently in a way no human could drive.
  • Almost no modern software application developed today – Web Applications and those running locally on a device – functions in a completely standalone manner any more. That is, it is very often the case that some part of the functionality requires direct, coordinated communication with some other piece of software.

Why APIs will be ubiquitous

As these trends continue, they all reinforce the need for APIs for every device and person. Almost all devices will require:

  • A hardware substrate of some kind.
  • Control software.
  • An interface by which the device can be reached.
  • A number of remote interfaces the device may call from time to time.
  • One or more identities by which the entity can be addressed.

For a natural person (or potentially another creature or natural object), they may become associated with one or more digital devices that can receive incoming messages and conversely send data to one or more remote systems.

Existing identity mechanisms such as email addresses, twitter handles, phone numbers and the like already permit data exchange, and as devices become more customized and embedded about the person, the ability for an individual to receive information (which in some cases will cause them to act) and stream data to a remote location will increase.

Summary

As we’ve already pointed out, the claim that everything and everyone will be API enabled is mind-boggling in itself. But as we move down the path of universal connectivity, collaboration and interoperability, it seems inevitable that it will come to pass. That is not to say there are not huge obstacles to accomplishing this grand vision, but it seems nonetheless that this is the path we are on.

Axiom number 2 “APIs are core to every cloud, social and mobile computing strategy” is up next week.


Identity and the API Economy

November 14th, 2013 · Essays, feature, Identity, Internet of Things, lexicon, Links, The API Economy

This is also posted as a guest blog on the 3Scale web site.

Introduction

I recently attended and participated in the API Strategy and Practice conference in San Francisco, Oct 23–25. This was an exceptional conference where I met a lot of new people and learned many new things.

I took part in a panel discussion concerning the API Economy. The time slot was too short to go very deep into the subject, but the moderator—Brian Proffitt—did a great job and the questions were interesting and relevant.

In retrospect, something curious was missing from all of the presentations and conversations at the conference—discussion of the role of identity and access management in APIs. It is clear that the API community doesn’t get that API ubiquity is an identity problem as much as it is an API design and maintenance problem.

Making Some Distinctions

To understand the connection, some definitions are in order. Identity management is complicated. The lingo is complicated. Understanding the lexicon used is the only way to really get a handle on how this stuff works. The distinctions here are not exhaustive, but should be enough to make my point. As a reminder, I am not insisting that anyone adopt these definitions as canonical and disregard the definitions that might already be in place, but to be rigorous in explaining the relationship between Identity and the API Economy, it is essential that a baseline ontology be established and used for discussion.

Kim Cameron’s famous blog provides great information about definitions and functions surrounding IdM. Pay special attention to the Laws of Identity.

I love the sequence he provides explaining the definitions of IdM basics. Note that these definitions are from Microsoft and not everyone has adopted them. For example the OASIS SAML working group has their own glossary of terms for IdM. I like them both but I think the terms Microsoft uses are a little more rigorous for the point I am making.

There are a number of definitions pertaining to subjects, persons and identity itself:

Identity:  The fact of being what a person or a thing is, and the characteristics determining this.

This definition of identity is quite different from the definition that conflates identity and “identifier” (e.g. kim@foo.bar being called an identity).  Without clearing up this confusion, nothing can be understood.   Claims are the way of communicating what a person or thing is – different from being that person or thing.  An identifier is one possible claim content.

We also distinguish between a “natural person”, a “person”, and a “persona”, taking into account input from the legal and policy community:

Natural person:  A human being…

Person:  an entity recognized by the legal system.  In the context of eID, a person who can be digitally identified.

Persona:  A character deliberately assumed by a natural person

A “subject” is much broader, including things like services:

Subject:  The consumer of a digital service (a digital representation of a natural or juristic person, persona, group, organization, software service or device) described through claims.

And what about user?

User:  a natural person who is represented by a subject.

The entities that depend on identity are called relying parties:

Relying party:  An individual, organization or service that depends on claims issued by a claims provider about a subject to control access to and personalization of a service.

Service:  A digital entity comprising software, hardware and/or communications channels that interacts with subjects.

The other section that needs to be included here is Kim’s distinctions about claims. Read this carefully.

Let’s start with the series of definitions pertaining to claims.  It is key to the document that claims are assertions by one subject about another subject that are “in doubt”.  This is a fundamental notion since it leads to an understanding that one of the basic services of a multi-party model must be “Claims Approval”.  The simple assumption by systems that assertions are true – in other words the failure to factor out “approval” as a separate service – has led to conflation and insularity in earlier systems.

Claim:  an assertion made by one subject about itself or another subject that a relying party considers to be “in doubt” until it passes “Claims Approval”

Claims Approval: The process of evaluating a set of claims associated with a security presentation to produce claims trusted in a specific environment so they can be used for automated decision making and/or mapped to an application-specific identifier.

Claims Selector:  A software component that gives the user control over the production and release of sets of claims issued by claims providers.

Security Token:  A set of claims.

The concept of claims provider is presented in relation to “registration” of subjects.  Then claims are divided into two broad categories:  primordial and substantive…

Registration:  The process through which a primordial claim is associated with a subject so that a claims provider can subsequently issue a set of claims about that subject.

Claims Provider:  An individual, organization or service that:

Registers subjects and associates them with primordial claims, with the goal of subsequently exchanging their primordial claims for a set of substantive claims about the subject that can be presented at a relying party; or

Interprets one set of substantive claims and produces a second set (this specialization of a claims provider is called a claims transformer).  A claims set produced by a claims provider is not a primordial claim.

Claims Transformer:  A claims provider that produces one set of substantive claims from another set.

To understand this better let’s look at what we mean by “primordial” and “substantive” claims.  The word “primordial” may seem strange at first, but its use will be seen to be rewardingly precise:  Constituting the beginning or starting point, from which something else is derived or developed, or on which something else depends. (OED)

As will become clear, the claims-based model works through the use of “Claims Providers”.  In the most basic case, subjects prove to a claims provider that they are an entity it has registered, and then the claims provider makes ”substantive” claims about them.  The subject proves that it is the registered entity by using a “primordial” claim – one which is thus the beginning or starting point, and from which the provider’s substantive claims are derived.  So our definitions are the following:

Primordial Claim: A proof – based on secret(s) and/or biometrics – that only a single subject is able to present to a specific claims provider for the purpose of being recognized and obtaining a set of substantive claims.

Substantive claim:  A claim produced by a claims provider – as opposed to a primordial claim.

Passwords and secret keys are therefore examples of “primordial” claims, whereas SAML tokens and X.509 certificates (with DNs and the like) are examples of substantive claims.

Some will say, “Why don’t you just use the word ’credential’”?   The answer is simple.  We avoided “credential” precisely because people use it to mean both the primordial claim (e.g. a secret key) and the substantive claim (e.g. a certificate or signed statement).   This conflation makes it unsuitable for expressing the distinction between primordial and substantive, and this distinction is essential to properly factoring the services in the model.

Systems that manage Identity are referred to as Identity and Access Management (IAM) or just Identity Management (IdM). IdM components are divided into processes distinguished as Authentication and Authorization (AuthN/AuthZ).

Authentication

Authentication is the process of confirming the veracity of a specific attribute of a subject. There are “factors” of authentication, and using more than one is referred to as “multi-factor authentication”. The ways in which a subject can be authenticated are by something the subject has, something the subject knows, and something the subject is or does. Note that authentication is not just about people, but about other subjects too—like an app or a piece of code that represents a machine, a device or a service. The most common factors used in AuthN are a subject’s name and password, but any number of factors of any factor type can be used.

Authorization

Authorization is a process distinct from authentication. Authentication is the process of verifying that a subject “is who it says it is.” Authorization is the process of verifying that the subject is permitted to “do what it is trying to do.” The distinction is a little subtle, but suffice it to say that authorization presupposes authentication, so they are separate processes.
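
A minimal sketch (illustrative names only) of why authorization is a separate step: it assumes authentication has already succeeded and only asks what that subject may do.

```python
# Authorization presupposes authentication: the subject id passed in
# is one AuthN has already verified.
PERMISSIONS = {
    "alice": {"read:report", "write:report"},
    "build-bot": {"read:report"},   # subjects need not be people
}

def authorize(authenticated_subject: str, action: str) -> bool:
    """Check whether an already-authenticated subject is permitted to perform an action."""
    return action in PERMISSIONS.get(authenticated_subject, set())

# authorize("build-bot", "write:report") -> False: authenticated, but not permitted.
```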

Due to the many breaches of IdM systems in recent months, it is becoming more and more popular to require a “two-factor” authentication process. We should expect multi-factor authentication to become more and more a part of our lives in the future. I don’t want to get sidetracked on multi-factor IdM, so you can read Dave Kearns’ blog post for a good little primer and some links.

So, in summary, AuthN/AuthZ are the processes that prove the validity of a subject’s claims (perhaps a name and password) to a particular system.

The Problem with AuthN/AuthZ

Nobody really needs to explain the problem in too much detail because everybody that uses the internet experiences the problem every day in excruciating detail.

The problem is that every system, every service and every device has its own processes for AuthN/AuthZ. In other words, everything has its own name and password. We are drowning in what I call “name and password fatigue.”

The panacea for resolving name and password fatigue is finding a solution that lets one name and password provide secure AuthN/AuthZ to all systems. This solution is referred to as Single Sign On (SSO). With SSO, all entities can properly and securely access information and resources without having to provide new credentials.
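
The core idea is easy to sketch (my own illustration, not any particular product): the subject authenticates once to a single identity provider, receives a signed token, and every participating service verifies that token instead of keeping its own name-and-password store. Real deployments such as SAML or OpenID Connect use public-key signatures and richer expiry rules; a shared HMAC key keeps the sketch short.

```python
# Sketch of SSO: one credential presented once, one token honored everywhere.
import hashlib, hmac, json, time

IDP_KEY = b"shared-verification-key"   # assumption: participating services share this key

def issue_sso_token(subject_id: str, ttl_seconds: int = 3600) -> str:
    """The identity provider issues this after the subject authenticates once."""
    body = json.dumps({"sub": subject_id, "exp": time.time() + ttl_seconds}, sort_keys=True)
    sig = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def service_accepts(token: str) -> bool:
    """Any participating service verifies the token locally; no new name and password needed."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(body)["exp"] > time.time()
```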

Some say we are close to reaching the panacea of a usable SSO environment.

Ha.

You wish.

The advent of mobile computing, cloud computing and social computing, coupled with the burgeoning API Economy, just compounds the problem to unbelievable heights of complexity and difficulty. Don’t get me wrong, there are cool systems that have made tremendous strides towards SSO; there is just so much more to do.

The Cambrian Explosion of Everything

What is going on is so astounding that Peter Van Auwera calls it the Cambrian Explosion of Everything.

Peter said:

“The Cambrian explosion was the relatively rapid appearance of most major animal life forms, accompanied by major diversification of organisms. Before, most organisms were simple, composed of individual cells occasionally organized into colonies. Over the following 70 or 80 million years the rate of evolution accelerated by an order of magnitude and the diversity of life began to resemble that of today.” (Adapted from Wikipedia )

He goes on to write that we are experiencing a similar explosion on the internet of everything. The next set of numbers will show you that Peter is exactly right.

My good friend Cheryl Snapp wrote an article in Forbes recently that had some incredible facts about mobile devices in it. The article is mostly about mobile marketing—which is not my focus—but some of the research facts will help prove my point.

  1. There are 6.8 billion people on the planet at present. 4 billion own mobile phones. (But only 3.5 billion use a toothbrush. Oy!) Source: 60SecondMarketer.com.
  2. As noted, 91 percent of adults have their smartphones within arm’s reach, and that was as of 2007. I would wager the figure is closer to 100 percent now. (Source: Morgan Stanley)
  3. Twenty five percent of Americans use only mobile devices to access the Internet.  (Source: GoMoNews.com)
  4. There are 5x as many cellphones in the world as PCs. (Source: ImpigoMobile)
  5. 82 percent of U.S. adults own a cellphone. (Source: Pew Reports 2010)
  6. There are 271 million mobile subscribers in the U.S. alone.

And what about growth? Gartner recently published these numbers.

Worldwide Device Shipments by Segment (Thousands of Units)

| Device Type                  | 2012      | 2013      | 2014      |
|------------------------------|-----------|-----------|-----------|
| PC (Desk-Based and Notebook) | 341,273   | 305,178   | 289,239   |
| Ultramobile                  | 9,787     | 20,301    | 39,824    |
| Tablet                       | 120,203   | 201,825   | 276,178   |
| Mobile Phone                 | 1,746,177 | 1,821,193 | 1,901,188 |
| Total                        | 2,217,440 | 2,348,497 | 2,506,429 |
That’s right, about 2.35 billion devices in 2013 and more than 2.5 billion in 2014.

It gets better (or worse, depending on how you view it): Gartner also predicts that there will be over 81 billion apps downloaded in 2013.

I could give you even more numbers about social computing and cloud computing that make this phenomenon even crazier, but I think you get my point.

It’s an Identity Problem

I will make one more bold statement. I believe we are headed to a point where:

Everyone and everything will be API Enabled.

source: Craig Burton

Consider the implications. Billions and billions of things and people will need to be able to properly, securely and quickly AuthN/AuthZ to each other at any time with as few names and passwords as possible. At the same time, every entity—billions and billions—must have unique and secure names and passwords. Tough problem.

As an industry, we are struggling to build systems that let people properly, securely and quickly log in to a few hundred systems. We have our work cut out for us. It can seem as if the password fatigue problem is unsolvable.

Something happened roughly 540 million years ago to trigger the Cambrian explosion. We don’t know what it was. Something is happening now to trigger the solution to the Cambrian Explosion of Everything: it is the API Economy itself.

Solving the problem requires machines to help us. Machines need programs and APIs so that the solution can be automated. The API Economy provides the framework for that automated solution to come into existence.

Summary

In a digital world where there are billions and billions of entities that require proper and secure identity management, we will need to build systems, services and apps that help us automate the creation, evolution and maintenance of the solution.

There is no better time for The API Economy to get a foothold and evolve into a capable framework that will let us solve the Identity problem soon.

→ No CommentsTags:······

The Age of Context and the API Economy

November 2nd, 2013 · Daily Thesis, The API Economy

I participated in a panel on the API Economy at the APIStrat Conference last week. The conference was fabulous and well attended. I learned a lot. Steve Willmott, the co-organizer and CEO of 3Scale, invited me to write a blog post for 3Scale. I am also cross-posting it here. Thanks for the invitation to the conference and to write a little piece.

 

One can’t exist without the other

Veteran writers and pundits Shel Israel and Robert Scoble recently published their new book, “The Age of Context”. The thesis of the book is solid. I think the book is spot on and relevant to what is really happening. At the same time, it misses a critical ingredient in their predicted perfect storm. Indeed, without it, I predict, the Age of Context will never come into existence.

Storms Coming

The opening thesis of the book uses the 2005 Batman movie screenplay, where a mysterious caped person suddenly appears and warns Commissioner Gordon: “Storm’s coming.”

“For the next two hours of the movie all hell breaks loose. Finally peace is restored. When people resume their lives after so much tumult and trouble, they discover that life after the storm was better than it was before.”

Scoble and Israel go on with the storm metaphor:

“We are not caped crusaders, but we are here to prepare you for an imminent storm. Tumults and disruption will be followed by improvements in health, convenience, safety and efficiency. In his book The Perfect Storm author Sebastian Junger described a rare but fierce weather phenomenon caused by the convergence of three meteorological forces: cold air, hot air and tropical moisture. Such natural occurrences cause 100-ft waves, 100-mph winds and—until recently—occurred only every 50 to 100 years.”

“Our perfect storm is not made up of three forces but five, and they are technological, not meteorological: mobile devices, social media, sensors, big data, and location-based services.”

What the book misses

The rest of the chapters are a good read, and while I disagree with a few of the points, overall the book nails it.

Almost.

There is another force so important and so critical to the success of the Age of Context that it needs to be examined and understood. The critical force I speak of is: The API Economy.

The five technological forces Scoble and Israel identify must be programmatically interconnected via well-known simple APIs.

In short, the Age of Context will never happen properly without a vibrant proactive API Economy.

An API Economy includes the technology, API Management and an active development community interested in participating.

Further, the effect of adding this force to the equation will eclipse any sort of technological “storm” anyone has ever seen or even thought of. The storm metaphor is a good one. But add the force of the API Economy and, in geological/meteorological terms, think tectonic plate shifts and a global tsunami.

The API Economy

The emerging economic effects created when companies, governments, non-profits, and individuals use APIs to provide direct programmable access to their systems and processes.

Some well-known examples of significant APIs include the Netflix phenomenon. Roku, Amazon, eBay, Salesforce.com, Google, Microsoft, Facebook, Twitter, LinkedIn and Twilio are all examples of companies using APIs to change their business and even how the Internet is used and how it works. The Health Care industry is also very active in providing APIs. ProgrammableWeb shows there are 129 medical APIs. The controversial Healthcare.gov web site has a powerful developer program with APIs.

The Programmable Web tracks open, published APIs and categorizes them on a daily basis. It now has over 10,000 APIs listed, and growth continues at over 100 percent compounded annually.

Needless to say, it is very important for every organization to understand what its role is going to be in this critical emerging API Economy.

Summary

There is no question in my mind about the Age of Context and its subsequent disruption and following benefits. We are just not recognizing that the ultimate success of the Age of Context will be enabled by the advent of API ubiquity.


→ No CommentsTags:··