Craig Burton

Logs, Links, Life and Lexicon: and Code


The Five Axioms of the API Economy, Starting with Axiom #1— everything and everyone will be API-enabled

April 17th, 2014 · Apps, feature, lexicon, The API Economy


This is a joint post by me and Steven Willmott (njyx on Twitter) at 3Scale. The original axiom approach was created by me and then iterated with Steven. We will post a new axiom each week until all five are available.

The API Economy is a phenomenon that is starting to be covered widely in technology circles and spreading well beyond, with many companies now investing in API-powered business strategies. There are also a number of useful definitions of the term API Economy (including here and here). As the term catches hold, however, it makes sense to step back and reflect on what the API Economy actually is and what effect it might have on organizations, individuals and the Internet/Web as a whole.

To do this, we’ve tried to describe the nature of the API Economy in the form of five axioms we believe are true and a number of conclusions that can be drawn from these axioms. We’ll be publishing the axioms and our conclusions in a series of posts here to try to make them more digestible. The thoughts here are somewhat raw so we’re hoping for feedback and debate. The axioms and conclusions are a result of discussions between Craig Burton, Steven Willmott and others.

While producing a set of axioms may seem a rather theoretical exercise, we believe it is a useful endeavor: when talking about something as complex as the interactions between many new types of service providers and consumers, solid foundations are needed. The API Economy is already turning out to be clearly different from the human-powered Web Economy we see today – transactions are happening faster and at greater scale, but they are distributed differently.

We begin with an overview of the API Economy, name the five Axioms and finish with the details of the first Axiom. The remaining Axioms and what we can derive from them will be posted here in the next few weeks.

The API Economy

As software and technology become ubiquitous in today’s business processes, the means by which organizations, partners and customers interface with this software and technology is becoming a critical differentiator in the market place. The Application Programming Interfaces (APIs) that form these interfaces are the means to fuel internal innovation and collaboration, reach new customers, extend products and create vibrant partner ecosystems. The resulting shift in the way business works is giving rise to huge new opportunities for both individual organizations and global commerce as a whole – this is being dubbed the API Economy.

The computing trifecta of mobile, social, and cloud computing is changing the landscape of computing and business at an unprecedented speed and scope. This combination is also having a huge impact on the nature of data and control flows between servers, resources, devices, products, partners and customers. Many previous notions of how an organization might set up its internal systems or deliver services / products to its customers are being revolutionized.

The means by which these new data and control flows occur is through the APIs that enable an organization to define how data passes between their internal systems, to and from their partners and to / from their customers. Whilst high tech media and eCommerce companies began the deployment of this technology, it is rapidly spreading to all sectors of the economy from healthcare to farming equipment, manufacturing, sports clothing and much more.

Furthermore, it has become increasingly obvious that the APIs deployed by one company do not just affect its own pre-existing internal teams, customers and partners. Very often APIs open the door to large new product, partner and customer opportunities. When deployed, these APIs become part of a wider ecosystem of available building blocks for other organizations to build new exciting services on top of – many with, in turn, their own APIs.

As a result of these shifts, it is imperative that organizations understand the emerging API Economy, its implications and what actions should be taken to embrace its rapid emergence.

Before detailing the first axiom, let's review all five.

The five axioms are as follows:

  1. Everything and everyone will be API enabled
  2. APIs are core to every cloud, social and mobile computing strategy
  3. APIs are an economic imperative
  4. Organizations must provide their core competence through APIs
  5. Organizations must consume core competences of others through APIs

This is the first blog post in this series. It will be followed by four more posts covering axioms #2, #3, #4 and #5.

Axiom #1: Everything and Everyone will be API-enabled

In other words, every crafted object and structure, and every individual person, as well as many natural objects, structures and beings, will become addressable by API and make use of APIs. Not only that, but the infrastructure used to create and manage APIs for everything and everyone will also be API-enabled.

On the surface this looks like an impossibly bold claim – why would this occur, and how? On closer examination, however, it becomes obvious that this is the inevitable outcome of current strong technology trends.

There are three forces driving the "API-ification" of everything:

  1. The emergence of wireless mobile devices as the dominant form of interface technology for most computing systems.
  2. The explosion in connected device numbers, configurations, types and form factors – from large to incredibly tiny – driven by the Internet of Things.
  3. The extension and adaptation of integration technologies previously only used in large enterprises into more open forms that permit cross department and cross organizational systems integration at a fraction of the costs of previous technologies.

The first driving force means that networked communication at distance without wires has become ubiquitous in almost all modern environments – in other words devices can communicate and act at distance near seamlessly. The second force has made it possible for many computing form factors to be deployed in these wireless environments – some in the form of large objects such as televisions or cars, some in powerful small mobile devices such as phones, others as tiny passive sensors or identifiers. The last force is the increasing ease of software-to-software communication between systems operating on these diverse devices and/or on remote servers many miles away.

Devices tie into the identities of individuals and objects, sometimes even just to physical spaces, and are able to transmit and receive data. This data in turn permits not only information capture but also control of actions to be taken (for example, a thermostat automatically changing a temperature setting based on settings inferred on a cloud server from data captured the previous day, or from the proximity to home of the owner's smartphone).
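To make the control flow concrete, here is a minimal sketch of the thermostat example: a cloud service infers a setpoint from yesterday's readings and the owner's phone location, then builds the API call it would send to the device. The endpoint URL, payload fields and back-off rule are all invented for illustration.

```python
from statistics import mean

def infer_setpoint(yesterday_temps_c, owner_distance_km):
    """Infer a target temperature from yesterday's readings, backing
    off to an eco setting when the owner's phone is far from home."""
    comfort = round(mean(yesterday_temps_c), 1)
    return comfort if owner_distance_km < 5 else comfort - 3.0

def thermostat_command(setpoint_c):
    """Build the call the cloud service would send to the thermostat's
    API; in practice this would be an HTTP request to the device."""
    return {"method": "PUT",
            "url": "https://device.example/api/v1/thermostat/setpoint",
            "body": {"celsius": setpoint_c}}

# The owner's phone is 1.2 km from home, so keep the comfort setting:
cmd = thermostat_command(infer_setpoint([20.5, 21.0, 21.5], owner_distance_km=1.2))
```

No human touches any step of this loop: data captured yesterday drives an action taken today, entirely via APIs.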

Each compute enabled device, each software application running on such a device or a remote server, and each networked environment provides interfaces to the outside world that can be identified and accessed.

Mobile and Wireless Computing

Data about mobile and wireless growth abounds; for example, an article in Forbes by Cheryl Snapp Conner shows what is happening in the mobile segment of the market.

  1. There are 6.8 billion people on the planet at present. 4 billion own mobile phones. (But only 3.5 billion use a toothbrush. Oy!) Source:
  2. Twenty five percent of Americans use only mobile devices to access the Internet.  (Source:
  3. There are 5x as many cellphones in the world as PCs. (Source: ImpigoMobile)
  4. 82 percent of U.S. adults own a cellphone. (Source: Pew Reports 2010)
  5. There are 271 million mobile subscribers in the U.S. alone.

And what about growth? Gartner recently published these numbers.

World Wide Devices Shipments by Segment (Thousands of Units)

Device Type                    2012        2013        2014
PC (Desk-based and Notebook)   341,273     305,178     289,239
Ultramobile                    9,787       20,301      39,824
Tablet                         120,203     201,825     276,178
Mobile Phone                   1,746,177   1,821,193   1,901,188
Total                          2,217,440   2,348,497   2,506,429


Table 1–Mobile Device Growth Projections Source: Gartner

These numbers are staggering: 2.4 billion devices in 2013 and 2.5 billion in 2014.

In terms of raw volume of wirelessly networked devices, we are clearly well past the tipping point of adoption and on the road to near-ubiquitous deployment of such technology.

A Myriad of Devices

Mobile computing is one of the most important forces driving growth in both wireless networking and devices. As the data indicates, the mobile device is rapidly becoming the preferred entry point to information and the Internet. In today's world, information rarely remains static; it is in a state of constant change.

People use myriad mobile apps to keep track of and manage all sorts of information relevant to their day-to-day routines. This information includes location, intention, scheduling, contacts, meetings, business, context, shopping and relationships, to name but a few. It is always changing, and people rely on apps on their mobile devices to track and manage those changes.

However, smartphones are only the tip of the iceberg in terms of what computing devices are being distributed in the world. Consider the number of devices that will be part of the Internet of Things. These numbers do not include the devices shown in Table 1 but are "things" that will be connected to the Internet. These devices have form factors of everything from lightweight plastic fitness trackers such as the Fitbit to low-power Bluetooth beacons such as the Estimote. Many everyday objects now come with sensors and networking capability embedded. Sensor density is also rapidly increasing, with compact gadgets such as the wireless Quirky sensor now packing humidity, light, sound and motion sensors into a package for under $50.

In addition, a new hobbyist and maker community is adding devices of its own, driven by the Raspberry Pi, Arduino and other high-function, affordable microcontroller boards.

Cisco predicts that by 2020 there will be 50 billion devices connected to the Internet of Things.

This is a huge number of potential compute enabled endpoints.


Figure 1—Number of Connected Objects

Software-to-Software Communication

Since its emergence and mass adoption, interactions on the World Wide Web have been fundamentally human-driven. Human users browse information, upload content, download data and click controls in order to generate effects, both virtual (such as a video playing) and physical (such as a purchased book being shipped from a warehouse). This setup enabled a huge diversity of applications: on the basis of a few simple rules and patterns, a vast number of different services could be built, and human users were flexible enough to understand the purpose of each site or application and act accordingly to operate it.

Today, however, human-driven Web activity is rapidly being caught up by software-to-software activity: traffic between systems with no human in the loop and automated transfer of data and control actions. This is occurring because:

  • Mobile devices are increasingly acting autonomously and pre-emptively to fetch data from and deliver data to remote systems.
  • Many new device form factors include no keyboards, screens or other means of data input/output – hence their onboard software is only able to communicate with other specialized software on other devices or on remote servers.
  • Many modern applications require high rates of highly accurate transaction flow – with millions of similar calls per day which must be carried out stably and efficiently in a way no human could drive.
  • Almost no modern software application developed today – whether a Web application or one running locally on a device – functions in a completely standalone manner any more. That is, it is very often the case that some part of the functionality requires direct, coordinated communication with some other piece of software.
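The points above can be illustrated with a small sketch of machine-to-machine traffic, here simulated in-process rather than over a network; the endpoint path, payload shape and handler are hypothetical:

```python
import json

def handle_telemetry(request):
    """Server-side software accepting readings from device software;
    the whole exchange is machine-to-machine, with no human in the loop."""
    payload = json.loads(request["body"])
    return {"status": 200, "accepted": len(payload["readings"])}

def report(device_id, readings):
    """Device-side software posting its readings; a screenless sensor
    has no other way to communicate than software-to-software calls."""
    request = {"method": "POST",
               "path": f"/api/v1/devices/{device_id}/telemetry",
               "body": json.dumps({"readings": readings})}
    return handle_telemetry(request)   # would travel over HTTP in practice

resp = report("sensor-42", [{"humidity": 0.61}, {"lux": 230}])
```

A deployment would run `report` on a schedule, millions of times a day across a fleet, at a pace no human operator could drive.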

Why APIs will be ubiquitous

As these trends continue, they reinforce the need for APIs for every device and person. Almost all devices will require:

  • A hardware substrate of some kind.
  • Control software.
  • An interface by which the device can be reached.
  • A number of remote interfaces the device may call from time to time.
  • One or more identities by which the entity can be addressed.
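That checklist might be captured in code roughly as follows; the field names and sample values are invented for illustration, not a real device schema:

```python
from dataclasses import dataclass, field

@dataclass
class ApiEnabledDevice:
    """The device checklist above as a data structure."""
    hardware: str                 # the hardware substrate
    control_software: str         # its control software / firmware
    inbound_endpoint: str         # the interface by which it can be reached
    outbound_endpoints: list = field(default_factory=list)  # remote APIs it calls
    identities: list = field(default_factory=list)          # addresses it answers to

thermostat = ApiEnabledDevice(
    hardware="ARM Cortex-M4",
    control_software="firmware 2.1.0",
    inbound_endpoint="https://device.example/api/v1/thermostat",
    outbound_endpoints=["https://weather.example/api/forecast"],
    identities=["urn:device:thermostat:42"],
)
```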

A natural person (or potentially another creature or natural object) may become associated with one or more digital devices that can receive incoming messages and, conversely, send data to one or more remote systems.

Existing identity mechanisms such as email addresses, Twitter handles, phone numbers and the like already permit data exchange, and as devices become more customized and embedded about the person, the ability for an individual to receive information (which in some cases will cause them to act) and stream data to a remote location will increase.


As we’ve already pointed out, the claim that everything and everyone will be API-enabled is mind-boggling in itself. But as we move down the path of universal connectivity, collaboration and interoperability, it seems inevitable that it will come to pass. That is not to say there are no huge obstacles to accomplishing this grand vision, but this nonetheless seems to be the path we are on.

Axiom number 2 “APIs are core to every cloud, social and mobile computing strategy” is up next week.


Identity and the API Economy

November 14th, 2013 · Essays, feature, Identity, Internet of Things, lexicon, Links, The API Economy


This is also posted as a guest blog on the 3Scale web site.


I recently attended and participated in the API Strategy and Practice conference in San Francisco, Oct 23-25. This was an exceptional conference; I met a lot of new people and learned many new things.

I took part in a panel discussion concerning the API Economy. The time slot was too short to go very deep into the subject, but the moderator, Brian Proffitt, did a great job and the questions were interesting and relevant.

In retrospect, something was curiously missing from all of the presentations and conversations at the conference: discussion of the role of identity and access management in APIs. It is clear that the API community doesn’t yet get that API ubiquity is as much an identity problem as it is an API design and maintenance problem.

Making Some Distinctions

To understand the connection, some definitions are in order. Identity management is complicated, and so is its lingo. Understanding the lexicon is the only way to really get a handle on how this stuff works. The distinctions here are not exhaustive, but they should be enough to make my point. As a reminder, I am not insisting that anyone adopt these definitions as canonical and disregard definitions that might already be in place; but to be rigorous in explaining the relationship between identity and the API Economy, it is essential that a baseline ontology be established and used for discussion.

Kim Cameron’s famous blog provides great information about definitions and functions surrounding IdM. Pay special attention to the Laws of Identity.

I love the sequence he provides explaining the definitions of IdM basics. Note that these definitions are from Microsoft and not everyone has adopted them. For example the OASIS SAML working group has their own glossary of terms for IdM. I like them both but I think the terms Microsoft uses are a little more rigorous for the point I am making.

There are a number of definitions pertaining to subjects, persons and identity itself:

Identity:  The fact of being what a person or a thing is, and the characteristics determining this.

This definition of identity is quite different from the definition that conflates identity and “identifier” (e.g. being called an identity).  Without clearing up this confusion, nothing can be understood.   Claims are the way of communicating what a person or thing is – different from being that person or thing.  An identifier is one possible claim content.

We also distinguish between a “natural person”, a “person”, and a “persona”, taking into account input from the legal and policy community:

Natural person:  A human being…

Person:  an entity recognized by the legal system.  In the context of eID, a person who can be digitally identified.

Persona:  A character deliberately assumed by a natural person

A “subject” is much broader, including things like services:

Subject:  The consumer of a digital service (a digital representation of a natural or juristic person, persona, group, organization, software service or device) described through claims.

And what about user?

User:  a natural person who is represented by a subject.

The entities that depend on identity are called relying parties:

Relying party:  An individual, organization or service that depends on claims issued by a claims provider about a subject to control access to and personalization of a service.

Service:  A digital entity comprising software, hardware and/or communications channels that interacts with subjects.

The other section that needs to be included here is Kim’s distinctions about claims. Read this carefully.

Let’s start with the series of definitions pertaining to claims.  It is key to the document that claims are assertions by one subject about another subject that are “in doubt”.  This is a fundamental notion since it leads to an understanding that one of the basic services of a multi-party model must be “Claims Approval”.  The simple assumption by systems that assertions are true – in other words the failure to factor out “approval” as a separate service – has led to conflation and insularity in earlier systems.

Claim:  an assertion made by one subject about itself or another subject that a relying party considers to be “in doubt” until it passes “Claims Approval”

Claims Approval: The process of evaluating a set of claims associated with a security presentation to produce claims trusted in a specific environment so they can be used for automated decision making and/or mapped to an application-specific identifier.

Claims Selector:  A software component that gives the user control over the production and release of sets of claims issued by claims providers.

Security Token:  A set of claims.

The concept of claims provider is presented in relation to “registration” of subjects.  Then claims are divided into two broad categories:  primordial and substantive…

Registration:  The process through which a primordial claim is associated with a subject so that a claims provider can subsequently issue a set of claims about that subject.

Claims Provider:  An individual, organization or service that:

Registers subjects and associates them with primordial claims, with the goal of subsequently exchanging their primordial claims for a set of substantive claims about the subject that can be presented at a relying party; or

Interprets one set of substantive claims and produces a second set (this specialization of a claims provider is called a claims transformer).  A claims set produced by a claims provider is not a primordial claim.

Claims Transformer:  A claims provider that produces one set of substantive claims from another set.

To understand this better let’s look at what we mean by “primordial” and “substantive” claims.  The word “primordial” may seem strange at first, but its use will be seen to be rewardingly precise:  Constituting the beginning or starting point, from which something else is derived or developed, or on which something else depends. (OED)

As will become clear, the claims-based model works through the use of “Claims Providers”.  In the most basic case, subjects prove to a claims provider that they are an entity it has registered, and then the claims provider makes “substantive” claims about them.  The subject proves that it is the registered entity by using a “primordial” claim – one which is thus the beginning or starting point, and from which the provider’s substantive claims are derived.  So our definitions are the following:

Primordial Claim: A proof – based on secret(s) and/or biometrics – that only a single subject is able to present to a specific claims provider for the purpose of being recognized and obtaining a set of substantive claims.

Substantive claim:  A claim produced by a claims provider – as opposed to a primordial claim.

Passwords and secret keys are therefore examples of “primordial” claims, whereas SAML tokens and X.509 certificates (with DNs and the like) are examples of substantive claims.

Some will say, “Why don’t you just use the word ’credential’”?   The answer is simple.  We avoided “credential” precisely because people use it to mean both the primordial claim (e.g. a secret key) and the substantive claim (e.g. a certificate or signed statement).   This conflation makes it unsuitable for expressing the distinction between primordial and substantive, and this distinction is essential to properly factoring the services in the model.
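The primordial/substantive distinction can be sketched in code. This is a toy model of the flow Kim describes, not any real identity product; the names, the shared-secret scheme and the token shape are all illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    attribute: str
    value: str
    issuer: str = None        # None marks a claim the subject asserts itself

class ClaimsProvider:
    """Registers subjects with a primordial claim (here a shared secret)
    and exchanges it for substantive claims. A real provider would use
    hashed credentials and signed tokens; this only shows the flow."""

    def __init__(self, name):
        self.name = name
        self._secrets = {}      # subject -> primordial claim on file
        self._attributes = {}   # subject -> attributes it can attest to

    def register(self, subject, secret, **attributes):
        self._secrets[subject] = secret
        self._attributes[subject] = attributes

    def issue(self, subject, secret):
        """Exchange a primordial claim for a set of substantive claims
        (a security token, in the vocabulary above)."""
        if self._secrets.get(subject) != secret:
            raise PermissionError("primordial claim rejected")
        return [Claim(subject, k, v, issuer=self.name)
                for k, v in self._attributes[subject].items()]

idp = ClaimsProvider("example-idp")
idp.register("alice", secret="s3cret", role="admin")
token = idp.issue("alice", "s3cret")    # substantive claims about alice
```

Note how the secret never appears in the issued claims: the primordial claim is only the starting point from which the substantive claims are derived.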

Systems that manage Identity are referred to as Identity and Access Management (IAM) or just Identity Management (IdM). IdM components are divided into processes distinguished as Authentication and Authorization (AuthN/AuthZ).


Authentication is the process of confirming the veracity of a specific attribute of a subject. There are "factors" of authentication, often combined as "multi-factor authentication". The ways in which a subject can be authenticated are by something the subject has, something the subject knows, and something the subject is or does. Note that authentication is not just about people but about other subjects too, like an app or a piece of code that represents a machine, a device or a service. The most common factor used in AuthN is a subject's name. But any number of factors of any factor type can be used.


Authorization is a process distinct from authentication. Authentication verifies that a subject "is who it says it is"; authorization verifies that it is permitted to "do what it is trying to do." The distinction is a little subtle, but suffice it to say that authorization presupposes authentication, so they are separate processes.
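The separation might be sketched like this; the user table, permission sets and status strings are invented for illustration:

```python
USERS = {"alice": "s3cret"}                # credentials on file
PERMISSIONS = {"alice": {"read", "write"}} # what each subject may do

def authenticate(name, password):
    """AuthN: is the subject who it says it is?"""
    return USERS.get(name) == password

def authorize(name, action):
    """AuthZ: is this already-authenticated subject allowed to act?"""
    return action in PERMISSIONS.get(name, set())

def handle_request(name, password, action):
    # AuthZ presupposes AuthN, so the checks must run in this order:
    if not authenticate(name, password):
        return "401 Unauthorized"
    if not authorize(name, action):
        return "403 Forbidden"
    return "200 OK"
```

The two checks answer different questions and fail in different ways, which is exactly why they are kept as separate processes.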

Due to the many breaches of IdM systems in recent months, it is becoming more and more popular to require a "two-factor" authentication process. We should expect multi-factor authentication to become more and more a part of our lives in the future. I don't want to get sidetracked on multi-factor IdM, so read Dave Kearns' blog post for a good little primer and some links.

In summary, AuthN/AuthZ are the processes that prove the validity of a subject's claims, perhaps a name and password, to a particular system.

The Problem with AuthN/AuthZ

Nobody really needs to explain the problem in too much detail because everybody that uses the internet experiences the problem every day in excruciating detail.

The problem is that every system, every service and every device has its own processes for AuthN/AuthZ. In other words, everything has its own name and password. We are drowning in what I call "name and password fatigue."

The panacea for resolving name and password fatigue is a solution in which one name and password provides secure AuthN/AuthZ to all systems. This solution is referred to as Single Sign-On (SSO). With SSO, all entities can properly and securely access information and resources without having to provide new credentials.
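A minimal sketch of the SSO idea, assuming a single identity provider that every relying service trusts; the token scheme is deliberately naive and purely illustrative:

```python
class IdentityProvider:
    """A toy single sign-on hub: the user authenticates once, then every
    relying service verifies the same token. Token format, storage and
    expiry are all simplified for illustration."""

    def __init__(self):
        self._users = {"alice": "s3cret"}
        self._sessions = {}
        self._counter = 0

    def sign_in(self, name, password):
        if self._users.get(name) != password:
            raise PermissionError("bad credentials")
        self._counter += 1
        token = f"tok-{self._counter}"
        self._sessions[token] = name
        return token                      # one credential, one sign-in

    def whoami(self, token):
        """Any relying service checks the token instead of asking the
        user for yet another name and password."""
        return self._sessions.get(token)

idp = IdentityProvider()
token = idp.sign_in("alice", "s3cret")
# Mail, calendar and billing services could all accept this same token:
user = idp.whoami(token)
```

The hard part, of course, is not the mechanism but getting billions of systems to trust a common set of providers.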

Some say we are close to reaching the panacea of a usable SSO environment.


You wish.

The advent of mobile computing, cloud computing and social computing, coupled with the burgeoning API Economy just compounds the problem to unbelievable heights of complexity and difficulty. Don’t get me wrong, there are cool systems that have made tremendous strides towards SSO, there is just so much more to do.

The Cambrian Explosion of Everything

What is going on is so astounding that Peter Van Auwera calls it the Cambrian Explosion of Everything.

Peter said:

“The Cambrian explosion was the relatively rapid appearance of most major animal life forms, accompanied by major diversification of organisms. Before, most organisms were simple, composed of individual cells occasionally organized into colonies. Over the following 70 or 80 million years the rate of evolution accelerated by an order of magnitude and the diversity of life began to resemble that of today.” (Adapted from Wikipedia )

He goes on to write that we are experiencing a similar explosion in the Internet of Everything. The next set of numbers will show you that Peter is exactly right.

My good friend Cheryl Snapp wrote an article in Forbes recently with some incredible facts about mobile devices. The article is mostly about mobile marketing, which is not my focus, but some of the research facts will help prove my point.

  1. There are 6.8 billion people on the planet at present. 4 billion own mobile phones. (But only 3.5 billion use a toothbrush. Oy!) Source:
  2. As noted 91 percent of adults have their smartphones within arm’s reach. And that was as of 2007. I would wager the sum is closer to 100 percent now. (Source: Morgan Stanley)
  3. Twenty five percent of Americans use only mobile devices to access the Internet.  (Source:
  4. There are 5x as many cellphones in the world as PCs. (Source: ImpigoMobile)
  5. 82 percent of U.S. adults own a cellphone. (Source: Pew Reports 2010)
  6. There are 271 million mobile subscribers in the U.S. alone.

And what about growth? Gartner recently published these numbers.

World Wide Devices Shipments by Segment (Thousands of Units)

Device Type                    2012        2013        2014
PC (Desk-based and Notebook)   341,273     305,178     289,239
Ultramobile                    9,787       20,301      39,824
Tablet                         120,203     201,825     276,178
Mobile Phone                   1,746,177   1,821,193   1,901,188
Total                          2,217,440   2,348,497   2,506,429

That’s right, 2.4 billion devices in 2013 and 2.5 billion in 2014.

It gets better (or worse depending how you view it), Gartner also predicts that there will be over 81 billion apps downloaded in 2013.

I could give you even more numbers about social computing and cloud computing that make this phenomenon even crazier, but I think you get my point.

It’s an Identity Problem

I will make one more bold statement. I believe we are headed to a point where:

Everyone and everything will be API Enabled.

source: Craig Burton

Consider the implications. Billions and billions of things and people will need to be able to properly, securely and quickly AuthN/AuthZ to each other at any time with as few names and passwords as possible. At the same time, every entity—billions and billions—must have unique and secure names and passwords. Tough problem.

As an industry, we are struggling to build systems that let people properly, securely and quickly log in to a few hundred systems. We have our work cut out for us. At times it seems as if the password fatigue problem is unsolvable.

Something happened over 500 million years ago to trigger the Cambrian explosion. We don't know what it was. Something is happening now to trigger the solution to the Cambrian Explosion of Everything: it is the API Economy itself.

Solving the problem requires the ability for machines to help us. Machines need programs and APIs to allow the solution to the problem to be automated. The API Economy provides the framework for an automated solution to come into existence.


In a digital world where there are billions and billions of entities that require proper and secure identity management, we will need to build systems, services and apps that help us automate the creation, evolution and maintenance of the solution.

There is no better time for the API Economy to get a foothold and evolve into a capable framework that will let us solve the identity problem soon.


The Age of Context and the API Economy

November 2nd, 2013 · Daily Thesis, The API Economy



I participated in a panel on the API Economy at the APIStrat Conference last week. The conference was fabulous and well attended. I learned a lot. Steve Willmott, the co-organizer and CEO of 3Scale, invited me to write a blog post for 3Scale, and I am cross-posting it here. Thanks for the invitation to the conference and to write a little piece.


One can’t exist without the other

Veteran writers and pundits Shel Israel and Robert Scoble recently published their new book, “The Age of Context.” The thesis of the book is solid; I think the book is spot on and relevant to what is really happening. At the same time, it misses a critical ingredient in their predicted perfect storm. Indeed, without it, I predict the Age of Context will never come into existence.

Storm’s Coming

The opening thesis of the book draws on the 2005 Batman movie, in which a mysterious caped person suddenly appears and warns Commissioner Gordon: “Storm’s coming.”

“For the next two hours of the movie all hell breaks loose. Finally peace is restored. When people resume their lives after so much tumult and trouble, they discover that life after the storm was better than it was before.”

Scoble and Israel go on with the storm metaphor:

“We are not caped crusaders, but we are here to prepare you for an imminent storm. Tumult and disruption will be followed by improvements in health, convenience, safety and efficiency. In his book The Perfect Storm, author Sebastian Junger described a rare but fierce weather phenomenon caused by the convergence of three meteorological forces: cold air, hot air and tropical moisture. Such natural occurrences cause 100-ft waves, 100-mph winds and—until recently—occurred every 50 to 100 years.”

“Our perfect storm is not made up of three forces but five, and they are technological, not meteorological: mobile devices, social media, sensors, big data, and location-based services.”

What the book misses

The rest of the chapters are a good read, and while I disagree with a few of the points, overall the book nails it.


There is another force so important and so critical to the success of the Age of Context that it needs to be examined and understood. The critical force I speak of is: The API Economy.

The five technological forces Scoble and Israel identify must be programmatically interconnected via well-known simple APIs.

In short, the Age of Context will never happen properly without a vibrant proactive API Economy.

An API Economy includes the technology, API Management and an active development community interested in participating.

Further, the effect of adding this force to the equation will eclipse any technological “storm” anyone has ever seen or even thought of. The storm metaphor is a good one; but add the force of the API Economy and, in geological/meteorological terms, think tectonic plate shifts and a global tsunami.

The API Economy

The emerging economic effects enabled by companies, governments, non-profits and individuals using APIs to provide direct programmable access to their systems and processes.

Some well-known examples of significant APIs include the Netflix phenomenon. Roku, Amazon, eBay, Google, Microsoft, Facebook, Twitter, LinkedIn and Twilio are all examples of companies using APIs to change their business and even how the Internet is used and how it works. The health care industry is also very active in providing APIs; ProgrammableWeb shows there are 129 medical APIs. The controversial web site has a powerful developer program with APIs.

ProgrammableWeb tracks openly published APIs and categorizes them on a daily basis. It now has over 10,000 APIs listed, and growth continues at over 100% compound annual growth.

Needless to say, it is very important for every organization to understand what its role is going to be in this critical emerging API Economy.


There is no question in my mind about the Age of Context and its subsequent disruption and following benefits. We are just not recognizing that the ultimate success of the Age of Context will be enabled by the advent of API ubiquity.







Privacy and Prohibition

July 21st, 2013 · Daily Thesis, Identity, privacy


I was watching Ken Burns’ documentary on Prohibition tonight, and it was powerful and relevant.

“The history of the United States can be told in eleven words: Columbus, Washington, Lincoln, the Volstead Act, two flights up and ask for Gus.
The New York Evening Sun”

The Volstead Act was the name of the law that enforced the 18th Amendment, national prohibition.

Pete Hamill said, “Never underestimate the need for young dopes to defy the conventional laws. You want them to brush their teeth, make toothpaste illegal. And sure enough, they will be up on the mountain brushing their teeth. It is part of human nature.”

The next quote I will give is a killer.

The first incident of wiretapping was in 1922, of Roy Olmstead, the prohibition kingpin of Seattle. The federal judge ruled that the courts could use wiretaps as evidence even without a warrant. Olmstead was found guilty and sentenced to four years in prison.

“Olmstead and his lawyers were still confident that the wiretaps violated the constitutional ban on unreasonable search and seizure, and appealed the verdict all the way to the Supreme Court. A violation of the Volstead Act had turned into something far larger. The Court upheld Olmstead’s conviction 5 to 4. But Justice Louis Brandeis acknowledged Olmstead’s concerns, and for the first time in a federal judicial proceeding asserted that embedded in the American Constitution was a right to privacy.”

And then, he went on “the greatest dangers to liberty lurk in insidious encroachment by men of zeal—well meaning, but without understanding. To declare that in the administration of the criminal law, the end justifies the means, to declare that the government may commit crimes in order to secure the conviction of a private criminal, would bring terrible retribution.”

The Supreme Court eventually reversed itself.

Everyone should watch this.



More Consolidation for the API Economy

April 24th, 2013 · feature, Identity, The API Economy


CA Technologies acquires Layer 7, MuleSoft acquires Programmable Web, 3Scale gets funding

It is clear that the API Economy is kicking into gear in a big way. Last week, Intel announced its acquisition of Mashery; this week, CA Technologies announced its acquisition of Layer 7, MuleSoft announced its acquisition of ProgrammableWeb, and 3Scale closed a round of funding for $4.2M.

Money is flooding into the API Economy as the importance of APIs only heightens. Expect this trend to continue.

The upside of this flurry of activity is the focus being given to the API Economy.

But here is my assessment.

CA’s acquisition of Layer 7 doesn’t necessarily bode well for Layer 7 or its customers. As a large vendor, CA will probably take longer to define and deliver on the roadmap than Layer 7 would have independently, but it might put far more power behind that roadmap and its execution. Layer 7 needs an upgrade and needs to move to the cloud. CA has a clear cloud strategy it executes on: look at IAM and Service Management, where a large portion of the products is available as cloud services. There is strong potential for CA putting far more pressure behind the required move of Layer 7 to the cloud. Let’s see what happens there.

MuleSoft’s acquisition of ProgrammableWeb is a little weird. John Musser is an independent, well-spoken representative of the API Economy. MuleSoft has an agenda with its own platform. Does MuleSoft let Musser continue to be an independent spokesperson? Where does this lead? All answers unknown.

3Scale closed a round of funding for $4.2M. It plans to use the funds to add more extensions to the product and grow its international distribution.

Lots of activity here. Curious to see what happens next.

However, one thing is clear: The API Economy is going mainstream.


Intel Announces Mashery Acquisition

April 22nd, 2013 · feature, Open API Economy


From partnership to acquisition

Let there be no confusion. Intel is a hardware company. It makes microchips. This is its core business. History shows that companies do best when they stick to their roots. There are exceptions.

At the same time, Intel has always dabbled in software at some level. Mostly in products that support the chip architecture. Compilers, development tools and debuggers.

From time to time, however, Intel ventures into the software business with more serious intentions.

Back in 1991, Intel acquired LAN Systems in an attempt to get more serious about the LAN utility business. This direction was later abandoned and Intel went back to its knitting as a chip vendor.

Recently, Intel has again become serious about being in the software business. Its most serious foray was the purchase of McAfee in 2010, to the tune of some $7.6 billion. A pretty serious commitment.

We wrote recently about Intel’s intent to be a serious player in the Identity Management business with its composite Expressway API Management platform.

With that approach, Intel was clear that it had an “investment” in Mashery that would remain an arm’s-length relationship, best supporting the customer and allowing maximum flexibility for Mashery. In general, I like this approach better than an acquisition. Acquisitions of little companies by big companies don’t always turn out for the best for anyone.

Since then, it is clear that Intel management has shifted its view and thinks that outright ownership of Mashery is a better plan.

While we agree that outright ownership can mean more control and management of direction, it can also mean the marginalization of an independent group that could possibly act more dynamically on its own.

It is still too early to tell exactly how this will turn out for Intel and its customers; it will be important to watch how the organization is integrated into the company.


The Dark Side of Cloud Computing

April 19th, 2013 · Coding, feature


When things go bad, it goes really bad

At KuppingerCole we use Office365 extensively to manage our documents and keep track of document development and distribution.

On April 9, 2013, Microsoft released a normal-sized Tuesday update to Windows and Office products. The only thing is, this time the update completely broke the functionality of Office 365 and Office 2013. Trying to open a document stored in SharePoint would result in a recursive dialog box asking you to authenticate to the SharePoint server. The same thing would happen when trying to upload a document. Excel and PowerPoint documents had the same problem.

Going to the Office365 forum resulted in a bevy of customers complaining about the problem. A Microsoft tech support person was offering possible solutions, all of which were just time wasters and solved nothing.

“First, please run desktop setup by following Set up your desktop for Office 365.
If the issue persists, please remove saved login credentials from the Windows Credential Manager and then sign into the MS account.”

Finally, two days later a customer posted a solution.

“KB2768349 is definitely the culprit. I uninstalled this on Windows RT and login worked again across all Office 2013 RT apps. Reinstalling broke it. Uninstalling again fixed it.
Replicated on my Windows 8 desktop with Office 2013.
For the time being I have hidden KB2768349 from Windows Update until this is fixed.”

As soon as I deleted the KB2768349 update the problem went away. I also learned what “hiding” an update entails.

For those of you dying to know, here is how you fix this thing.

Control Panel > Windows Update > View update history > Installed Updates

Scroll down through the Office 2013 updates until you find KB2768349. Select it and then uninstall.

Of course, once you uninstall an update, it’s going to show back up again and try to reinstall. The way you prevent this is to “hide” the update so it doesn’t keep showing up. To hide an update, open Windows Update, right-click the update you want to hide, and select “Hide update.” There you go.

So for two days the normal operation of Office365 was frustratingly broken. Now this was not just for me and my colleagues, but for everyone on the planet that used Office365 and installed these updates. At the same time, the fix applies to everyone on the planet using Office365 as well. In other words, critical apps in the cloud that go bad, go bad hard. They also heal big. Part of the deal.

I was surprised that I was the only one tweeting and complaining about it. I didn’t see one article or public view on this major screw up. The only place I saw any complaining was on the Office365 forum. So glad that was happening.


Another Case for IDMaaS

April 17th, 2013 · feature, Identity, life


Identity Management is a universal problem

When I pay my electric bill I usually just call the power company and give them my credit card. This month I decided that I should go set up auto payments on the web site and be done with it. So I opened the power company web site and attempted to login. Clearly the site recognized me, the login name I usually use was being recognized, but I just could not remember my password. I tried all of the normal passwords I use and none of them were working.

So I attempted to retrieve my password. The site gave me the option of having a password reset sent to my email address or answering secret questions. I opted to have it sent to my email address. I waited. Nothing showed up in my email box. I looked in the spam folder; still nothing. I went back to the web site and this time opted to answer the secret question: “What is your favorite color?” Oh man, I don’t know. It depends on my mood and what day it is. I don’t remember what I put in there for my favorite color. OK, let’s try “Blue.” Good, that worked. Wow. I am in. Hey. This isn’t my account? WTF?

Now, I know there are two other Craig Burtons living in Utah. Apparently I had just accessed the electricity billing account of one of them by guessing both the user name and the secret answer. And the secret answer was “Blue”?

Off the top of my head, I would say the electric company has a severe security hole.
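To put a rough number on how weak that secret is, here is a back-of-the-envelope sketch. The color distribution below is my own assumed illustration, not survey data:

```python
import math

# Assumed distribution of "favorite color" answers, for illustration only.
color_shares = {
    "blue": 0.35, "red": 0.15, "green": 0.14,
    "purple": 0.10, "black": 0.08, "other": 0.18,
}

# Shannon entropy of the answer distribution, in bits.
entropy_bits = -sum(p * math.log2(p) for p in color_shares.values())
print(f"entropy: ~{entropy_bits:.1f} bits")  # ~2.4 bits

# Guessing the most common answer first succeeds about a third of the time.
print(f"first-guess success: {max(color_shares.values()):.0%}")  # 35%
```

A couple of bits of entropy, versus the 40-plus bits of even a modest random password. A “secret” question like this is barely a secret at all.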

Of course I didn’t do anything to this account. I could see that his email address was just sent another request to change the name and password. I hope he did that.

This was an ugly incident that could have been much uglier if I was malicious.

Here is my point: a uniform cloud-based Identity Management system could be used to prevent this kind of thing. As it stands, every single web site has its own set of code used to prevent inappropriate access, a scenario bound to create the blatant hole I ran into.

Of course the other side of the coin is that if the cloud-based identity management system had a hole in it, everybody would have the hole. Then again, the fix would fix everybody. Trade-offs but I still think the cloud-based Identity Management as a Service is where we are headed in the future.


The Façade Proxy

March 18th, 2013 · Coding, feature, The API Economy



Securing BYOD

With the rapidly emerging cloud-mobile-social Troika coupled with the API Economy, there are many questions about how to design systems that allow application access to internal information and resources via APIs without compromising the integrity of enterprise assets. And on the other hand, how do we prevent personal information from propagating inappropriately as personal data stores are processed and accessed? Indeed, I have read many articles lately that predict utter catastrophe from the inevitable smartphone and tablet application rush that leverages the burgeoning API Economy.

In recent posts, I have posited that one approach to solving the problem is by using an IdMaaS design for authentication and authorization.

Another proposed approach—that keeps coming up—is a system construct that is referred to as the “Façade Proxy.”

A good place to start understanding the nature of Facades is an article by Bruno Pedro entitled “Using Facades to Decouple API Integrations.”

In this article Bruno explains:

A Façade is an object that provides simple access to complex – or external – functionality. It might be used to group together several methods into a single one, to abstract a very complex method into several simple calls or, more generically, to decouple two pieces of code where there’s a strong dependency of one over the other.


Figure 1 – Facade Pattern Design. Source: Cloudwork

What happens when you develop API calls inside your code and, suddenly, the API is upgraded and some of its methods or parameters change? You’ll have to change your application code to handle those changes. Also, by changing your internal application code, you might have to change the way some of your objects behave. It is easy to overlook every instance and can require you to double-check multiple lines of code.
There’s a better way to keep API calls up-to-date. By writing a Façade with the single responsibility of interacting with the external Web service, you can defend your code from external changes. Now, whenever the API changes, all you have to do is update your Façade. Your internal application code will remain untouched.
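Bruno’s idea can be sketched in a few lines. This is my own illustration, not code from his article; the class name, endpoint path, and payload fields are all hypothetical:

```python
class WeatherFacade:
    """Single point of contact with a hypothetical external weather API.

    The rest of the application calls current_temp(); if the provider
    renames fields or bumps the version path, only this class changes.
    """

    def __init__(self, fetch):
        # `fetch` abstracts the HTTP transport (e.g. a thin wrapper around
        # an HTTP client), which also lets callers test without the service.
        self._fetch = fetch

    def current_temp(self, city):
        raw = self._fetch(f"/v2/weather?city={city}")  # hypothetical path
        return raw["main"]["temp"]  # shield callers from the raw payload

# Usage with a stubbed transport standing in for the external service:
facade = WeatherFacade(lambda path: {"main": {"temp": 21.5}})
print(facade.current_temp("Barcelona"))  # 21.5
```

If the provider ships a `/v3` that nests the temperature differently, the fix is one line inside `current_temp`; no caller ever touches the raw payload.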

Shedding even more light on how a Façade Proxy is designed, and how it can be used to address yet another problem, is a blog post from Kin Lane. Kin is an API evangelist extraordinaire and I learn a lot from his writings. Kin recently wrote in a post entitled “An API that Scrubs Personally Identifiable Information from Other APIs”:

I had a conversation with one UC Berkeley analyst about a problem that isn’t just unique to a university, but they are working on an innovative solution for.

The problem:

UCB Developers are creating Web Services that provide access to sensitive data (e.g. grades, transcripts, current enrollments) but only trusted applications are typically allowed to access these Web Services to prevent misuse of the sensitive data. Expanding access to these services, while preserving the confidentiality of the data, could provide student and third party developers with opportunities to create new applications that provide UCB students with enhanced services.

The solution:

Wrapping untrusted applications in a “Proxied Façade Service” framework that passes anonymous tickets through the “untrusted” application to underlying services that can independently extract the necessary personal information provides a secure way of allowing an application to retrieve a Web User’s Business data (e.g. their current course enrollments) WITHOUT exposing any identifying information about the user to the untrusted application.

I find their problem and solution fascinating, I also think it is something that could have huge potential. When data leaves any school, healthcare provider, financial services or government office, the presence of sensitive data is always a concern. More data will be leaving these trusted systems, for use in not just apps, but also for analysis and visualizations, and the need to scrub personally identifiable information will only grow.
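The scrubbing half of that idea can be sketched very simply. The field names below are my own illustration, not Berkeley’s actual schema:

```python
# Fields the proxy refuses to pass through to untrusted applications.
PII_FIELDS = {"name", "email", "student_id", "ssn"}

def scrub(record):
    """Return a copy of a record with identifying fields removed."""
    return {key: value for key, value in record.items()
            if key not in PII_FIELDS}

# The upstream service returns the full record; the untrusted app
# only ever sees the scrubbed version the proxy hands it.
upstream = {"name": "Jane Doe", "student_id": "12345",
            "course": "CS 61A", "grade": "A"}
print(scrub(upstream))  # {'course': 'CS 61A', 'grade': 'A'}
```

The real design adds the anonymous-ticket plumbing so the underlying service can still resolve *which* user’s data to return, but the core guarantee is this filter: identifying fields never cross the proxy boundary.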

Finally, Intel recently announced its Expressway API Manager product suite. EAM is a new category of service that Intel is calling a “Composite API Platform.” It is so named because the platform is a composite of a premises-based gateway, which allows organizations to create and secure APIs that can be externalized, and a cloud-based API management service from Mashery designed to help organizations expose, monetize and manage APIs for developers. In its design, Intel has created a RESTful Façade API that exposes an organization’s internal information and resources to developers. It is very similar to the design approach outlined by Kin. This looks to be an elegant use of the Façade pattern to efficiently manage authorization and authentication of mobile apps to information that needs to remain secure.

Figure 2 – EAM Application Life Cycle. Source: Intel

I am learning a lot about the possible API designs—like the Façade Proxy—that can be useful constructs for organizations to successfully participate in the API economy and not give up the farm.


How to Make an API

February 21st, 2013 · Apps, feature, The API Economy



Making an API is hard, and how to do it is a tough question. A small company out of England has figured out how to let anyone make an API with just:

  1. Dropbox
  2. A Spreadsheet
  3. A Datownia SaaS account


One of the activities I practice to keep up with what is happening in the world of APIs is to subscribe to the ProgrammableWeb’s newsletter. Every week the newsletter contains the latest APIs that have been added to the rapidly increasing list. While I seldom can get through the whole list, I inevitably find one or two new APIs that are really interesting.

Recently I ran into one that has an incredibly simple and effective method of creating an API out of a spreadsheet.

The company is Datownia.

I now have an API with a developer portal that is driven by data in a spread sheet.

I can distribute developer keys to any developer I choose and then that developer can access the data and integrate it into any app.

Further, any change I make to the spreadsheet gets versioned and propagated to the API with just a click. To propagate the data, all I do is modify the spreadsheet and drop it into the linked Dropbox folder.

Here is what my spreadsheet looks like.

Here is what the JSON looks like when you make a RESTful call to the API location Datownia created for me.

So simple.
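Datownia’s service itself is proprietary, but the core trick, turning spreadsheet rows into JSON records, can be sketched with nothing but the standard library. The column names here are made up for illustration:

```python
import csv
import io
import json

# Stand-in for a spreadsheet exported as CSV (columns are illustrative).
csv_text = """product,price
widget,9.99
gadget,24.50
"""

def rows_as_json(csv_source):
    """Turn spreadsheet-style CSV into the JSON payload an API might return."""
    rows = list(csv.DictReader(io.StringIO(csv_source)))
    return json.dumps(rows)

print(rows_as_json(csv_text))
# [{"product": "widget", "price": "9.99"}, {"product": "gadget", "price": "24.50"}]
```

What Datownia layers on top of an idea like this is the part that matters commercially: hosting, developer keys, versioning, and the Dropbox-triggered publish step.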

I have been talking a lot about companies that manage already existing APIs. But what about organizations that need to create APIs?

A few weeks ago, I received an email from the CEO of Datownia offering me a small gift to chat with him about what I was doing with their technology.

Of course as an analyst I can’t accept any gifts, but I had a great conversation with William Lovegrove about the technology and where the idea came from.

From one-offs to a SaaS

Basically, William’s little consulting firm was busy building and evangelizing APIs to organizations. When a company was confronted with making an API, progress would often screech to a halt, or at least be diverted while things were sorted out. Often IT departments simply could not deal with making an API for anything. Other times they would be engaged to create a one-off API for a company.

Complicated, expensive and not very efficient.

Datownia then came up with the idea of building a service in the cloud that automates the process of building an API.

I think this is brilliant.

If you need an API, or just want to play with a prototype, you should take a look at how simple this is.

Thanks William Lovegrove and crew.
