Today I stumbled upon an extremely disturbing article that hit the mainstream. At least the “Wired” mainstream.
In early August, the enterprise security firm Armis got a confusing call from a hospital that uses the company’s security monitoring platform. One of its infusion pumps contained a type of networking vulnerability that the researchers had discovered a few weeks prior. But that vulnerability had been found in an operating system called VxWorks—which the infusion pump didn’t run.
Today Armis, the Department of Homeland Security, the Food and Drug Administration, and a broad swath of so-called real-time operating system and device companies disclosed that Urgent/11, a suite of network protocol bugs, exists in far more platforms than originally believed. These real-time operating systems are used in the always-on devices common to the industrial control and health care industries. And while they’re distinct platforms, many of them incorporate the same decades-old networking code that leaves them vulnerable to denial of service attacks or even full takeovers. There are at least seven affected operating systems that run in countless IoT devices across the industry.
“It’s a mess and it illustrates the problem of unmanaged embedded devices,” says Ben Seri, vice president of research at Armis. “The amount of code changes that have happened in these 15 years are enormous, but the vulnerabilities are the only thing that has remained the same. That’s the challenge.”
Translation: most systems used for your medical care are being hacked as I write this. If not now, soon.
Further, this is not a manageable problem. It gets scarier. If hospitals and ICUs were to throw out their existing hackable systems and replace them with BRAND NEW products, they would still be hackable. While there has been enormous change in the usability and the functionality of these devices over short periods of time, security is ALWAYS an after-thought. Nobody wants to pay for security. It should be included.
It’s not. It will never be.
It’s called Biohacking. Adding security to BioTech to prevent Biohacking (and everything else) is an Identity Problem. We need an Identity Metasystem (as Phil Windley so articulately outlines).
Here is the disconnect. Identity is complicated and highly political. All the big boys (Google, Microsoft, Apple, Facebook [I would add IBM, but they don’t matter anymore]) want to “own” your identity. Silliness. It will never happen.
In the meantime, we are all at high risk.
You think Biotech is the only problem? Think again.
It goes on forever.
Try cell phone systems. Not cell phones, but cell phone tech. The towers. The system your phone uses for seamless connectivity. It will be hacked. Not if, but when.
There is a serious smog problem in Seoul, Korea. I am sensitive to this issue since we live here in Seoul. (I love it here, but the pollution scares me.)
I’ve been dismissing the masks being worn as useless. But I decided I needed to speak from real information, not assumption.
NPR covered a study in 2016 that shows just how serious things are.
Koreans worry much more about environmental issues (air pollution is the #1 concern) than the danger from North Korea. In fact, North Korean threats rank #5 in importance. Seoul has 10.1m people in an area that covers 12% of South Korea, making it one of the most densely populated and homogeneous cities in the world. There are some 22.8m cars in Seoul. Car emissions and manufacturing produce most of the harmful emissions in Seoul.
To contrast, there are barely 3m people total in the state of Utah, and 8.6m people living in New York City.
The bottom line: to be protected from air pollution in Seoul, you must wear a mask capable of filtering out what international standards refer to as PM 2.5 (particles 2.5 microns or smaller). The cheap face masks most people wear do not even meet the requirements for PM 10 (particles 10 microns or smaller); according to Reuters, only 32% of the particulates are being filtered. That’s a whopping 68% leakage.
Hardly protective. In general, my assumptions were correct: most masks are not effective and are merely a weak fashion statement. But after doing this quick study, I learned there are affordable solutions: usable masks (more expensive but effective) that meet the PM 2.5 specs.
Make sure you have masks that have a rating of N95 or better.
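The arithmetic behind those leakage numbers is simple; here is a quick sketch (the 32% figure is from the Reuters report cited above, and the 95% figure comes from the N95 rating itself, which requires filtering at least 95% of airborne particles):

```python
# Fraction of particles that get through a mask = 1 - filtration efficiency.
def leakage(filtration_efficiency: float) -> float:
    """Return the fraction of particles NOT filtered out."""
    return 1.0 - filtration_efficiency

# A cheap fashion mask filtering only 32% of particulates (per Reuters):
print(f"Cheap mask leakage: {leakage(0.32):.0%}")  # 68%

# An N95-rated mask filters at least 95% of airborne particles:
print(f"N95 mask leakage:  {leakage(0.95):.0%}")   # 5%
```

That is the whole case for spending a little more: the better mask lets through roughly one-fourteenth of what the cheap one does.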
One of the touted benefits of iOS 12 is a new feature built into the system: Screen Time.
Screen Time is designed to help you manage the time you spend in front of your mobile device.
I fell for it. I admit.
I believed the hype telling us that we are globally out of control—duped by our smart phones.
Here is an example of the pervasive sentiment:
How to use Apple’s new Screen Time and App Limits features in iOS 12
Apple is making it easier than ever to cut back on app overload
We are being sold the idea that we need to cut back on our use of social media and technology. This has become a common belief.
Like I said, I fell for it. I cringe when Screen Time reminds me every week how much time I spend on my mobile devices.
But something just doesn’t feel right to me about the whole idea that technology is bad for you.
Then I stumbled on a book that resonates with how I feel and think about technology and popular culture.
Everything Bad is Good for You: How Today’s Popular Culture is Actually Making Us Smarter—Steven Johnson
This book has completely changed how I feel about Screen Time. I now revel in the numbers. We will need to change how we think about technology and popular culture—everything we know is wrong.
This is not a new book; it was published in 2006. So some of the references are stale, especially in light of what is happening in our culture right now. But if he were to go back and rewrite sections of the book to reflect what is happening now with social media, his case would only be stronger.
The Sleeper Curve
Mr. Johnson introduces the concept of the Sleeper Curve.
The Sleeper Curve: The most debased forms of mass diversion—video games and violent television dramas and juvenile sitcoms—turn out to be nutritional after all. For decades, we’ve worked under the assumption that mass culture follows a steadily declining path towards lowest-common-denominator standards, presumably because the “masses” want dumb, simple pleasures and big media companies want to give the masses what they want. But in fact, the exact opposite is happening: the culture is getting more intellectually demanding, not less.
The rest of the book makes the case for why the hypothesis has merit.
This works for me on an abundance of levels.
I haven’t made the complete transition yet, but I finally found some language and discussion that is in alignment with how I feel.
AI Will Save the World
There, I said it. We are on the fertile verge of understanding how to use AI to our benefit like never before: to astronomically increase not just our intellectual intelligence, but our emotional and social intelligence.
People often ask me about the future of AI. Most people believe AI is dangerous and will cause irreparable damage to humanity.
The exact opposite is happening. AI—more specifically AEI—will be a tool humanity uses to increase emotional and social intelligence like we have never imagined.
Years ago we were sharing stories about our children. I was recounting to Natalie my favorite funny stories about her. She shared with me a funny story about Miles. This little animation is my attempt to keep that memory alive in animation form.
I hope it is close to what you told me Nat.
We recently moved to Korea.
We are adapting quickly. What an adventure.
My great friend and mentor Doc Searls posted a poignant eulogy to Leonard Cohen.
I had no idea he felt the same way I do about his music.
Through the soundtrack of my life, nobody else taught more about how to be a man, a lover, and a human being with one foot in the temporary world and the other in eternity.
I’ve listened over and over to all of his albums. I especially like “The Future” and “Ten New Songs.” His new album—“Songs from a Room” is also fabulous.
His music played a huge role in getting me through many a tough time. Here is “A Thousand Kisses Deep”:
The ponies run,
the girls are young,
The odds are there to beat.
You win a while, and then it’s done –
Your little winning streak.
And summoned now to deal
With your invincible defeat,
You live your life as if it’s real,
A Thousand Kisses Deep.
And sometimes when the night is slow,
The wretched and the meek,
We gather up our hearts and go,
A Thousand Kisses Deep.
Excellent TED Talk on how the Blockchain technology will play a role in managing trust and identity.
Rachel Botsman, who defines trust as a “confident relationship to the unknown,” is studying how technology is transforming the social glue of society.
Human beings have an incredible ability to take trust leaps.
She then introduces the concept of “climbing the trust stack.”
She then posits that we are going through a massive change in the trust model, from an institutionalized model to a distributed model.
She goes on to say that the blockchain technology will play a major role in how we effect digital trust. So much so, that the trust stack can be simplified, and the need for institutionalized trust intermediaries can sometimes be mitigated.
Watch the entire video. It is very enlightening and provides a clear and concise explanation of how the blockchain works.
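For readers who want the core idea in code form, here is a minimal sketch of the mechanism the video explains: each block commits to the previous one by hash, so tampering with any record breaks every later link. This is an illustration only, not a real chain (no consensus, no signatures).

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for record in ["alice->bob: 5", "bob->carol: 2"]:
    block = {"record": record, "prev_hash": prev}
    chain.append(block)
    prev = block_hash(block)

# Verifying the chain: recompute each hash and compare to the next link.
ok = all(chain[i + 1]["prev_hash"] == block_hash(chain[i])
         for i in range(len(chain) - 1))
print(ok)  # True for an untampered chain
```

Change any earlier record and the verification fails, which is exactly why the structure can stand in for an institutional trust intermediary.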
What Ms. Botsman omitted—clearly not intentionally—is the role sentiment analysis plays in the future of digital trust.
The Role of Sentiment Analysis and Trust
In the future, the ability to understand the sentiment or the “spectrum of intention” of another person or entity will be highly valuable in determining trust values.
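To make that concrete, here is a hypothetical sketch of how a sentiment signal might be folded into a trust value. The blending formula, the weight, and the score ranges are my own illustrative assumptions, not part of any existing trust framework.

```python
def trust_value(history_score: float, sentiment_score: float,
                sentiment_weight: float = 0.3) -> float:
    """Blend past-behavior trust (0..1) with current sentiment (-1..1).

    Sentiment is mapped from [-1, 1] into [0, 1] before blending.
    """
    sentiment_component = (sentiment_score + 1.0) / 2.0
    return ((1.0 - sentiment_weight) * history_score
            + sentiment_weight * sentiment_component)

# An entity with a solid track record but hostile current sentiment
# ends up with a noticeably lower trust value than its history alone:
print(round(trust_value(history_score=0.9, sentiment_score=-0.8), 2))
```

The point of the sketch is only that sentiment becomes one more weighted input, alongside history, into a trust computation.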
In a recent series of blog posts by Phil Windley, the concept of a self-sovereign identity (SSI) system is introduced.
An SSI system’s purpose is just like it sounds: an independent identity system managed by its users.
The series leads up to last week’s announcement of Sovrin.org. (But I will get to that later.) Since these are blog posts, they appear in reverse chronological order on his site. So here they are in order:
- Service Integration Via a Distributed Ledger
- Governance for Distributed Ledgers
- An Internet for Identity
- Self-Sovereign Identity and the Legitimacy of Permissioned Ledgers
Some of these are lengthy. The topic is complicated, but fundamental to the future. Take your time. Don’t let TL;DR syndrome sidetrack you.
This is a transcription of a keynote speech given by Kim Cameron (Chief Identity Architect, Microsoft Corp.) at IIW (Internet Identity Workshop) in May of 2016. The beginning was not recorded; the transcription begins shortly after his talk commenced.
I have also pulled out a section about the blockchain. I would like to hear more about what Microsoft is planning to do using blockchain technology. (Can you address this, Kim?)
But we also have the evolution of decentralization. So to me things like the blockchain are hugely important because the blockchain represents a way of removing the exclusive power of the concentrators.
The Laws of Identity
… what was a digital subject? So a digital subject is a person or thing in the digital world that is being dealt with. That’s all. That’s also from the dictionary, a subject is a person or thing that is being dealt with. So a digital subject is one from the digital world.
And that (inaudible) number three, which was what is a digital identity?
A set of claims made by one digital subject about itself or of another digital subject. This is very important because it included self-asserted claims. In other words, this whole idea of sovereign identities, I can say things about myself, and those can be a set of valid claims. Or others can say things about me and those are also a set of claims. And all of these things can be part of the same system because they all follow the same theoretical model of claims made by one party about another party.
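To make the claims model concrete, here is a minimal sketch of the idea: every statement, whether self-asserted or made by a third party, is the same kind of object. The field names are my own illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    issuer: str    # the digital subject making the claim
    subject: str   # the digital subject the claim is about
    attribute: str
    value: str

# Self-asserted claim: issuer and subject are the same party.
self_asserted = Claim(issuer="kim", subject="kim",
                      attribute="name", value="Kim Cameron")

# A third-party claim about the same subject follows the identical model.
third_party = Claim(issuer="employer", subject="kim",
                    attribute="role", value="Identity Architect")

print(self_asserted.issuer == self_asserted.subject)  # True: self-asserted
print(third_party.issuer == third_party.subject)      # False: third-party
```

Both claims are instances of one type, which is the whole point: the system does not need a separate mechanism for sovereign, self-asserted statements.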
With that, we then set up the laws of identity. So the joke was, the reason we set up the laws was because the lawyers wouldn’t listen to us if we didn’t call them laws.
I was also a physicist—when I was born. (big laughter) This is really hard for me. I don’t normally think about the past, in fact I find it hard to think about the present. And I hate the future. I live in the architectural conditional. Time is a mere detail for me.
So the laws were like the laws of physics to me. They are propositions that you put forward; you find out if they are true by whether they explain reality or not. So you test them. And that’s the spirit in which they were proposed.
The first was to consider the context of what does it take to be successful building this system that spans the internet that includes China, that includes the US, it includes Republicans and Democrats. (laughter) We have to.
It’s a different question than what do you have to do to be popular. So the first law that we put forward was User Control and Consent. You can’t have an identity system in which actors in the sky make statements about people and you have no ability to control or intercept those. So it had to be driven through the user.
Secondly, Minimal Disclosure for a Constrained Use. In other words it wasn’t about splattering all of your personal information everywhere on every occasion. Certainly users have had their information (inaudible) “Spired?” But it isn’t something that they have chosen to do and that they would be willing to do in all of their identity interactions. It doesn’t qualify for operating as the system as a whole. So the system has to support minimal disclosure. The user just releases the minimal information that you need in order to do a transaction. That’s what we meant. Not anything about yourself.
Justifiable Parties [said] the only people who should receive information about a transaction are the people who are in it. This is partly in reference to one of Microsoft’s previous things, which was the “Passport” system, where people were frightened that Microsoft would be in the middle of the transaction between a user and the relying party he is going to. And so people said “What’s Microsoft doing there?” Just as they are saying why is Google in this transaction or Apple a part of this transaction? Not to say that users don’t use those systems with customers, but it doesn’t need to be part of the universal system. So justifiable parties was essential if we were going to get to a universal system.
Next was the idea of Directed Identity. So we proposed that there are two kinds of identity relations. One is our public ones. “Kim Cameron” is a public person who people know in some little microcosm of the world, vs. “Kim Cameron” as a private individual who has a relationship with some TV channel or whatever it might be. So you want to have identifiers. You don’t want to release your Social Security Number when you go to purchase a bag of popcorn. You don’t have to have a universal identifier for all of the things that you are doing. Now this question of universal identifiers vs. directed identifiers; we used to call them “omnidirectional” vs. “unidirectional” identifiers. It is very important in my view. It’s interesting again because of its relationship to the blockchain. Maybe we will discuss that later in the conference.
Another [law] was Pluralism of Operators and Technologies. There wasn’t going to be one technology and there wasn’t going to be one operator. And all of the technologies and operators could work together through this concept of claims. You could send a claim from any technology to any other technology and from any operator to any other operator so you can actually have a system that will work worldwide.
Human Integration. A lot of these systems were devised without remembering that the human was part of the system. For example when you have passwords of unlimited complexity and so on. Which is just crap because everybody writes them down and it’s an incredible security mess—horrible. We have to remember that you don’t only have to speak the protocol of your machine, you have to speak the protocol of the people who are using the systems. And you have to relate to them iconically and all of the ways that make systems safe and usable.
And finally—and as sort of a corollary to all of that—you need Consistent Experience Across Contexts. If every system works in a completely different way, the users won’t know how to use it.
What I Did Wrong
So that was the set of principles. More on this last one about contextual identity choices: what we would probably end up with—this is part of the human engineering level—would be the need to keep contextual separation. So that when I’m browsing the web, things should not be intermingled with and connected to what I do in my personal relationships; and I should be able to keep that separate from what I do with my wider community; and that should be kept separate from what I do professionally; and what I do there should be kept separate from my credit card and my citizen identity; and so on.
I don’t—and I think a lot of us don’t—use the same email address in all of our personal relationships that we use in our professional relationships and so on. It’s this idea of not one single identifier for us that follows us and links everything and makes it incredibly easy to track all of that and makes a huge security trap but rather to have the ability to have contextual separation.
I’m thinking—a cosmic ray hit me—of our poor brethren in the armed forces who are forced to use their social security number as their service identifier number, which is then plastered all over everything that they do. This is a loss-of-identity nightmare, right? It breaks five laws at once. Which is really a feat.
And of course it was the basis by which the Chinese hackers were able to break into the NSA and secure systems and get all of the information, including the background checks of all the people in the foreign service and everything. And link all of that together into the personal lives of those people. That’s very convenient when you know what people are doing in the military and what roles they hold in the espionage system, and you also know all of their personal information, so that you can blackmail them in the most effective fashion. It can completely break down the whole security system of the United States. So this was a real triumph of law breaking in the world of the laws of identity.
I’m going to conclude this by telling you what I did wrong. Can I blame that on us? No. I blame that on myself.
The Asymmetric Power of the Parties
The biggest mistake that I made was in not understanding the asymmetric power of the parties. We posited there were three parties: the user, the identity provider, and the relying party. The relying party is the entity that operates the web site, or that provides some kind of a service. A service provider is another way of saying the same thing.
A lot of us started to work on technology called information cards, and other things which would empower the user. Because the user would get a verified representation of their contextual identities. Here is my professional identity, here is my personal identity and so on. Google has done the best implementation of this and taken it to the market. I congratulate them for that.
We thought, if we build this, it’s going to make the life of the users better, and everything will be more secure and so therefore, the relying parties will love it. Guess what, the relying parties were really busy counting their clicks. I’m sorry, another click will lose me another 250 million people blah blah blah blah.
Who are the people deploying systems that are used? It’s the relying parties. If relying parties don’t like the system, the system goes nowhere. The whole system of identity emanates from the relying party. They drive the economy. So I had not understood how asymmetric the system really was in terms of the way the parties were structured.
We have people today doing excellent work on empowering the users as this was originally intended to do. I urge those people to bear in mind this lesson of the asymmetry of the relying parties and what that implies.
Multiplicity of Claims Sources
The second thing that I didn’t understand was the multiplicity of the sources of claims and what that implies. I was living inside the model of being a relying party, where the relying party chooses an identity provider, or a set of identity providers, and that’s the end of the story.
In fact the relying parties want lots of different claims. They want authentication claims, they want claims about the health of computers, they want claims about location, they want claims from their own systems: this person has this kind of an account, or this person is a gold member, or whatever it may be. They may want claims from an attribute verifier: the address of this person really is what they claim it is. They want claims from financial entities. We call them attribute verifiers.
They may want to take claims and send them out again into other systems: “We are getting this claim from over there; would you please put a copy over here, so that next time we can take it into our calculations.”
So I didn’t understand the multiplicity of the sources of claims or the multiplicity of the destination of claims.
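The multiplicity he describes can be sketched in a few lines: a relying party pulling claims of different types from different providers. The provider names and claim types here are illustrative assumptions, not from any real deployment.

```python
from typing import Callable, Dict

# Each provider is just a function returning claim-type -> value pairs.
providers: Dict[str, Callable[[str], Dict[str, str]]] = {
    "auth_service":     lambda user: {"authenticated": "true"},
    "device_health":    lambda user: {"patch_level": "current"},
    "address_verifier": lambda user: {"address_verified": "true"},
}

def gather_claims(user: str) -> Dict[str, str]:
    """Aggregate claims about one user from every configured provider."""
    claims: Dict[str, str] = {}
    for name, provider in providers.items():
        for claim_type, value in provider(user).items():
            claims[f"{name}.{claim_type}"] = value
    return claims

print(gather_claims("alice"))
```

Even this toy version shows why the single-identity-provider mental model breaks down: the relying party is really an aggregator over many claim sources.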
Per Party Relationship Management
The next thing was that I didn’t understand that that implies per party relationship management. Both users and relying parties need relationship management. There are many people in the room that have promoted this idea very well. I’m just trying to fit it in to these other issues.
The relationship management means that it is not merely a matter of getting these claims from different places, but of arbitrating them. Which ones do I need for different contexts? In other words, it’s not just a binary thing with your customers; it’s “I’ll treat you one way when I’m courting you and another way when I’m billing you.” Many different claims; we are calling these “user journeys.”
We need a system that functions on behalf of the relying party to navigate all of these different relationships with different claims providers and users. It becomes complicated because different users have different claims providers. They don’t all go to claims provider school and learn how to choose claims providers. They all choose the providers that they want. The system has to be adaptive to that.
And so that implies the need for people to operate and manage these complicated systems. Before I get there: I just mentioned that the relying party needs somebody to manage the relationships. Equally important—at least in terms of our souls and what we believe in for our children—is managing relationships on behalf of the individual user.
I’m stressing the relying party right now because they have all of the power. So we have to start with them in order to solve all of the other problems. That is my personal view.
Because of all of the complexity of dealing with all of these things, you need service providers to manage these relationships on behalf of people. Enterprises cannot do it. The level of complexity brought about by criminal activity is too high. The cost for enterprises to protect themselves, manage their relationships, and reduce risk is super high. And so you need to have providers that are specialized in doing that. Similarly, the complexity of running a PIMS (personal identity management system) is very high. So I think there will be service providers that run PIMS on behalf of the users.
Need For Service Operators To Be Separate From Content
However that leads to another law: the need for service operators to be separate from content. We want service providers who just operate the electrons, not determine the content of people’s identity or any of that stuff. There should be no connection to content. Service providers should be content free. And if they are content free and they operate a service, and the service is open and the service is based on standards, that creates sockets for many content providers to plug in. And that is where the next layer of wealth and of benefit to users comes from: not from the wiring of the electrons, but from having systems that allow users to choose what the user journeys are going to be, what the content is going to be, and then make it easy for everybody to plug in solutions that add content.
Benefits of Concentration and Decentralization
The last thing is the benefits of concentration and decentralization. Because of this need for specialization, it implies there will be concentration. In other words, there will be a certain number of operators who emerge in different geographies to provide the management of systems for relying parties and end users. Hopefully those systems will be content free, meaning they’re not using their control of the infrastructure to keep other innovators from introducing new capabilities, new content, and ways of doing things. In fact it will be a platform for all of that innovation. But still, this concentration represents a huge danger.
But we also have the evolution of decentralization. So to me things like the blockchain are hugely important because the blockchain represents a way of removing the exclusive power of the concentrators. And saying the truth about identities is stored in the blockchain in a way that is irrevocably owned by the individual or the owner of that identity. And so therefore the operator is just an operator. Disposable. And instead of that disposability just being something where you have to take the word of some VP—not that I’m at all skeptical of the words of VPs excepting my own—you don’t have to take anybody’s word for it because the truth is in the blockchain. It’s like the truth is in the pudding only better.
So that’s my list of things that I think we did wrong. I look at all of the wonderful things that are happening around personal information systems (PIMS), I believe in those very strongly. I believe that is absolutely the future of personal identity. However, I urge you to remember, to deploy those things, you need sockets inside the enterprise. You don’t want to have to go to the enterprise with new technology and hat in hand saying “Please change your architecture so we can put in this better content.”
What you want to do is to be able to go to the enterprise and say “We love your architecture and here is some content that plugs into it that is risk free. It meets all of your regulatory requirements and everything else because the infrastructure is running independently.” I think that would be something that would help the outcome that all of us are looking for.
At the most recent Internet Identity Workshop (#iiw), I was watching the #iiw twitter feed. As the keynote speaker (Kim Cameron) began, a barrage of insightful tweets from Kevin Marks ensued.
I looked over at Kevin Marks tapping away on his laptop. Something didn’t make sense. There was just no way the number of keystrokes he was making was matching the prodigious output of tweets.
So I asked him how in the world he was doing that. He happily revealed a tool for tweeting events that he and some others had developed.
Brilliant. Love it.
Since then, I have noticed others starting to use it with astounding results. Phil Windley, for example.