Tuesday, February 13, 2018

Podcast: Diane Mueller on evolving communities and OpenShift Commons


Diane Mueller is the community manager for OpenShift Origin, a CaaS and PaaS platform for cloud-native application development, deployment, and operations. In this podcast, she discusses how communities like the OpenShift Commons are evolving from groups that were singularly focused on code contributions to ones that focus increasingly on users and contributors in other areas.

Listen to MP3 [20:52]

Listen to OGG [20:52]


Gordon Haff: For the first time in way too long, I am here with Diane Mueller, who runs community development for Red Hat OpenShift. Diane's been spending a lot of time over the last year thinking about how communities should be built and how they should be allowed to evolve.

As a result, I think she has some interesting things to say about how open-source communities in general are evolving. Maybe we can start with a little bit of context of where you're coming from. We just had another very successful OpenShift Commons Gathering. What is OpenShift Commons and where are we right now?

Diane Mueller: I'm the director of community development -- which means nothing to anybody -- for OpenShift.

Basically, I've been in the open source world for almost 20 years now and worked on lots of different open standards, open source projects, and done a lot of thinking about what it is to develop a community that will sustain an open source project or move a standard forward in adoption.

Thinking about what it really takes to create -- as someone famous once said, the village that it takes to raise a child -- the village that it takes to create a global ecosystem that supports and sustains people using your project.

We hear a lot about trying to grab code contributions for an individual project and grow the maintainers of a project.

I've learned over the past four or five years working at Red Hat about a lot of different open-source community models. With OpenShift, we had some really interesting things happen that forced us to really open up our ideas about what it is to make a community that will sustain a project for the long haul. A lot of it was about collaboration with upstream projects.

What we did about two years ago was pivot the whole underpinnings of OpenShift to work with the Kubernetes community. If you don't know Kubernetes, Google it and find out about it -- cluster management and a whole lot more, at scale, for clouds. It is now the underpinning, along with a lot of other open source projects, of OpenShift Origin, which is the project that I manage for Red Hat.

What happened when we did that pivot was two things. One, we pivoted and we had an existing user base, so we had lots of people we had to educate about the redirection of our architecture and our project, and how to use it with new tools, new pods and a different approach to containers. All kinds of stuff.

We had this fire-hose of information that we had to get out there to people who were already using it. Then we had a whole new community of people -- the Kubernetes community and others -- that we were trying to figure out how to collaborate with.

Now, rather than just trying to get people to contribute to Origin, we were contributing back to upstream projects that were integral to our project's lifecycle. We had to keep in sync with other projects and figure out how to collaborate with Docker, then the OCI and all the other container standards.

Tons of other projects for monitoring within the CNCF: Prometheus, Grafeas, and other projects that are out there. We had to create a new model. That model we named -- we had to give it some sort of name -- we called it the Commons because Red Hat's near Boston, and I'm from that area.

Boston Common is a shared resource -- the grass where you bring your cows to graze, where you have your hipster farmers' market or whatever it is they do on Boston Common today besides protests and wonderful things -- but it's also right next to City Hall and all the state government offices.

The governance and all of the other pieces threaded together with the concept of a commons, so we created something called the OpenShift Commons. What we tried to do was open up our minds about what constituted the community.

There have been lots of other examples of people doing similar things. We reached out to the upstream communities and to the service providers who were building infrastructure that hosted OpenShift -- and we didn't ignore the contributors to our code base, because we love them.

We have AWS, Google, VMware, and a bazillion other cloud-hosting providers that are trying to deploy OpenShift and build managed service offerings on it. They contributed a whole lot of really good feedback, beyond what we're getting ourselves from hosting OpenShift Online originally, then OpenShift Dedicated, and openshift.io -- lots of things ourselves.

We learned a lot, and got a lot of feedback from there. Also all of the ISVs, all the consultants, all the database folks from Crunchy Data to Tigera. All kinds of other people who were trying to work with us, but didn't have a way in because, in the old model of open-source community management, you were only looking for those code contributors.

You only really talked to people when they were those people that were giving you code. We tried to flip this all on its head.

In addition to all these people who were adding services, or providing infrastructure, or working with us on this, there was that whole other community out there, the customers, the people who were actually deploying OpenShift Origin or deploying OpenShift Container Platform. How do we get their feedback back to the contributors, the engineers, the service providers on this topic?

What we did was try to create a new model, a new ecosystem that incorporated all of those different parties and perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The evangelist and developer advocate team doesn't scale. The PMs working at Red Hat don't scale.

We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.

If you go on our Slack channel for the Commons at OpenShift, you'll see a lot of people who are not Red Hatters talking to each other, giving support to each other as peers. A lot of it was about creating this peer-to-peer network model, wherein Red Hat got out of the way so the conversations could happen directly between, say, Amadeus and Clydesdale Bank.

It had different interesting aspects. We were trying to use all those tools to do that, but we also realized we couldn't just be virtual. We're here in London, and we just came through a day of what we call an OpenShift Commons Gathering -- which is not like a Red Hat Summit, which is huge, or a meetup, which is just a two-hour thing.

A gathering where we all come together, like on the Commons. We have conversations. We have panels of like people talking about stuff, panels of disparate people from different open source projects. We get updates from different upstream projects.

There's lots of stuff we do to try to make that virtual world work, because I think you do need the people connections.

As soon as I stopped trying to attract contributors to my project, things changed. I think we had five external organizations contributing to OpenShift Origin when I started on this project.
We've gone from 5 organizations to 70. That's huge in two years' time. That's a huge growth spurt. It just shows that giving people a voice, and making a space for people from different parts of our community and different parts of our ecosystem, actually drove code contribution.

I think we have to break the model of what we say is open-source community. We need new rules for revolutionaries. We need new open-source revolutionaries, and we need an evolution in how we think about who is part of the community. That's really what we've tried to do. What I've tried to do over the past couple of years is give the podium away.

Gordon: I'd like to take things up a level or abstract things away a level. We love abstracting things in computer science.

You've been talking about things that have been done specifically in the context of OpenShift. I'd like to probe about generalizing this. Why do you think things are different? Why has this been a good model for OpenShift, whereas it's not something that we've necessarily really seen in most other open source projects? Has the world changed? Is OpenShift different in some ways? Why?

Diane: We don't have enough time for me to rant completely, but I think, in some ways, in the old world, we would've thrown our code into a foundation. There's a lot of room for people and foundations for helping grow and incubate projects. Because of the pivot that we did to Kubernetes, we were forced to do something different.

In doing that and in breaking the model, or the rules, for what an open-source community is, we started finding a new framework. I think the framework is a more inclusive, diverse community and it allows us to really drive innovation.

If you abstract what we've done for OpenShift, you can apply it to almost any other open-source project -- maybe not one that's still in incubation. Some of the things that have come out of OpenShift we're slowly incubating into other projects, or moving functions and features from OpenShift into Kubernetes.

I think if you abstract what we've done, you can apply it to any existing open-source community. The foundations still, in some ways, play a nice role for giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways.

I think the lesson that we can learn from the experience and we can apply to other projects is to open up the community so that it includes feedback mechanisms and gives the podium away from, say, an enterprise like Red Hat that's pushing something so that we don't have a one-way street, or that we don't always have to play the mediator in any conversation.

What I'm trying to do is break down the barriers between the different people in the network and really try and help people make the connections across the community. To do that, the other secret ingredient in this model is there isn't any anonymity. There's a couple of rants I've done somewhere online about it.

I love GitHub, but everybody who signs up for GitHub pretty much uses a Gmail account or some super-secret email that I can't figure out. A lot of them do flag their GitHub profile with their affiliations, which is great. Then companies like Bitergia or Stackalytics scrape that, and we can figure out what organization they're from.

I think, in terms of community, in order to have a real trusted peer-to-peer network, you've got to know who you're talking to. The other aspect of these community models is that we ask people to be really clear about who their corporate masters are, who they're working for.

That doesn't mean that's your agenda, but it does mean that, if I'm working at some big financial institution, I know that I'm talking to another big financial institution and there may be rules that apply that I need to worry about -- privacy and things like that. Also, I can learn lots of things from people outside of my spectrum, my normal peer-to-peer network. That's really been very helpful.

If you took this, it's not OpenShift-specific. These are the lessons; this is the framework of what I call the Commons model. You can brand it any way you want. I think the core idea is having these shared resources, whether they are virtual -- Slack, mailing lists -- along with the lack of anonymity, and the ability to connect...

Here in London, one of the Commons' members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. A lot of the community gatherings and things are driven by the Commons' members.

When you let go of being the absolute owner of a project... Not that Red Hat has let go of OpenShift or anything, I'm not saying that. But if you let other people into the community and let them have a say, a voice, and an ability to be recognized in lots of different ways, that changes things.

We always talk in open source about how one of the best ways to get in is documentation, or logging an issue, or doing a pull request, or something like that... but there's a lot more than GitHub-centric little pull requests and issues. Those are good. Don't get me wrong.

I think maybe when I talk about SIGs, Special Interest Groups, the distinction we make with OpenShift SIGs is that they really are about sharing best practices, lessons learned, and what's in the stack that you're running. Maybe a machine learning on OpenShift SIG is, "Tell us what tools you're using. Share that with your peers," versus, say, a Kubernetes SIG, which is, "Tell me how I'm going to get Cluster Federation working," or, "How am I going to do service catalog work and contribute to that?" There's this conversational level of community that has to have somewhere to grow from. It has to be nurtured.

I think that's the role today in community development that I'm espousing, is nurturing those conversations.

Gordon: You mentioned Kubernetes a few times. You mentioned some things like OCI (Open Container Initiative) and CNCF (Cloud Native Computing Foundation). I've done a number of podcasts with various executive directors and other project heads within those foundations.

Obviously, OpenShift touches many of these things. Kubernetes, of course, but you've also mentioned Prometheus, and then there is Istio in the service mesh area, and a lot of other things. How do you think about and how do you interact and work with those foundations, which are often structured in a somewhat different way from OpenShift Commons?

Diane: I think the role of community management, or community development, is to create those connections and to make sure that the updates from those different foundations make it through unfiltered and get connected to all of the different pieces and parts. That's, in some ways, what the briefings we do are for.

We get people to talk from different projects, different foundations, different aspects of the community and make that information available in some ways. The foundations, like I said before, are really great around governance and incubation. What we're trying to do is create the conversations for cross-community collaboration. That's really the connective tissue of communities.

That's where communities really help: the collaboration that we do with them drives innovation back into OpenShift and out to the people who use OpenShift -- into their practices, their enterprises, their uses of OpenShift, and the tooling that they're building on top of it.

Gordon: I promise I won't share with your manager, but what are your goals and plans for next year?

Diane: [laughs] Oh, geez. We just did the metrics thing. I think I hate metrics but, yeah, shh, don't tell my boss. More face-to-face time. More gatherings, more regional gatherings. We did one in London. We'll be doing something in Copenhagen before KubeCon and something at Red Hat Summit. More customer stories.

When I say customer stories, it's not the Red Hat customers. It's people who are using the different pieces and parts of our ecosystem to get them to tell us what their full stack is. What are they using?

When I ask someone to share their story, their lessons learned, OpenShift may be just a component. Say -- I'm really hooked on ML and AI right now -- they're doing TensorFlow with JupyterHub, maybe they're off on the Kubeflow tangent, or maybe they're using Spark or something from the radanalytics team. We're trying to tease out the entire stack conversation.

I see a lot more of that happening this year. I think we've hit the tipping point of trying to teach people what Kubernetes is. We did that last year. We've still got some more of that to do because it keeps evolving every three months. This year I see as the year of workloads on OpenShift. What are we doing? Getting more of those stories out there.

The OpenShift Commons Gathering at Summit will be almost entirely case studies. Users talking about what's in their stack. What lessons did they learn? What are the best practices? Sharing the ideas they've worked out, just like we did here in London. There were some great stories here. Wait for the videos.

As for metrics, we'll probably double the number of organizations in the OpenShift Commons. That's really what we're seeing now -- just really rapid growth. I think I said it at the London event, but in the past hundred days we've had over 40 organizations join the OpenShift Commons, which is phenomenal.

It's not like I'm going out and recruiting these people. They're just naturally, finally finding the commons.openshift.org website buried under all the other Red Hat OpenShift properties. I really encourage you, if you want to share your stories and meet your peers, to come to commons.openshift.org and fill out the join form.

You'll get an email from me with all my contact information. Maybe not my home phone number, but everything else. I think growing the community of people who have production deployments, and adding more of those, is really the big deal this year. Keeping track of everything that's going on upstream, too. Lots of that going on.

Gordon: Thank you, Diane. That's a good note to close on I think.

Diane: Thank you very much. It's time for more coffee.

Thursday, January 11, 2018

When companies focus too much on risk

When we think about security in the context of DevSecOps, an important mantra is that we need to move from thinking about providing absolute security to thinking about managing risk in the context of business outcomes. Move from “Just say no” to saying yes to small risks if the tradeoffs appear to be worth it.

Let me illustrate this principle (in addition to a couple of other things) with an example that’s not drawn from the IT world. 

Right before the holidays, I took a last-minute quick trip to speak at and attend a couple of events being held next to the airport outside San Francisco. I loaded the bags up and off I went. As I was being dropped off at the airport, I pulled out my driver's license so I wouldn't be fumbling around with my wallet, got out of the car, and headed into the terminal.

Somehow, in the course of 50 feet, space aliens made off with my license. I called the limo company. The driver took a look. No luck. I still have absolutely no idea what happened.

Now, normally, frequent traveler me has a travel folio with passport, spare credit cards, cash, and other potentially useful travel backups. But because this was just a quick trip I figured I didn’t need it.

Lesson #1: You may not think you need a backup. Until you do.

(See also: "It's just a small code change. We don't need to re-run the test suite.")

Crap. Visions of my trip mashed up with mushroom clouds seemed appropriate. But I wandered over to the security line anyway.

Much to my surprise, my missing license turned out not to be a particularly serious problem. Yes, I had other ID, although nothing government issued. I had my boarding pass on my phone. I have TSA Pre. They gave me a thorough pat-down and inspected my luggage very carefully. I was both impressed and surprised that I was able to hop on my flight.

I thought I had dodged a bullet.

Land SFO. Take shuttle bus to hotel. I won’t name the hotel. Let’s just say it’s a lower end chain I wouldn’t normally stay at but, as I said, this was a very last minute trip and with my usual chains either sold out or going for $700 a night I figured I could put up with the relative dump for a couple of nights.

They have my reservation that I made online. Give them my credit card.

“ID please."

I tell my story. Consternation. “Umm, do you have a passport?"

Well, no. But I can show you any number of cards. Here’s my company badge with a photo. You can easily look me up online. 

Nope. It was starting to look as if I’d have to start dialing various friends in the Bay area to see if they had a spare couch I could use.

At this point, what I really wanted to say was: “Look. If I wanted to concoct some complicated scam for free hotel nights that somehow involved having 1.) an online reservation, 2.) a wallet full of cards including the credit card used to make the reservation, 3.) an official looking company ID, but 4.) no government-issued photo ID, I’m pretty sure it would be at an exotic resort and not an SFO fleabag."

To bring us back to the original topic, sure, you can always impose more hard and fast rules but you really need to think about whether inflexibly imposing those rules is the best approach for the business. 

Lesson #2: Think about whether potential risks justify the costs of eliminating them (which you can never fully do anyway)

In the end, I was able to check in. I didn’t say what I was thinking and we reached an agreement whereby I could pay cash, including a security deposit. (Fortunately, the dollar amount was small enough that I was able to withdraw what I needed from the ATM in the lobby.) Luckily, I did have my company ID with a photo; I don’t think they’d have let me stay with no photo ID at all—my face being all over the Web notwithstanding. 

So I do give some small amount of credit to the local manager for bending, however slightly, to what I have to assume are quite rigid corporate rules.

Lesson #3: Empower employees to do the right thing as much as possible

I was also pleasantly surprised how easy and relatively inexpensive ($25) it was to replace my driver’s license on the Massachusetts DMV site. Which brings us to our last lesson.

Lesson #4: If your policies and customer experience fail to meet the standards set by both the TSA and the Massachusetts DMV, I’m pretty sure you’re doing something wrong


Podcast: Talking Kubernetes community at CloudNativeCon

Wrapping up the week at CloudNativeCon, I sat down with Google’s Paris Pittman, Heptio’s Jorge Castro, and Microsoft’s Jaice Singer DuMars to talk about their roles as Kubernetes community leads. Kubernetes has become so successful in large part because of the strength of its community. In this podcast, we talk about mentorship, getting involved, and being a welcoming community. 

Listen to the MP3 [26:56]

Listen to the OGG [26:56]




Thursday, January 04, 2018

Podcast: HashiCorp's Armon Dadgar on "secret sprawl" and Vault


HashiCorp co-founder and CTO Armon Dadgar and I recorded this podcast at CloudNativeCon in Austin. In this podcast, we talk about the problem of secrets management, the changing nature of threats, the need to be secure by default, HashiCorp's Vault project, and Vault on Red Hat’s OpenShift.

The Vault project

OpenShift blog post on Vault integration

Listen to MP3 [17:40]

Listen to OGG [17:40]

Wednesday, January 03, 2018

Podcast: Heptio's Joe Beda talks Kubernetes


Heptio's CTO, Joe Beda, made the first public commit to Kubernetes. In this podcast he talks about ark (an open source project for Kubernetes disaster recovery), what made Kubernetes take off, why companies are moving so quickly on cloud-native, and where Kubernetes is headed.

From Joe’s perspective, companies realize that they’re at an inflection point and they have a sense of urgency about how they need to move quicker than in the past. That’s one of the factors that have driven container adoption at a faster pace than, say, virtualization even though the latter was arguably less disruptive to existing processes and infrastructure.

The next phase will be making the most effective use of Kubernetes clusters once they’re in place. Integrating them with other systems. Delivering value to customers on top of them. 

  • ark, a utility from Heptio for managing disaster recovery of Kubernetes clusters, as discussed on the podcast

Listen to podcast in MP3 [12:42]

Listen to podcast in OGG [12:42]

Podcast: Kris Borchers of the JS Foundation

At CloudNativeCon in Austin, the Executive Director of the JS Foundation, Kris Borchers, sat down with me to talk about a variety of JS Foundation projects such as architect, jQuery, and JerryScript. We also discussed why JavaScript has been so successful; Kris chalks it up in part to its approachability and argues that, even if it’s not a perfect language, what language is? We also talked about the community which he describes as very energetic and always tweaking the ecosystem around the language (of which jQuery provides a great example).

Listen to the podcast [17:21] MP3

Listen to the podcast [17:21] OGG

Cloud-native data management with Kasten CEO Niraj Tolia


Kasten recently emerged from stealth and has released kanister, an extensible open-source framework for application-level data management on Kubernetes--as well as a commercial offering that builds on it. In this podcast, CEO Niraj Tolia discusses the increased need to manage storage used with Kubernetes at scale, the challenges of complex distributed apps, and the need for app-centric approaches that make infrastructure "boring" (to use my colleague Clayton Coleman's term).

 Listen to podcast in MP3 format [12:19]

 Listen to podcast in OGG format [12:19]

Blogging update

I realize that most people these days find posts by following social media links rather than using RSS or otherwise subscribing to blogs. But, just in case anyone has been wondering why I've been pretty much inactive of late on this site, here goes.

I’ve actually been writing (and podcasting, albeit in spurts) as much as ever this past year but the bulk of that writing is increasingly spread across a variety of other channels and that’s likely to continue to be the case. You should encounter the links if you follow me on twitter (@ghaff). You can also go to my Bitmasons website where you’ll find links to most of the channels I publish on. (Occasional pieces will also be on The Register this year.) I may start publishing monthly digests here. We’ll see. 

Furthermore, although this has long been a mostly professional site, I'm splitting out (hopefully expanded) food, photography, and travel content to a new WordPress site, which will hopefully kick off in the coming weeks.

Podcasts from CloudNativeCon/KubeCon

Frederic Branczyk on Prometheus, metrics for cloud-native [13:14] MP3

Frederic discusses Prometheus, including the goals of the project, a focus on simplicity, the distinction between metrics and logging, what's new in 2.0, and what's coming.

Marc Holmes of Chef Software on automation in a containerized cloud-native world [11:07] MP3

Chef Software's Marc Holmes talks about the global shift from automating infrastructure to automating applications, establishing a foundation for chaos engineering, and shifting security left.

Ben Sigelman of LightStep on OpenTracing, monitoring, and the challenges of distributed systems [19:41] MP3

Ben Sigelman worked on Dapper and Monarch at Google. He's now the co-founder of LightStep. At CloudNativeCon in Austin, we took the opportunity to cover a wide range of issues including the key challenges of distributed systems, the sometimes confusing monitoring/logging/tracing/etc. landscape, how monoliths evolve to microservices, Conway's Law, OpenTracing, and more.

Talking Jaeger with Yuri Shkuro and Pavol Loffay [11:08] MP3

Jaeger is an OpenTracing-compatible open source distributed tracing system that came out of Uber. In this podcast, I sat down with Yuri Shkuro of Uber and Pavol Loffay of Red Hat to discuss the state of Jaeger, what problems it solves, where it fits with the broader cloud-native ecosystem, the Jaeger community, and where it's headed.

See also:

Cloud-native data management with Kasten CEO Niraj Tolia

Kris Borchers of the JS Foundation

Heptio's Joe Beda talks Kubernetes

HashiCorp's Armon Dadgar on "secret sprawl" and Vault

Monday, October 16, 2017

Eclipse IoT with Ian Skerrett of the Eclipse Foundation


For many people, Eclipse may not be the first open source organization that pops to mind when thinking about Internet-of-Things (IoT) projects. But, in fact, Eclipse hosts 28 projects that touch on a wide range of needs for organizations doing IoT projects. In September, I was attending the Eclipse IoT day and RedMonk's ThingMonk conference in London and had a chance to sit down with Ian Skerrett. Ian heads marketing for Eclipse and we spoke about Eclipse's IoT projects and how Eclipse thinks about IoT more broadly.

(Apologies for the audio quality not being quite up to my usual standards. We had to record this outside, it was a windy day, and I didn’t have any recording gear to mitigate the wind noise.)

Listen to podcast:



Gordon Haff:  Hi everyone. This is Gordon Haff, Technology Evangelist with Red Hat. You're listening to the Cloudy Chat podcast. I'm in London at ThingMonk, RedMonk's annual event on IoT. I'm here at Shoreditch Studios, and I'm pleased to be here with Ian Skerrett, who runs marketing for the Eclipse Foundation. Welcome, Ian.

Ian Skerrett:  Great to be here, Gordon. Thanks for having me.

Gordon:  Ian, could you start off by giving a little bit of background for yourself, how you came to be at Eclipse, and what your role is at Eclipse?

Ian:  As you said, I'm working at the Eclipse Foundation, and my official title is the Vice President of Marketing. I help bring together the community and talk about what the community is doing around the different open source projects. Lots of people don't know, but Eclipse is a huge community of well over 300 projects.

My specific role is marketing, but I also deal a lot with IoT and the IoT community, which we're going to talk a bit more about. I probably spend half of my time on IoT right now.

Gordon:  Eclipse has come a long way. I'm sure everybody listening to this has heard of Eclipse, but they probably think in terms of the IDE or some other specific development things. As you say, you have a very large presence in IoT today.

Before we get into the details of, specifically, what Eclipse is doing in IoT, you gave a talk yesterday where you discussed things like Industry 4.0. That might be a useful context in order to talk about what Eclipse is doing, specifically, in IoT.

Ian:  Industry 4.0 is a term that probably started in Germany. It's reflective of how things get made, if you think about a factory floor. Lots of people know about the Industrial Revolution. The first Industrial Revolution, that was the start of steam power, steam powered machines.

The second Industrial Revolution was mass production. You think about car manufacturing, mass production around that.

The third Industrial Revolution is credited with the automation and robotics that have gone into factory plants. Where Industry 4.0 comes from -- what people talk about as the fourth Industrial Revolution -- is how you start connecting all those factory floors, the machinery and automation on them, to enterprise IT systems.

It's a term that comes out of Germany. Germany is the hub of industrial automation. That's where the machines that make the other machines that go onto factory floors often start off. It started there, and it's become an industry term that's been adopted globally to talk about how you connect up what a lot of people call operational technology -- the technology that's on the factory floor -- to enterprise IT technology.

That's the context and one industry that IoT plays into. IoT is a general term that plays into pretty well every industry out there, be it automotive, wearables, healthcare, or industrial automation and manufacturing.

Gordon:  Before we go further, you touched on something, which our listeners will be interested in. One of the questions I hear a lot is we've been connecting things up in factories forever. We've had various types of industrial control systems. Many of these systems have been connected within modern factories. From your personal perspective, from Eclipse's perspective, how is IoT different besides being a cool new term?

Ian:  You're right. A lot of factories are connected. A lot aren't, though. There's a term called SCADA, Supervisory Control and Data Acquisition. A SCADA system would often be how you use IT technology to monitor a factory. Often, SCADA systems, and even the factory floor technology, are very proprietary, very siloed. It's hard to change it. It's hard to adapt it.

One of the drivers of Industry 4.0 is that the manufacturing process is trying to be more flexible. Right now, when you set up a manufacturing run, you need to manufacture hundreds of thousands of that piece, of that unit. What they want, to meet customer demand, is to have manufacturing processes that are flexible enough that you can actually do a lot size of one.

You do an entire manufacturing process of just one unit and then change it as quickly as possible. To do that, it has to be much more flexible. Software has to be much more flexible. It has to be distributed, where the actual machines have the intelligence of what to do. That's a very new way of doing the software that's being put out on the factory floor. That's where the industry is going.

Gordon:  It's a little bit like cloud has “been there since time‑sharing,” but obviously, it's qualitatively different today.

Ian:  IoT and terms like embedded system development, a lot of this is being done. It's taking it to the next step where you can actually interoperate, where that information can flow, having multiple factories talking to each other, doing data analysis across multiple factories, and just having a lot more flexibility in those systems.

Gordon:  Let's talk about Eclipse. As we said in the beginning, there's activity across the whole IoT spectrum. A lot of people's attention is focused on more consumer‑type stuff, SmartHome, Roombas, what have you. Obviously, there's factories, there's transportation, logistics work. Out of all that, how is Eclipse thinking about where you want to put wood behind the arrow, where you want to focus?

Ian:  Our goal is that, when developers are building IoT solutions, they have a set of building blocks that they can draw on. An analogy I like to make is that, in the early days of the web, I used to look at IBM. IBM used to have four different HTTP servers that they were trying to commercialize. They wanted to do e‑business, they wanted to have e‑commerce, and you need a web server to do that.

If you wanted a website, you needed a web server, and they were trying to commercialize that. It turned out that having an HTTP server to sell had no value to any customer, so they shut them all down in favor of Apache.

What we can see in IoT is that there's core fundamental technology that every IoT solution needs, and that you want in open source so everyone can use it and it gets broader adoption. The way we think about it is that an IoT solution involves three stacks of software.

You need a stack of software for constrained devices, the MCU [microcontroller unit] level sensor type of hardware.

There's usually some type of a gateway that aggregates information and data from the different sensors and sends it to the network. You need a software stack for there.

And then a software stack for the IoT platform on the backend, on the cloud. Our goal is to be the provider of the underlying technology for those three stacks of software.

Gordon:  One of the things about IoT that seems to be a good fit with open source is this idea of modularity and gluing things together. Without going into details here, we've seen a number of things over the past year [suggesting that] a monolithic software stack that handles everything isn't the best answer.

Ian:  IoT is so broad. When you go to getting a solution done, there are very specific things that need to be built, but there's a lot of underlying technology that can be reused, like messaging protocols, like gateway services. It needs to be a modular approach to scale up to the different use cases that are out there.

It isn't just one big stack of software behind this. Certainly, the broader software world is moving toward microservices, and IoT needs to be moving in that direction too.

Gordon:  Now, I know you love all your children, and we don't want this to be a two‑hour podcast and bore our listeners, but what are some highlights of some of the projects at Eclipse?

Ian:  They're all amazing.


Ian:  No, I'm just kidding. In reality, there's a range of maturity across the IoT projects. Let's start with what I would consider our more mature projects, the ones being used in production today. Certainly around MQTT, the messaging protocol for IoT, we have two projects, Eclipse Mosquitto and Eclipse Paho. Mosquitto is the broker, Paho is the client for MQTT. Those are widely used, widely successful.

If you're doing MQTT, you probably want to look at Paho and Mosquitto. MQTT has been a great success in terms of being a standard that's widely adopted in IoT, with open source implementations.

Gordon:  For our listeners, what is MQTT?

Ian:  It's a pub/sub, publish‑subscribe, messaging protocol that was designed specifically for oil and gas pipeline monitoring, where power management and network latency are really important. You can't have an HTTP client that's always pinging home. It's got to be pub/sub.
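[To make the pub/sub model concrete, here's a toy in-memory broker in Python. This is not MQTT itself or the Paho API; `TinyBroker` and the topic names are made up for illustration. The point is the decoupling Ian describes: publishers and subscribers only ever talk to the broker, so a battery-powered sensor can publish a reading and go back to sleep instead of keeping an HTTP connection alive.]

```python
from collections import defaultdict

class TinyBroker:
    """A toy in-memory publish/subscribe broker. Real MQTT brokers like
    Mosquitto add topic wildcards, QoS levels, and retained messages;
    this only illustrates the basic pattern."""

    def __init__(self):
        # topic -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # The publisher never knows who (if anyone) is listening.
        for callback in self._subscribers[topic]:
            callback(topic, payload)

broker = TinyBroker()
readings = []
broker.subscribe("pipeline/pressure", lambda topic, payload: readings.append(payload))
broker.publish("pipeline/pressure", 101.3)  # readings is now [101.3]
```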

Another project that's very mature and well‑used is Eclipse Kura, which is an IoT gateway. Essentially, it provides northbound and southbound connectivity. There's a lot of different protocols. There are Bluetooth, Modbus, CAN bus, OPC UA. We just keep on growing the list.

Instead of you writing your own connectivity, Kura provides that and then connects you to the network via satellite, via Ethernet, or anything. It handles all that and things like firewall configuration. It handles network latency; if the network goes down, it will store messages until it comes back up. Kura is another well‑used project from Eclipse.

We have a project in the home automation area called Eclipse SmartHome. In the maker community, there is a project called openHAB, which is based on Eclipse SmartHome. It's a very well‑used and successful community.

Where we've been working, probably for the last 18 months, is on the cloud platform side. We have a new project called Eclipse Kapua, which is taking a microservices approach to providing different services for an IoT cloud platform. That's up and coming. It's not being deployed yet, but Eurotech and Red Hat are very active in it.

One of my more intriguing projects is Eclipse hawkBit, which is for software updates. From a security perspective, if you can't update your device, you've got a huge security hole. Most of the IoT security disasters that you read about come down to the fact that the devices couldn't be updated. That's the problem hawkBit addresses.

HawkBit, basically, manages the backend of how you do scalable updates across your IoT system. That's interesting.

We've got 28 different projects. Do you want me to keep going, or shall we stop there?


Gordon:  That's probably good for right now. I'm going to cap this off with a pretty typical question when I do these podcasts around community‑run open source projects. How do people find out more? How do people get involved? And if they're not coders but are still interested, how can they get involved?

Ian:  How to find out? We've got a website, iot.eclipse.org. Go there. That's our developer portal. We've written a white paper called the "Three Software Stacks for IoT." I'd recommend reading that to get a sense of what our view of IoT is from a software perspective.

I'd start there. We have some good getting‑started documentation to help people try some of the software out, and we have sandbox servers for a lot of our backend server projects.

If you want to, for instance, try out MQTT, you don't have to install Mosquitto. We run a sandbox Mosquitto instance that's open and anyone can use. For device management, we have a device management server running called Eclipse Leshan. They're there to try.

As with any open source, you try it out, you give feedback, you open bugs. If you find a bug and have a fix for it, do a pull request. It's very typical open source, and we encourage that. Certainly, if there are people that want to join the community, we have a working group where organizations come together and collaborate on bringing together these projects for IoT solution developers.

If you want to start a project, if you have some technology that you think is relevant to IoT, come talk to us. We're certainly an open community and welcome other people to join us.

Gordon:  Thank you. Anything you'd like to add?

Ian:  No. It's great to see you again at ThingMonk. I'm going to put in a plug for ThingMonk here because, I don't know about you, but I think it's an amazing show. At pretty well every talk, I learn something. I go to a lot of IoT shows, and I usually don't learn much at an IoT show, but at ThingMonk, I always do.

Gordon:  I'll put in a plug for RedMonk's other events as well. Great analyst firm; they do a lot of work with developers. Good guys. I used to work with a couple of them. Definitely check them out.

Tuesday, July 18, 2017

Red Hat's Mark Wagner on Hyperledger performance work

Mark Wagner Red Hat

Mark Wagner is a performance engineer at Red Hat. He heads the Hyperledger Performance and Scalability Working Group. In this podcast, he discusses how he approaches distributed ledger performance and what we should expect to see as this technology evolves.


Listen to MP3 [13:45]

Listen to OGG [13:45]


Podcast with Brian Behlendorf

Hyperledger Announces Performance and Scalability Working Group

MIT Tech Review Business of Blockchain event

MIT Sloan CIO Symposium: AI and blockchain's long games


Gordon Haff:   I'm sitting here with Senior Principal Performance Engineer, Mark Wagner. What we're going to talk about today is blockchain, Hyperledger, and some of the performance work that Mark's been doing around there. Mark, first introduce yourself.

Mark Wagner:  My name is Mark Wagner. I'm in my 10th year here at Red Hat. My degree, from many years ago, was in hardware. I switched to software. I got the bug to do performance work when I saw the performance improvements I could make in software, in how things ran.

Here at Red Hat, I've worked on everything from the kernel up through OpenShift and OpenStack at all the layers. My most recent assignment is in the blockchain area.

Gordon: A lot of people probably associate blockchain with Bitcoin. What is blockchain, really?

Mark: Blockchain itself is a technology where things are distributed. At a really high level, I like to think of it more as a distributed database. Bitcoin is a particular implementation of it. In general, blockchain ‑‑ and there's also a thing called distributed ledgers ‑‑ they're fairly similar in concept, but blockchain itself is more for straight financial things like Bitcoin.

Distributed ledgers are coming up a lot more in their uses across many different vertical markets, such as healthcare, asset tracking, IoT, and of course the financial markets, commodity trading, things like that.

Gordon: As we've really seen over the last, I don't know, year or two years, there's still a lot of shaking out going on in terms of exactly what the use case is here, which of course makes the job for people like you harder when you don't know what the ultimate objectives necessarily are.

Mark: Yes. It's shaking out in terms of both new verticals being added and multiple implementations going on right now ‑‑ in a sense competing, but in many cases they're aimed at different verticals, so, in a true sense, they're not really competing, per se.

Gordon: Now you're working in Hyperledger. Introduce Hyperledger.

Mark: Hyperledger is a project in the Linux Foundation to bring open source distributed ledgers out into the world. I've been involved in it since December of 2016. Red Hat's been a member for two years.

There are multiple projects within Hyperledger. The two main ones that people know are Fabric, from IBM, and Sawtooth, from Intel. There's a bunch of smaller projects as well that complement these technologies.

Both Fabric and Sawtooth are distributed ledger implementations with different consensus models and things like that, and getting to the point where they can do pluggable consensus models.

One of the things that no one was doing at Hyperledger, and where I felt I could help across all the projects, is performance and scalability. People see out in the world that the Bitcoin and Ethereum stuff is not scaling. When it hits scale issues, things go poorly.

I proposed in April that we have a Performance and Scale Working Group to go off, investigate this, and come up with some tests and ways to measure. It passed unanimously, but the scope was actually expanded from what I proposed: they didn't want it to just focus on Hyperledger but to focus industry‑wide.

Since that time, I've been in touch with the Enterprise Ethereum Association, with the person leading their performance and scale work. In principle, we've agreed to work together.

Gordon: I'm interested in some of the specific things that you've found in this performance and scale work. Maybe before we go into detail there, at a high level, where do you see the scalability and performance challenges with blockchain and distributed ledgers?

It's obviously early days. You've done performance work with the Linux kernel, which is about tweaking for very small increments of performance, where distributed ledgers are obviously in a very different place today.

Mark: The design of the original Bitcoin, and those technologies, is based on what's called proof of work. They give you a large cryptographic puzzle you need to go solve in order to prove that you actually did the work.

There were consensus algorithms based on that, on who got there first and who got to add to the chain. It quickly got to the point where people started using GPU offload or going off and fabricating FPGAs directly to give themselves an advantage in doing this. There's a quick example of performance and scalability.

The other issue is, because it's consensus, everything gets shared. Everyone has to agree on it, or some large percentage has to agree on it. As the network grows, more and more nodes are involved in this, and it becomes a big scalability problem.
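[A toy sketch of the proof-of-work idea Mark describes, in Python. The function names and the hex-prefix difficulty rule are simplifications for illustration; real Bitcoin compares a double SHA-256 hash against a numeric target. The asymmetry is the point: finding the nonce takes many hash attempts, while checking it takes one, which is why specialized GPU and FPGA hardware pays off for miners.]

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce) starts
    with `difficulty` hex zeros. Expected work grows 16x per extra zero."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: a single hash, versus many to find the nonce."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("example transactions", difficulty=4)
assert verify("example transactions", nonce, difficulty=4)
```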

Gordon: Let's talk about the work that you've done so far. What have you been focusing on?

Mark: The Performance and Scale Working Group is really just getting started. Right now, we're trying to go through and identify three or four different vertical use cases. We're focusing more on distributed ledgers and their smart contracts, things like that.

Right now, we're trying to go through and identify use cases that another working group within Hyperledger has already defined. We can take those and say, "These are the key characteristics," because some of these vertical markets may not need the most transactions per second. It may be more about how much you can scale.

The other interesting thing is that there are two types of implementations ‑‑ or deployments, I should say. One is permissioned, where you need permission; that's called private. The other is permissionless, which is public. Bitcoin is public. Anyone can join.

In the permissioned case, you need to be invited, so you can control the scale that way.

Gordon: Also, there's at least some discussion that in private distributed ledgers or blockchains, it's even possible you may not need proof of work.

Mark: Yes, a lot of it is moving now towards proof of stake, where you prove that you're a stakeholder. There's less computation involved.

Gordon: You mentioned at the beginning of this podcast that you can think of a distributed ledger as almost a form of ‑‑ not to put words in your mouth ‑‑ distributed database. There are obviously very different performance characteristics, at least as things stand now.

How do you see that interplay of distributed databases substituting for, or instead of, or what do you see the relationship between distributed ledgers, blockchain, and distributed databases?

Mark: Distributed databases are more focused on sharing data, spreading it out. With blockchain and distributed ledgers, everyone has the same copy. People are looking at sharding now, where a node can go off and handle just a specific set of transactions.

It's also referred to as collections. Certain sets of nodes can go off and be involved in some transactions, others in different ones. That's one way to get around the performance and scalability problems.
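[As a rough illustration of the sharding idea Mark mentions ‑‑ a made-up sketch, not how Fabric or Sawtooth actually implement collections ‑‑ transactions can be deterministically routed to a subset of nodes by hashing their IDs, so no single node has to validate the whole ledger.]

```python
import hashlib

NUM_SHARDS = 4

def shard_for(tx_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically assign a transaction to a shard (a subset of
    nodes). Every node in that shard sees the same transactions, but
    no node processes everything."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

# Route some transactions; each shard's node set validates only its own.
ledger_shards = {i: [] for i in range(NUM_SHARDS)}
for tx in ["tx-001", "tx-002", "tx-003", "tx-004"]:
    ledger_shards[shard_for(tx)].append(tx)
```

Because the routing is a pure function of the transaction ID, any node can independently compute which shard owns a transaction without coordination.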

Gordon: If you're looking back from, I don't know, five years from now or whatever, what do you think have been some of your toughest challenges that you've had to overcome in terms of improving the performance, usability, and so forth of distributed ledgers?

Mark: Five years from now, we'll look back and think how naive we were in trying to solve some of these issues. Again, there will be a big difference between public and private, but as for consensus algorithms, I think they'll keep evolving. The amount of work needed will change.

The other thing people will need to start thinking about is storage. How are you going to store all this data over time?

Gordon: What's Red Hat's interest in this?

Mark: Red Hat, right now, we have customers coming to us saying, "We like blockchain, but we'd like it to run on your enterprise‑class software."

One of the things I'm trying to do with Hyperledger is get things running on our OpenShift platform with Kubernetes with a RHEL base underneath it, looking at being able to contribute software so that it can become part of a CI environment once we get further along.

In general, right now our goal is to offer multiple blockchain solutions. Internally, we're figuring out what that means and how to do that. Right now, we're working with several.

Gordon: To your earlier "how naive we were" comment, that's one of the things we absolutely see today around blockchain, around distributed ledger, is really everyone's trying to figure out, "Where is this going to be a great fit?" Conversely, "We really thought we could use it for that? What were we thinking?"

I was at an event about a month ago, and Irving Wladawsky‑Berger, who basically ran Linux strategy for IBM when they were first developing a Linux strategy, was up in the panel on blockchain at the MIT Sloan CIO Symposium.

I think he's fairly representative of a lot of people who think that blockchain could very possibly be a very big deal, while also recognizing it will take time. Irving said we were probably at the equivalent of the 1980s Internet. It takes a long time to build out these kinds of infrastructures.

Mark: That sums it up pretty well. One of the other things I heard when I first started with Hyperledger back in December at a conference in New York, was everyone agreed we're at the peak of the hype cycle, but also that it's still going to be very big.

Gordon: Actually, somebody made a very similar comment to me. It might have been the same event. They asked me where I thought it was in the hype cycle.

I actually looked up a Gartner "Emerging Technologies Hype Cycle" report, and guess where blockchain was in that report? [At the peak of the hype cycle.] It scares me a little bit, but to tell you the truth, I agree with Gartner; that was certainly their opinion.

Mark: Through my interactions here at Red Hat, I'm seeing lots of interest from healthcare, insurance. You can use this to cut down on paperwork for insurance companies, things like that.

"Here's the list of treatments that you're eligible for." The doctor goes in, says, "I did these," and he just gets paid. There's no going back through the review process, things like that.

Gordon: There certainly seem to be a lot of potential use cases out there. You have to believe that at least some of those are going to pan out.

Mark: Right.

Wednesday, June 14, 2017

From Pots and Vats to Programs and Apps: Coming Soon!

Packagebook frontonly

Monkigras in London this past January had packaging as its theme, both in the software and the more meta sense. James Governor graciously extended me an invitation to speak. The resulting talk, A Short History of Packaging (video here), described how packaging in both retail and software has evolved from the functional to something that improves the buying and using experience.

I’d been looking to write a new book for a while. I knew I wanted it to relate broadly to the containers, microservices, and cloud space, but I didn’t really have an angle. I considered just rewriting my three-year-old Computing Next, but so much had changed and so much remained in flux that the timing didn’t feel right.

But packaging! Now there was an angle and one that I could work on together with my Red Hat colleague William Henry (who is a senior consulting engineer and works on DevOps strategy). 

So that’s what we did. We set a target to do a book signing at Red Hat Summit in early June. We mostly made it. We signed a pre-release version and have spent the past month or so working off-and-on to polish up the contents, give colleagues the opportunity to review, and update a few things based on various industry announcements and events. 

We’re finally just about ready to go. I expect to have the paperback version orderable through Amazon by about mid-July. We’ll also be making a free PDF version available at around the same time; distribution details TBD. Given the free PDF I don’t expect to release a Kindle version. The layout of the book (sidebars, footnotes, some amount of graphics) doesn’t really lend itself to this format and it would be extra work.

The thesis of the book is that if you think about packaging broadly, it highlights critical tradeoffs.

Unpackaged and unbundled components offer ultimate flexibility, control, and customization. Packaging and bundling can simplify and improve usability—but potentially at the cost of constraining choice and future options.

Bundling can also create products that are interesting, useful, and economically viable in a way the fully disaggregated individual components may not be. Think newspapers, financial instruments, and numerous telecommunications services examples.

Open source software, composed of the ultimately malleable bits that can be modified and redistributed, offers near-infinite choice.

Yet, many software users and consumers desire a more opinionated, bundled, and yes, packaged experience—trading off choice for convenience.

This last point is a critical tension around open source software and, for lack of a better umbrella term, “the cloud” in the current era. Which makes understanding the role that packaging may play not just important, but a necessity. Ultimately, packaging enables open source to create the convenience and the ease of use that users want without giving up on innovation, community-driven development, and user control.

Monday, June 05, 2017

MIT Sloan CIO Symposium: AI and blockchain's long games

I wrote earlier about the broad transformation themes at the MIT Sloan CIO Symposium last month. Today, I’m going to wrap up by taking a look at a few of the specific panels over the course of the day.

MIT Sloan CIO Symposium May 2017

Artificial Intelligence

Andrew McAfee and Erik Brynjolfsson are regulars at this event. Their bestselling The Second Machine Age focuses on the impact of automation and artificial intelligence on the future of work and on technological, societal, and economic progress. Their new book, Machine, Platform, Crowd: Harnessing Our Digital Future, will be available later this month. Another panel, moderated by the MIT Media Lab’s Joi Ito, featured discussions on the theme “Putting AI to Work.”

Like blockchain, which I’ll get to in a bit, a common thread seemed to be something along the lines of AI and machine learning being supremely important, but with much still to do. In general, panelists avoided getting too specific about timelines. Ryan Gariepy, CTO & Co-Founder, Clearpath & OTTO Motors, put the timing on the majority of truck driving jobs going away at a “generation.” My overall takeaway is that AI is probably one of those things where many people are predicting greater short-term effects than are warranted while underestimating the effects over the longer term.

For example, Prof. Josh Tenenbaum, Professor, Department of Brain and Cognitive Sciences at MIT, highlighted the difference between pattern recognition and modeling. He noted that "most of how children learn is not driven by pattern recognition," but it’s mostly pattern recognition where AI is having an impact on the market today. He went on to say that "other parts like common sense understanding we are quite far from. We’re quite a way from a conversation. The narrative that expert systems are a thing of the past is wrong. You can't build a system that beats the world's best Go players without thinking about Go. You can't build a self-driving car without driving."


Users of common “personal assistants” like Alexa have probably experienced something similar. Like a call center agent reading from a script, these assistants can recognize voices and act on simple commands quite well. But get off script, especially in any way that requires an understanding of human behaviors, and their limitations quickly become clear.

McAfee also pointed to the confluence of AI with communications technology as a major factor driving rapid change. As he puts it, “two huge things are happening simultaneously: the spurt of AI and machine learning systems and, it’s easy to forget about this, but over the past decade we have connected humanity for the first time. Put the two together and we are in very, very new territory."

As they do in their books, McAfee and Brynjolfsson also touched on the economic changes that these technological shifts could drive. For example, Brynjolfsson highlighted how “the underlying dynamics when you can produce things at near-zero marginal cost does tend to lead to winner takes all. The great decoupling of median wages is because a lot of the benefits have become much more concentrated."

Both suggested that government policy will eventually have to play a part. As McAfee put it "times of great change are not calm times. There’s a concentration of wealth and economic activity. Concentration has some nice benefits but it leaves a lot behind.” With respect to Universal Basic Income, however, McAfee added that "a check from the government doesn't magically knit communities back together. There's a role for smart policies and smart government."


The tone of the Trusted Data: The Role of Blockchain, Secure Identity, and Encryption panel was similar to that at Technology Review’s all-day blockchain event the prior month that I wrote about here. I’d sum it up in three bullets:

  • It’s potentially very important
  • Cryptocurrency existence proofs notwithstanding, as a foundational technology it’s still very early days
  • Use cases and architectures are still fluid


Sandy Pentland, who moderated the panel, laid out some of the reasons why blockchain may be both useful and challenging. For example, he noted that "data sharing is really difficult. You need to combine data from different sources that you may not own." On the other hand, "auditability is increasingly important. Are you being fair? You need to show the decisions made. Existing architectures are just not up to it. You probably need consensus mechanisms like blockchain."

Hu Liang, Senior Managing Director and Head of the Emerging Technologies Center at State Street, pointed out how some of the basic architectural elements of blockchain are still being debated. He went so far as to say that blockchain is "just a fairly vague concept." For example, he wondered whether "some things that made bitcoin popular may not be needed in an institutional world. Banks exist and regulators exist. You still get encryption, auditability, but do you need proof of work?"

Finally, Irving Wladawsky-Berger, Fellow, MIT Initiative on the Digital Economy (and long-time IBMer), framed blockchain as a transactional mechanism. He noted that "what the internet never dealt with directly was transactions. Transactions are things that, when they go wrong, people get really, really, really upset. When transactions are part of interactions between different institutions, it is a pain. The promise of blockchain over time is to be a record of transactions. The benefits are gigantic. It could do for transactional systems what the internet does for connections."

But it will be a slow process. “The internet of the early to mid ’90s was really crappy. The internet we are really happy with today took another 15 years to get there. We're at the toddler stage. Foundational technologies take a long time."



Top. Jason Pontin, Andrew McAfee, and Erik Brynjolfsson [Gordon Haff]

Prof. Josh Tenenbaum, Professor, Department of Brain and Cognitive Sciences, MIT [Gordon Haff]

Irving Wladawsky-Berger [Gordon Haff]

Tuesday, May 30, 2017

Transformation at MIT Sloan CIO Symposium 2017

When I attend an event such as the MIT Sloan CIO Symposium, as I did in Cambridge yesterday, I find myself thinking about common threads and touch points. A naive perusal of the agenda might have suggested a somewhat disparate set of emerging strategies and technologies. Digital transformation. AI. Man and Machine. Blockchain. IoT. Cloud.

However, patterns emerged. We’re in such an interesting period of technology adoption and company transformation precisely because things that may at first seem loosely coupled turn out to reinforce each other, thereby leading to powerful (and possibly unpredictable) outcomes. IoT is about data. AI is, at least in part, about taking interesting actions based on data. Cloud is about infrastructure that can support new applications for better customer experiences and more efficient operations. Blockchain may turn into a mechanism for better connecting organizations and the data they want to share. And so forth.

We observe similar patterns at many levels of technology stacks and throughout technology processes these days. New business imperatives require new types of applications. Delivering and operating these applications requires DevOps. Their deployment demands new open hybrid infrastructures based on software-defined infrastructure and container platforms. (Which is why I spend much of my day job at Red Hat involved with platforms like OpenStack and OpenShift.)

That it’s all connected is perhaps the primary theme the event reinforced. In this post, I focus on the “big picture” discussions around digital transformation. I’ll cover specific technologies such as AI in a future piece.


Digital transformation on two dimensions

Peter Weill, Chairman, MIT Sloan Center for Information Systems Research (CISR) led off the day with some research that will be made public over the next few months. This research identified change associated with digital transformation as taking place on two different dimensions: customer experience (e.g. NPS) and operational efficiency (e.g. cost to income). Companies that transform on both dimensions ("future ready firms”) have a net margin 16 points higher than the industry average.

Weill emphasized that these transformations are not just about technology. “Everyone in the room is struggling with the cultural change question,” he said. As Jeanne Ross, also of CISR, put it later in the day: “Digital transformation is not about technology. It’s about redesigning your value prop and that means redesigning your company.”

Finally, it’s worth noting that these two dimensions mirror the two aspects of IT transformation that we see more broadly. The “bimodal IT” or two-speed IT model has somewhat fallen out of fashion; it’s often seen as an overly rigid model that de-emphasizes the modernization of legacy systems. I don’t really agree although I get the argument.

Nonetheless, the CISR research highlights a key point: Both IT optimization and next-generation infrastructures and applications are important. However, they require different approaches. They both need to be part of an overall strategy connecting the business and the business’ technology. But the specific tactics needed to optimize and to transform are different and can’t be treated as part of a single go-forward motion. 

Four decisions

Ross broke down designing for digital transformation into four decisions.

The first is defining a “vision for improving the lives of customers” because this affects which innovations you’ll pursue.

The second decision is defining whether you’ll be primarily focused on customer engagement (market driven) or digitized solutions (product driven).

The third decision is defining which digital capabilities you’ll pursue. Ross said that "the operational backbone is the baseline. But you also need a digital services platform that relies on cloud, mobility, and analytics.” Such a platform emphasizes "developing components rapidly and stitching them together.” (The evolution toward microservices, DevOps, and container platforms is very much in response to these sorts of requirements.)

Finally, digital transformation is fundamentally about how the business is architected. "Pre-digital, we architected for efficiency. In a digital economy, we architect for speed and innovation. This requires empowering and partnering.” (From the vendor side, this also mirrors the shift we see from a historical emphasis on individual products to an emphasis on ecosystems and communities. These are perhaps especially important within open source software, but it’s a broader observation.)

Stay tuned for future posts about some of the more technology-oriented discussions at the event.

Friday, May 05, 2017

Podcast: Dr. André Baumgart of EasiER AG on jumpstarting app dev with Red Hat's Open Innovation Labs


EasiER AG used Red Hat's Open Innovation Labs to create a new category of healthcare product to improve the emergency room experience. Dr. André Baumgart is one of the founders of EasiER AG, and he sat down at Red Hat Summit with me and my Red Hat colleague Jeremy Brown to talk about his experiences with the process. (Spoiler alert: He’s a big fan.)

Among the key points he makes is that the program focused on business outcomes and problems that need to be solved rather than technology stacks.

Also in this podcast, Jeremy Brown shares some highlights about what’s different about the Open Innovation Labs from a more traditional consulting engagement. 

Link to MP3 (12:43)

Link to OGG (12:43)



Thursday, April 20, 2017

Cautiously optimistic on blockchain at MIT

Blockchain has certain similarities to a number of other emerging technologies like IoT and cloud-native computing broadly. There’s a lot of hype and there’s conflation of different facets or use cases that aren’t necessarily all that related to each other. I won’t say that MIT Technology Review’s Business of Blockchain event at the Media Lab on April 18 avoided those traps entirely. But overall it did far better than average in providing a lucid and balanced perspective. In this post, I share some of the more interesting themes, discussion points, and statements from the day.

It’s very early

Joi Ito, MIT Media Lab

Joi Ito, the Director of the MIT Media Lab, captured what was probably the best description of the overall sentiment about blockchain adoption when he said that we "should have a cautious but optimistic view.” He went on to say that “it's a long game” and that we should also "be prepared for quite a bit of change.”

In spite of this, he observed that there was a huge amount of investment going on. Asked why, he essentially shrugged and suggested that it was like the Internet boom, where VCs and others felt they had to be part of the gold rush. “It’s about the money.” He summed up by saying "we're investing like it's 1998 but it's more like 1989."

The role of standards

In Ito’s view, standards will play an important role, and open standards are one of the things that we should pay attention to. However, Ito also drew further on the analogy between blockchain and the Internet when he went on to say that "where we standardize isn't necessarily a foregone conclusion” and that once you lock in on a layer (such as IP in the case of the Internet), it’s harder to innovate in that space.

As an example of the ongoing architectural discussion, he noted that there are "huge arguments if contracts should be a separate layer” yet we "can't really be interoperable until we agree on what goes in which layer."

Use cases

Most of the discussion revolved around payment systems and, to a somewhat lesser degree, supply chain (e.g. provenance tracking).

In addition to cryptocurrencies (with greater or lesser degrees of anonymity), payment systems also encompass using blockchains to reduce the cost of intermediaries or to eliminate them entirely. This could in principle better enable micropayments or payment systems for individuals who are currently unbanked. Robleh Ali, a research scientist in MIT’s Digital Currency Initiative, noted that there’s “very little competition in the financial sector. It’s hard to enter for regulatory and other reasons." In his opinion, even if blockchain-based payment systems didn’t eliminate the role of banks, moving money outside the financial system would put pressure on them to reduce fees.

A couple of other well-worn blockchain examples involve supply chains. Everledger uses blockchain to track features such as diamond cut and quality, as well as monitoring diamonds from war zones. Another recent example comes from IBM and Maersk, who say that they are using blockchain to "manage transactions among a network of shippers, freight forwarders, ocean carriers, ports and customs authorities.”

(IBM has been very involved with the Hyperledger Project, which my employer Red Hat is also a member of. For more background on Hyperledger, check out my podcast and discussion with Brian Behlendorf—who also spoke at this event—from a couple months back.)

It’s at least plausible that supply chain could be a good fit for blockchain. There’s a lot of interest in better tracking assets as they flow through a web of disconnected entities. And it’s an area that doesn’t have much in the way of well-established governing entities or standardized practices and systems. 

Amber Baldet, JP Morgan


Identity was a topic that kept coming up in various forms. Amber Baldet of JP Morgan went so far as to say: “If we get identity wrong, it will undermine everything else. Who owns our identity? You or the government? How do you transfer identity?"

In a lunchtime discussion, Michael Casey of MIT noted that “knowing that we can trust whoever is going to transact is going to be a fundamental question.” But he went on to ask “how do we bring back in privacy given that with big data we can start to connect, say, bitcoin identities.”

The other big identity tradeoff familiar to anyone who deals with security was also front and center: namely, how do we balance ease of use against security, anonymity, and privacy? In the words of one speaker, it's “the harsh tradeoff between making it easy and making it self-sovereign."

Chris Ferris of IBM asked “how do you secure and protect private keys? Maybe there’s some third-party custodian but then you're getting back to the idea of trusted third parties. Regulatory regimes and governments will have to figure out how to accommodate anonymity."

Tradeoffs and the real world

Which is as good a point as any to connect blockchain to the world that we live in.

As Dan Elitzer, IDEO coLAB, commented "if we move to a system where the easiest thing is to do things completely anonymously, regulators and law enforcement will lose the ability to track financial transactions and they'll turn to other methods like mass surveillance.” Furthermore, many of the problems that exist with title registries, provenance tracking, the unbanked poor, etc. etc. aren’t clearly the result of technology failure. Given the will and the money to address them in a systematic way that avoids corruption, monopolistic behaviors, and legal/regulatory disputes, there’s a lot that could be done in the absence of blockchains.

To take one fairly simple example that I was discussing with a colleague at the event: a lot of the information associated with deeds and titles in the US sits in the dusty file cabinets of county clerks not because we lack the technology to digitize and centralize it. It’s there because of some combination of inertia, lack of a compelling need to do things differently, and perhaps a generalized fear of centralizing data. In other situations, “inefficiencies” (perhaps involving bribes) and lack of transparency are even more likely to be seen as features rather than bugs by at least some of the participants. Furthermore, just because something is entered into an immutable blockchain doesn’t mean it’s true.
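That last point is worth making concrete. “Immutable” in practice means tamper-evident: each entry’s hash covers the previous entry’s hash, so rewriting history breaks the chain, but nothing stops a false record from being entered correctly in the first place. Here is a minimal Python sketch of that property (the deed records and names are invented for illustration):

```python
import hashlib
import json

def chain(records):
    """Link records into a tamper-evident chain: each entry's hash
    covers its own content plus the previous entry's hash."""
    prev = "0" * 64  # genesis marker
    entries = []
    for rec in records:
        digest = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()
        ).hexdigest()
        entries.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return entries

def verify(entries):
    """Recompute every hash; any edit to an earlier record breaks
    all links that follow it."""
    prev = "0" * 64
    for e in entries:
        expect = hashlib.sha256(
            (prev + json.dumps(e["record"], sort_keys=True)).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expect:
            return False
        prev = e["hash"]
    return True

ledger = chain([{"deed": "lot 12", "owner": "Alice"},
                {"deed": "lot 12", "owner": "Bob"}])
assert verify(ledger)
ledger[0]["record"]["owner"] = "Mallory"  # tamper with history
assert not verify(ledger)
```

Note that if “Mallory” had been written as the original entry, `verify` would happily pass: the chain proves the history hasn’t changed, not that it was ever true.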

Summing up

A few speakers alluded to how bitcoin has served as something of an existence proof for the blockchain concept. As Neha Narula, Director of Research for the Digital Currency Initiative (DCI) at the MIT Media Lab, put it, bitcoin has "been out there for eight years and it hasn't been cracked,” even though “novel cryptographic protocols are usually fragile and hard to get right."

At the same time, there’s a lot of work still required around issues like scalability, identity, how to govern consensus, and adjudicating differences between code and spec. (If the code is “supposed” to do one thing and it actually does another, which one governs?) And there are broader questions. Some I’ve covered above. There are also fundamental questions like: Are permissioned and permission-less (i.e. public) blockchains really different, or are they variations of the same thing? What are the escape hatches for smart contracts in the event of the inevitable bugs? What alternatives are there to proof of work? Where do monetary policy and cryptocurrency intersect?
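For context on that last alternative question, proof of work rests on an asymmetry: finding a nonce that produces a hash meeting a difficulty target is expensive, while checking a claimed nonce takes a single hash. A toy Python sketch of the idea (the hex-prefix difficulty rule and the block string are simplifications, not Bitcoin’s actual target arithmetic):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that sha256(data:nonce) starts with
    `difficulty` zero hex digits. Costly to find, cheap to check."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def check(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash computation."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("block 42", difficulty=4)
assert check("block 42", nonce, 4)
```

Each extra zero of difficulty multiplies the expected search cost by 16 while leaving verification cost unchanged; alternatives such as proof of stake aim to replace that raw compute expenditure with other scarce resources.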

I come back to Joi Ito’s “cautious but optimistic.”



Top: Joi Ito, Director MIT Media Lab

Bottom: Amber Baldet, Executive Director, Blockchain Program Lead, J.P. Morgan

by Gordon Haff

Wednesday, April 19, 2017

DevOps Culture: continuous improvement for Digital Transformation

Marshmallow winners

In contrast to even tightly run enterprise software practices, the speed at which big Internet businesses such as Amazon and Netflix can enhance, update, and tune their customer-facing services can be eye-opening. Yet a minuscule number of these deployments cause any kind of outage. These companies are different from more traditional businesses in many ways. Nonetheless, they set benchmarks for what is possible.

Enterprise IT organizations must do likewise if they’re to rapidly create and iterate on the new types of digital services needed to succeed in the marketplace today. Customers demand anywhere, anytime self-service transactions, and winning businesses meet those demands better than their competition. Operational decisions within organizations must also increasingly be informed by data and analytics, requiring another whole set of applications and data sets.

Amazon and Netflix got to where they are using DevOps. DevOps touches many different aspects of the software development, delivery, and operations process. But, at a high level, it can be thought of as applying open source principles and practices to automation, platform design, and culture. The goal is to make the overall process associated with software faster, more flexible, and incremental. Ideas like continuous improvement based on metrics and data, which have transformed manufacturing in many industries, are at the heart of the DevOps concept.

Development tools and other technologies are certainly part of DevOps. 

Pervasive and consistent automation is often used as a way to jumpstart DevOps in an organization. Playbooks that encode complex multi-part tasks improve both speed and consistency. Automation can also improve security by reducing the number of error-prone manual processes. Even narrowly targeted uses of automation are a highly effective way for organizations to gain immediate value from DevOps.
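In practice, playbooks are usually written in a dedicated tool such as Ansible, but the contract that makes them safe to rerun is idempotency: a task describes a desired state and only acts (and reports a change) when the system isn’t already in that state. A minimal Python sketch of that contract, with an invented config file and setting:

```python
import os
import tempfile

def ensure_line(path: str, line: str) -> bool:
    """Idempotent task: ensure `line` is present in the file at `path`.
    Returns True if a change was made, False if already compliant --
    analogous to the 'changed'/'ok' statuses playbook tasks report."""
    try:
        with open(path) as f:
            if line in (existing.rstrip("\n") for existing in f):
                return False  # already in desired state; do nothing
    except FileNotFoundError:
        pass  # no file yet; appending below will create it
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Hypothetical example: enforce an SSH hardening setting.
cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
assert ensure_line(cfg, "PermitRootLogin no") is True   # first run: changed
assert ensure_line(cfg, "PermitRootLogin no") is False  # rerun: no-op
```

Because reruns are no-ops, the same playbook can be applied on a schedule to detect and correct drift, which is where the consistency and security benefits come from.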

Modern application platforms, such as those based on containers, can also enable more modular software architectures and provide a flexible foundation for implementing DevOps. At the organizational level, a container platform allows for appropriate ownership of the technology stack and processes, reducing hand-offs and the costly change coordination that comes with them. 

However, even with the best tools and platforms in place, DevOps initiatives will fail unless an organization develops the right kind of culture. One of the key transformational elements is developing trust among developers, operations, IT management, and business owners through openness and accountability. In addition to being a source of innovative tooling, open source serves as a great model for the iterative development, open collaboration, and transparent communities that DevOps requires to succeed.

Ultimately, DevOps becomes most effective when its principles pervade an organization rather than being limited to developer and IT operations roles. This includes putting the incentives in place to encourage experimentation and (fast) failure, transparency in decision-making, and reward systems that encourage trust and cooperation. The rich communication flows that characterize many distributed open source projects are likewise important to both DevOps initiatives and modern organizations more broadly.

Shifting culture is always challenging and often needs to be an evolution. For example, Target CIO Mike McNamara noted in a recent interview that “What you come up against is: ‘My area can’t be agile because…’ It’s a natural resistance to change – and in some mission-critical areas, the concerns are warranted. So in those areas, we started developing releases in an agile manner but still released in a controlled environment. As teams got more comfortable with the process and the tools that support continuous integration and continuous deployment, they just naturally started becoming more and more agile.”

At the same time, there’s an increasingly widespread recognition that IT must respond to the needs of and partner with the lines of business--and that DevOps is an integral part of that redefined IT role. As Robert Reeves, the CTO of Datical, puts it: “With DevOps, we now have proof that IT can and does impact market capitalization of the company. We should staff accordingly.”


Photo credit: http://marshmallowchallenge.com/Welcome.html

Monday, April 17, 2017

DevSecOps at Red Hat Summit 2017


We’re starting to hear “DevSecOps" mentioned a lot. The term causes some DevOps purists to roll their eyes and insist that security has always been part of DevOps. If you press hard enough, they may even pull out a well-thumbed copy of The Phoenix Project by Gene Kim et al. [1] and point to the many passages which discuss making security part of the process from the beginning rather than a big barrier at the end.

But the reality is that security often remains something apart from DevOps even today, even though DevOps should include continuously integrating and automating security at scale. That's at least in part because security and compliance have historically operated largely in their own world. At a DevOpsDays event last year, one senior security professional even told me that this was the first non-security-specific IT event he had ever attended.

With that context, I’d like to point you to a session that my colleague William Henry and I will be giving at Red Hat Summit on May 3. In DevSecOps the open source way we’ll discuss how the IT environment has changed across both development and operations. Think characteristics and technologies like microservices, component reuse, automation, pervasive access, immutability, flexible deploys, rapid tech churn, software-defined everything, a much faster pace, and containers.

Risk has to be managed across all of these. (Which is also a change. Historically, we tended to talk in terms of eliminating risk while today it’s more about managing risk in a business context.)

Doing so requires securing the software assets that get built as well as the machinery doing the building. It requires securing the development process from the source code through the rest of the software supply chain. It requires securing deployments and ongoing operations continuously, not just at a point in time. And it requires securing both the application and the container platform APIs.

We hope to see you at our talk. But whether or not you can make it to see us specifically, we hope that you can make it to Red Hat Summit in Boston from May 2-4. I’m also going to put in a plug for the OpenShift Commons Gathering on the day before (Monday, May 1).


[1] If you’re reading this, you’ve almost certainly heard of The Phoenix Project. But, if not, it’s a fable of sorts about making IT more flexible, effective, and agile. It’s widely cited as one of the source texts for the DevOps movement.

Thursday, April 13, 2017

Links for 04-13-2017