Open Source, Collaboration and the Future of Software

November 6, 2019

Every January I start my open source year at a conference called FOSDEM. It’s the biggest open source developer summit in Europe, possibly the world, where 8,000 geeks gather. I always wear dresses and have been known to rock up in a leopard-skin coat, so I tend to stick out there. Each year, I have a great time for a different reason and learn something new. It’s the best place I know to meet new folk in open source. Newcomers are welcome, even lawyers, who can drop in to the “LegalDev” room on the Saturday.

This year, the thing that really stood out for me was the closing keynote by Jon “Maddog” Hall. Of course, I love the fact that I am part of a community where it’s fine to go by the name Maddog, but what really struck me was the focus of Maddog’s talk: “things that are 50 this year”.

His list of things that hit 50 in 2019 included Unix, which he was involved in developing, and Linus Torvalds, the creator of the Linux operating system. I, too, was going to be 50 this year, and Maddog’s focus really made me think about how far we have come in those 50 years and where the future will take us.


Amanda Brock with Linus Torvalds

We live in a world, and in a way, that is just unrecognisable from the one I was born into. Going back to the start of my journey, in 1969, Neil Armstrong took one small step for humankind. It has been impossible to miss the 50th anniversary celebrations this summer. Also in 1969, John and Yoko staged their love-in at the Hilton Hotel Amsterdam, the Stonewall riots happened at a little bar in Greenwich Village and Woodstock’s music melted into the summer of love. A few weeks later I popped into the world.

Of course, tech didn’t come into existence in 1969. Charles Babbage had proposed his Difference Engine in 1822 and went on to design the Analytical Engine, for which Ada Lovelace created the first algorithm, recognising that it could do more than calculate. In 1936, Alan Turing described his Turing Machine, a theoretical device able to compute anything computable.

By 1969 we had seen a GUI and the invention of the computer chip. That year a group of developers at Bell Labs produced Unix, an operating system that was later rewritten in C, making it portable across multiple platforms. Maddog, as I mentioned, started developing Unix that year, and it became the operating system of choice for mainframe computers.

(If you’re the kind of geek who is interested in the nuances of the Unix wars, then I suggest you listen to Maddog’s talk on YouTube for the inside track here.)

Also in 1969, the Request for Comments (RFC) series was created by Steve Crocker to record the notes of the development of ARPANET, and RFCs have since become the official specifications of the communications protocols and procedures of the internet. They form a recorded, 50-year-old discussion in which all the technologies needed for the internet were discussed and explored. In 1969, the open source community – who, until the term was coined in 1998, didn’t even know that’s what they were – had already started work on the internet.

In 1990 Tim Berners-Lee developed HTML at CERN, and by the early 90s the geeks were revelling in the internet while I was learning to word-process at university and write essays and dissertations on an Apple computer. In 1996 Sergey Brin and Larry Page developed the Google search engine at Stanford. In 2004 Mozilla’s browser took on Microsoft and we saw the launch of Facebook.

So much change and innovation, but to my mind there are three significant developments that have brought us to the place we are in today.

The first is the internet combined with decent connectivity.

I spent 25 years as a lawyer. I worked in-house mostly, and in my first employed in-house role I worked for Dixons, the high street retailer. I joined them in early 1999 to become part of what would be a 12-strong management team that ran Freeserve until its IPO, and I was the first lawyer employed to work on Freeserve. I had spent several years advising on “internet law” thanks to Professor Ian Walden’s 1996 course on internet and law, which I took as part of a Masters at QM. I believe it was the first internet law course in the UK.

I guess the reason I was employed by Freeserve was that I was one of the first lawyers to really work on internet law in the UK. It was the wild west and those of us involved often spoke to each other to work out how to manage this new world where the law just didn’t exist. I spoke often with people like Mike Miller, the then Head of Legal at Amazon, who has gone on to greater things with Amazon and is still with them today. By doing this, and lobbying the legislators, we shaped the future of internet regulation.

Freeserve became a FTSE 100 company and held a higher valuation than our parent DSG, despite their owning 1,000 stores and having 40,000 employees. The valuation on float was between £1.3 billion and £1.5 billion. We were 12 people with an outsourcing contract with Planet (who later became Energis) for the actual provision of internet infrastructure. By 2000 Freeserve had 2,000,000 users: more than BT. We were unique in Europe in having more users than the incumbent telecoms provider. It was the beginning of the era when start-up valuations didn’t match their proven revenue, and we were one of the first unicorns in the world. We disrupted a relatively new market with our free model. The world was changing.

Freeserve was ground-breaking in a number of ways. It disrupted the Internet Service Provider (ISP) market, which was itself relatively new, by not charging for internet connection. Instead it utilised a model based on advertising and linking deals, with adverts popping up on the home page. It was also one of the first ISPs to offer ADSL rather than relying solely on dial-up over the twisted copper pair.

Nice story, but what’s the relevance? Firstly, we had to deal with an incumbent that didn’t exactly welcome us, a pattern we have seen repeated across innovation and disruption ever since.

Secondly, connectivity. I recall being taken to an innovation lab in the Marais in Paris, in early 2000, where we were shown an “Internet Fridge”. This was an early IoT device that could undertake stock control and create a shopping list, arrange replenishment of necessary items by ordering from a store, make payment, arrange delivery, and even plan a healthy diet and suggest recipes for you.

Remember, this was 20 years ago.  

The internet fridge, like tablet devices at that time, didn’t catch on for more than a decade, and the first movers’ businesses undoubtedly went under.

So, why didn’t they become successful? 

Primarily connectivity. Yes, we had the internet, but it was old-school dial up. Think back: do you remember that noise and constant connection dropping? The infrastructure just wasn’t stable enough or capacious enough to make these devices work. 

But it was coming… the future was close and getting closer. 

The second significant change was the advent of the smart phone.

It’s hard to believe that we only really had feature phones for a decade or so. Apple brought their smart phone to market in 2007. Frankly it was just a phone, a camera and a better receiver of email than the BlackBerry. But that was just the beginning.

Apps seemed a bit pointless for the first few years, when everyone and their dog rushed to create their own. It was only when viable services for app-based delivery appeared (like Uber, Laundrapp and DriveNow, to name but three on my home screen), the practicalities of pick-up and delivery were ironed out, and digital content became streamed and stored collaboratively, that the smart phone and the tablet really came into their own.

By 2008 Android had its first release and the brand-new smartphone market was itself disrupted – but that’s another story that I will come back to a little later.

The third key development that has made change possible is Cloud. 

Cloud is something we accept in business and in our personal lives today. There’s no longer a need to host or run our own software. We let experts manage the infrastructure and give us SaaS, collaboration tools and storage.

While GDPR and a suite of “standards”, particularly around security, have potentially made our data safer, we know there are risks when enterprise data and applications are stored in the Cloud, and many of those risks have not changed much in the last 5-10 years. Regardless of those risks, we can see the value of buying on a SaaS or Cloud model.

Early in 2008 I joined Canonical, an open source software company, to work on the open source Ubuntu operating system. We went from 3% to 70% market share almost overnight as the open source operating system of choice in the Cloud. Even today Ubuntu is the world’s most popular operating system across the public cloud, powering services from AWS to Azure. Again, our open source technology allowed the development of cloud but also disrupted the market.

In 2010, through my role as GC at Canonical, I was part of the OpenStack Drafting Committee that set up that foundational software project. We pulled together the legals for a collaboration that allowed companies like Intel, Arm, Canonical and Red Hat to fund and work collaboratively on projects within open source whilst still differentiating themselves further up the software stack.

This structure of foundational software built and shared in a collaborative way, yet promoting competition, is what the German economist Mirko Boehm has labelled “Continuous Non-Differentiated Collaboration”. It creates a state of what Open Invention Network’s CEO, Keith Bergelt, calls “Co-Opetition”.

This behaviour, and the collaboration that open source brings at an enterprise level, are critical to the development of these technologies: the internet, the smart phone and the cloud.

My friend Stephen Walli, a Principal Program Manager at Microsoft, is in his second stint at Microsoft and is actively engaged in Microsoft’s open source business. As well as Maddog’s talk I would recommend Stephen’s keynote from the Linux Foundation’s Open Source Summit in Edinburgh in 2018. I encourage you to listen to this here if you have a spare 13 minutes.

Stephen Walli 

If you listen, you will understand that for Microsoft the journey to open source is a question of culture, not strategy. Microsoft’s culture has changed to an open one.

What changed? 

First, the industry changed. 

With the rise of the cloud, all cloud infrastructure projects are open source licensed, and the workloads running on those clouds, all that ML and AI, have a foundation built on open source licensed projects.

Secondly, Microsoft’s customers have changed. 

15 years ago, customers of Microsoft would have said they didn’t use open source. They were wrong, but that’s what they believed. Today they know and acknowledge that they use it, and they request open source from their suppliers.

Thirdly, Microsoft can’t hire a developer who isn’t open source savvy.

All developers today are open source savvy at some level: they have knowledge of the open methodologies and participate in key open source projects. At this point in history developers love open source and regard the open and collaborative way of working as a norm or a given. 

Today all companies are, to a certain degree, technology companies. They create digital goods and services, use software to create their products, deliver their products through digital channels, or have products that are consumed through digital technologies.

In the past many businesses ran on models of control and fear, approaches that stifled innovation. Now, “software rules the world and open source is eating its lunch.” While collaboration can be uncomfortable, we are all driving towards the same goals in a position of innovation interdependence, so collaboration is the logical next step.

Whether they like it or not, all companies need developers, need a digital and software strategy and need to understand how to hire, manage and collaborate with the development communities to achieve their goals. 

These communities want to work in an open and collaborative way. 

The largest proprietary software company in the world, Microsoft, has gone on a journey over the last decade to become open. Businesses generally now need to embark on a similar journey as part of their path to successful digitisation and to follow that same road to collaboration.

Open source strategist Allison Randal gave an Alan Turing lecture earlier this year – another talk that I highly recommend. It’s based on her PhD research and takes the listener on a journey through the development of innovation, noting along the way that today all innovation relies on open source methodology to succeed.

You will often hear the thought leaders of open source talk about the inevitability of open source. I hope this now makes sense.

The collaboration that is open source means many things, including community, many eyes making bugs shallow, diversity, speed, best value solutions, the highest quality code and the ability to fail fast. We are in a world where this can both help us create better businesses and make the world a better place for everyone.

In 2019 I took on a role as the Chair of the Open Source and IP Advisory Committee for the United Nations’ Technology Innovation Labs. The UN subscribes to the Digital Principles.

Principle 6 states: “Use open standards, open data, open source and open innovation”, and the UN is working towards this, to allow Member States to leverage the benefits of open technologies and the ease of scaling these across them. The UN’s Sustainable Development Goals focus on global challenges related to poverty, inequality, prosperity and more. Digital technologies, which are increasingly driven by open source software and the collaborative innovation process it engenders, can help to provide awareness and education to the world’s population, which can help to address these issues. For example, mobile technologies can be used in remote areas to access government services, banking, training and other services that might not otherwise be available. These digital technologies are best delivered with software technologies that are trustable.

It’s not just organisations like the UN who are using open source for good. Increasingly I am working with organisations like ModusBox, which uses the Mojaloop interoperable open source payment switching platform to bank the unbanked in African countries, and Endless Mobile, which allows people to harness the power of computing everywhere through its Endless Operating System.

Who knows where we are going in terms of technology innovation?

My friend Armando Vieira is a thought leader in a different field, but one I have mentioned already: AI. He tells me: “We will finally realise that all types of intelligence can be reduced to an algorithm that can be efficiently trained with lots of data on large neural networks, achieving super-human capabilities.”

We will have to wait and see if he is right, but what we do know is that we are moving into a world of self-driving vehicles, digital medical devices creating bionic people, and a revolution in how we buy and receive goods. Decent connectivity, device capacity and uptake of the cloud have revolutionised our worlds over a couple of decades. Wherever our future lies, the one thing I am sure of is that it lies in collaboration.

Amanda Brock is CEO of OpenUK and the Trustable Software Engineering Project, European Representative of the Open Invention Network and Chair of the United Nations Technology Innovation Labs’ IP and Open Source Advisory Board, and provides consultancy services on open source and collaboration. She has also been shortlisted for the 2020 London Women in IT Awards.

If you have any predictions or resolutions for 2020 or reflections on the past year, however short or in other formats (videos welcome!) send them to david.chaplin@scl.org and we will publish edited highlights in the December issue of Computers & Law and on scl.org throughout December.