EVENT REPORT: Back to Basics – The Technology: 9 Key Takeaways

June 20, 2019

The SCL’s second “Back to Basics: the technology” conference promised to be just as popular as the first, helping lawyers understand the technology which underpins almost everything they do. I missed it first time round, so I was delighted to be asked by SCL to cover the event and give a run-down of what I gleaned from this fascinating day.

So here are the nine essential things I learnt.

1. There’s more to chips than meets the eye

Perhaps the most powerful image of the conference was the one Neil Brown put on show at the start, to back up his explanation that the blunt aim of chip design is to cram as many transistors as possible into a given space. Zooming in on an image of a transistor at the nanometre scale revealed just how precise modern manufacturing has become, and explained the rapid increases in processing power over recent decades.

2. The future is virtualisation

Virtualisation – that is, emulating hardware in software – was explained in Inception-like terms as a ‘computer in a computer’. It allows virtual computers to run inside physical ones, so that capabilities can be added to data centres and computers without buying ‘additional/new tin’ – provided, of course, that ‘spare tin’ is available in the first place.
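To make the ‘computer-in-a-computer’ idea concrete, here is a minimal sketch of my own (not from the talk): a toy ‘virtual machine’ written in Python – a program that pretends to be a processor, with its own registers and an invented three-instruction set, running entirely inside another program.

```python
# A toy virtual machine: software pretending to be hardware.
# The 'processor' and its instruction set below are invented for illustration.
def run(program):
    registers = {"A": 0, "B": 0}          # the virtual CPU's registers
    for op, *args in program:             # fetch and execute each instruction
        if op == "LOAD":                  # LOAD <register> <value>
            registers[args[0]] = args[1]
        elif op == "ADD":                 # ADD <destination> <source>
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":               # PRINT <register>
            print(registers[args[0]])

# A 'guest' program running inside the host program:
run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])  # prints 5
```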

3. Machine learning is backward

In answer to an audience question, the panel elaborated on how machine learning actually works. Given a set of inputs, hidden layers and outputs, the machine is fed lots of questions together with lots of right answers; it then performs numerous operations, by trial and error, to fill in the gap between the two. The machine ‘learns’ because it reasons backwards from its wrong answers to find the route to a right answer. This is only possible at scale because of the increases in processing power: far more data can now be fed to computers so that they can ‘learn’ how right answers are reached from a given set of inputs.
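Here is a minimal sketch of that trial-and-error loop (my own illustration, not the panel’s): a model with a single adjustable weight guesses answers, measures how wrong each guess is, and nudges the weight backwards from the error until it has learnt the rule y = 2x.

```python
# Learn the rule y = 2x from examples, by correcting a weight after each wrong answer.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (question, right answer) pairs

weight = 0.0                 # the model's single adjustable parameter
for _ in range(100):         # many passes over the training data
    for x, target in examples:
        guess = weight * x
        error = guess - target            # how wrong was the answer?
        weight -= 0.01 * error * x        # reason backwards: nudge the weight to shrink the error

print(round(weight, 2))      # ~2.0 - the machine has 'learnt' the rule
```

Real neural networks do exactly this with millions of weights arranged in hidden layers, which is why the processing-power story in point 1 matters so much.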

4. Humans aren’t binary

Processors read binary instructions, that is 1s and 0s. The problem is that binary is hard for humans to read and write, so software is written as source code, which is (relatively) comprehensible to humans, and then compiled into the machine code (or object code) that the processor actually runs.
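As a quick illustration of my own, everything a computer handles ultimately bottoms out in those 1s and 0s – even the letters on this page:

```python
# Every character is stored as a number, and every number as a pattern of bits.
for char in "law":
    print(char, ord(char), format(ord(char), "08b"))

# Output:
# l 108 01101100
# a 97 01100001
# w 119 01110111
```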

5. We are layers of abstraction away from machine code 

Chris James explained that a software program entails not just its source code but also libraries and frameworks, resources (including databases, icons and sound files) and tests (which reduce bugs). Furthermore, software such as an operating system exposes an ‘API’, or application programming interface, through which other programs – and ultimately the user interfaces we see day-to-day, such as the Windows desktop – access its functions. In this sense, the layer upon layer of abstraction (away from machine code) on which a user interface relies was revealed.
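A minimal sketch of those ingredients in one place (the file name and functions are my own invention): source code, a library, a ‘resource’, and a test.

```python
# greetings.py - source code: the part humans write
import json                        # a library: ready-made code we did not write ourselves

def greeting(name):
    return f"Hello, {name}!"

def load_names(path):
    # a resource: data the program needs that is not itself code
    with open(path) as f:
        return json.load(f)

# a test: a check that reduces bugs by catching them early
assert greeting("SCL") == "Hello, SCL!"
```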

Chris then demonstrated the range of programming languages humans use to write software, from Python to JavaScript. High-level languages such as these are abstracted away from the underlying machine code of 1s and 0s, with a view to making it easier for humans to accomplish particular programming tasks. Lower-level languages such as C, although ‘fast’, are not as well suited as object-oriented languages to building complex software, or as interpreted languages to creating web applications. Different languages serve different purposes, ultimately.
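You can even watch one of those layers being peeled back. This sketch (mine, not Chris’s) uses Python’s built-in dis module to show the lower-level instructions – bytecode – that a human-friendly line of source is compiled into:

```python
import dis

def add_vat(price):
    return price * 1.2

# The single human-readable line above becomes machine-oriented steps
# (the exact instruction names vary between Python versions):
dis.dis(add_vat)
#   LOAD_FAST    price
#   LOAD_CONST   1.2
#   BINARY_OP    *
#   RETURN_VALUE
```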

6. Not master and slave but server and client

In the talk about networks we found out that a network is made up of servers and clients. A server – which may be a physical box, a virtualised computer or simply software running on one – makes a service available to clients. The client, as the requestor, asks the server for data: your web browser, for instance, may ask ‘may I have this web page, please?’, upon which the web server responds.
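Here is a minimal sketch of that conversation (my own, with an invented port number), with the server and the client squeezed into one Python script:

```python
# A toy web server, and a client asking it for a page.
import threading, urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):                          # the server's answer to 'may I have this page?'
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Here is your page!")

server = HTTPServer(("localhost", 8080), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client makes the request; the server responds.
print(urllib.request.urlopen("http://localhost:8080/").read().decode())
server.shutdown()
```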

7. How do we navigate the internet? 

Well, each internet service provider (ISP) uses the Border Gateway Protocol (BGP) to work out the best route from point A to point B. For a user on a client machine to access bbc.co.uk, for example, DNS servers (ultimately the BBC’s own name servers) first translate the name bbc.co.uk into the IP address of the BBC’s web server, and the request is then routed to that address. A similar process occurs when sending emails.
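You can try that first step – the name-to-address lookup – yourself. A one-line sketch using Python’s standard library (the address returned will vary, since the BBC serves its site from many machines):

```python
import socket

# Ask the DNS system: what is the IP address behind the name bbc.co.uk?
print(socket.gethostbyname("bbc.co.uk"))   # e.g. 151.101.0.81 - varies by time and location
```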

8. Cloud computing crunches time

A stunning example of the efficiency that outsourcing to the cloud can offer came from Bankinter’s credit risk simulation service on AWS: simulations that used to take 23 hours to run in-house now take 20 minutes in the cloud – roughly a 69-fold speed-up (1,380 minutes down to 20).

9. It’s not just me

From my perspective, perhaps the most important takeaway was the reminder (as another attendee remarked) that Moore’s law still holds – that, together with the sheer amount of learning that went on, which most attendees seemed to relish, judging from the conversations I had on the day. If computing power really does keep doubling every couple of years, the legal profession should pay much more attention to the many intersections between legal practice and computing, whether that is how the internet of things and other technological advances might make legal practice more efficient, or how clients’ business models will increasingly rely on networks and software.

This is just a snapshot of some of the more arresting points I gleaned from the day and cannot really reflect the extraordinary range of hands-on knowledge presented, so for further feedback from the SCL Rising Stars who attended the event, click here.


Gerald Brent is a trainee solicitor at Fladgate

————————————————

EDITOR’S NOTE: The event was filmed and a box set of the day can be purchased online here.