Thoughts from AI Con 2019

Tom Swann
8 min read · Dec 12, 2019

AI Con held its inaugural event at the Europa Hotel in Belfast at the end of November. It was the first dedicated Artificial Intelligence conference to be held here, and hopefully the first of many more to come.

I wanted to share my thoughts both on the conference itself and more broadly on why I thought AI was an exciting enough space that I should go and work in it at Kainos.

And yes, I’m not exactly striking whilst the iron is hot here, but in the intervening two weeks I have had some time to think about the conference and formulate some opinions. So y’know… thinking about things is always nice!

AI Models in Production

The first theme of the conference which caught my interest was the idea of how software which incorporates elements of AI and/or Machine Learning differs from “traditional” software, i.e. software based on explicit business rules coded into clearly defined algorithms.

My colleague at Kainos, Oliver Wilson, had many excellent thoughts to share from his own experience of taking AI models to production. I particularly liked the section which focused on the difficulty of integrating data science Notebooks like Jupyter into an engineering process which involves version control, artefact management and related development lifecycle disciplines that software engineers take for granted.

A Jupyter Notebook is a visual environment for writing data science code. It allows you to mix code with tables, plots, images and so on. Functionally it behaves much like a Python interpreter does in the shell — it is good for iterative and exploratory development.

An example of a typical Python notebook
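To give a flavour, here is a minimal sketch of the kind of exploratory cell you might write in one. The file and column names are made up for illustration:

```python
# A typical exploratory notebook cell: load some data, summarise it, plot it.
# In a notebook, the summary table and the chart render inline below the cell.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")  # hypothetical input file

print(df.describe())  # summary statistics for each numeric column

df.groupby("region")["revenue"].sum().plot(kind="bar")
plt.title("Revenue by region")
plt.show()
```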

From my own experience, Notebooks bring to mind Extract, Transform, Load (ETL) tools when it comes to managing them. These are tools based around a visual drag-and-drop design surface, which you use to create pipelines that move data from A to B with some sort of transformation in the middle. Pentaho and Talend are fairly popular examples.

Example of an ETL pipeline in Talend designer

Both ETL designers and Notebooks sit in a nebulous grey area between version control and document revision control, because whilst both are ostensibly “source” in the traditional sense, they are also tied to specific versions of data. Both are also used as collaborative workspaces across a team.

People collaborate on the same version together, or on multiple versions of the same file at once. And usually there is some mechanism for managing their state in a workflow — they move through a sequence of states like draft, completed and published. Again, similar to traditional Electronic Document and Records Management (EDRM) in many ways.
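As a concrete illustration, here is a minimal sketch of that kind of workflow state machine. The states and allowed transitions are illustrative, not any particular product’s model:

```python
# Minimal sketch of a document workflow state machine, of the kind an EDRM
# system applies to notebooks or ETL artefacts. States/transitions illustrative.
from enum import Enum

class DocState(Enum):
    DRAFT = "draft"
    COMPLETED = "completed"
    PUBLISHED = "published"

# Which states each state may legally move to.
ALLOWED = {
    DocState.DRAFT: {DocState.COMPLETED},
    DocState.COMPLETED: {DocState.DRAFT, DocState.PUBLISHED},  # can bounce back
    DocState.PUBLISHED: set(),  # terminal
}

def transition(current: DocState, target: DocState) -> DocState:
    """Move to the target state, or raise if the workflow forbids it."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target

# e.g. transition(DocState.DRAFT, DocState.COMPLETED) succeeds, while
# transition(DocState.PUBLISHED, DocState.DRAFT) raises ValueError.
```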

The fact that they are coupled to data, and that the data also needs to be “versioned” or tracked, adds to the complexity. Often this is tackled at the level of the platform architecture within which they run, through processes to track the lineage and metadata of files.
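As a low-tech illustration of what that tracking can involve, here is a minimal sketch using only the Python standard library. The file names are made up, and real platforms do far more than this:

```python
# Minimal sketch of tracking data lineage: record a content hash and some
# metadata for each input file, so an analysis run can be tied back to the
# exact version of the data it consumed.
import hashlib
import json
import os
from datetime import datetime, timezone

def fingerprint(path: str) -> dict:
    """Return a lineage record for a data file: content hash, size, timestamp."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return {
        "path": path,
        "sha256": sha.hexdigest(),
        "bytes": os.path.getsize(path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Write the record alongside the analysis so the data version is reproducible.
record = fingerprint("sales.csv")  # hypothetical input file
with open("sales.csv.lineage.json", "w") as f:
    json.dump(record, f, indent=2)
```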

It’s not that different from code, but it’s different *enough* to cause headaches when you try to treat them as identical use cases.

And we as engineers should always respect user need in this regard. We have to recognise that notebooks serve a different purpose, and need to be handled somewhat differently, if we are also to satisfy our own requirement of delivering robust solutions to production.

And sometimes the answer — and I agree with Oli here — is to use Notebooks for the exploratory phase, where they can play to their strengths, and then take a more rigorous approach to packaging the final code for deployment.
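For illustration, a minimal sketch of what “promoting” notebook code might look like. The module, column names and thresholds are made up; `jupyter nbconvert --to script notebook.ipynb` is one real way to get a first cut of a plain script out of a notebook:

```python
# churn_features.py - notebook logic promoted to a plain module that can live
# in version control, be unit tested, and be packaged like any other code.
import pandas as pd

def add_tenure_bucket(df: pd.DataFrame) -> pd.DataFrame:
    """Bucket customer tenure (in months) into coarse bands."""
    out = df.copy()
    out["tenure_bucket"] = pd.cut(
        out["tenure_months"],
        bins=[0, 12, 36, float("inf")],
        labels=["new", "established", "loyal"],
    )
    return out

# test_churn_features.py - the payoff: the extracted logic is now trivially
# testable, which a cell buried in a notebook never was.
def test_add_tenure_bucket():
    df = pd.DataFrame({"tenure_months": [3, 24, 60]})
    result = add_tenure_bucket(df)
    assert list(result["tenure_bucket"]) == ["new", "established", "loyal"]
```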

I’ve also encountered this in the world of Apache Spark, where the Spark shell offers an interactive workflow for developing pipelines, followed by a separate phase of packaging that code into a jar file to deploy it to the Spark cluster.
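In PySpark the same split looks something like the sketch below: the logic you iterated on interactively gets a proper entry point, and then ships to the cluster with `spark-submit job.py`. The job itself and the file names are made up:

```python
# job.py - a minimal PySpark job, sketched for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main() -> None:
    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

    # Hypothetical input: one row per event, with an event_type column.
    df = spark.read.csv("events.csv", header=True, inferSchema=True)

    # The transformation you prototyped in the shell, now in a named job.
    summary = df.groupBy("event_type").agg(F.count("*").alias("n"))

    summary.write.mode("overwrite").parquet("event_counts")
    spark.stop()

if __name__ == "__main__":
    main()
```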

AI and Why Now

The Trough of Disillusionment

I suspect this is where things will get a bit ramble-y. BUCKLE IN. Time to get deep.

So as mentioned right at the top, I’ve decided to get involved in AI and more specifically, the point at which it intersects with software engineering.

My background is in digital services, backend systems development and processing data in the more traditional sense. What data scientists themselves might somewhat uncharitably describe as “plumbing”.

I am coming at this very much as a software engineer and not a data scientist, in other words.

There are a few things that I find come up repeatedly in the conversation around AI, and sure enough they were present at AI Con: recurring motifs relating to how new any of this *actually* is, the technology hype cycle, and the general cynicism which is always present when people do in fact consider something to be new.

The infamous Gartner hype cycle

I’m not out to pick on academics in particular, but it must be said that one of the initial keynote speeches delivered at AI Con by the universities did home in repeatedly on the idea that, within research, much of this is old hat.

#triggered

The Evolution of Ideas

The history of ideas and how people adopt them is something I’ve been very interested in for a long time. There are a couple of touchstones that shaped how I think about this.

The first and probably most important was Thomas Kuhn’s “The Structure of Scientific Revolutions”[1]. It was the first time that I was really made to consider “progress” as meaning something other than an incremental progression of ever better ideas; of concepts that built directly on what had gone before them.

Our popular use of the word “paradigm” originates with this book. It presents progress as a series of periodic upheavals in our accepted understanding, out of which new lines of enquiry are developed and used to drive us forwards.

New models of thinking do not necessarily offer immediate improvement over the established order of ideas. In fact it is often many years before any significant practical payoff comes about. What is truly important and ultimately most valuable is that they force us to ask new questions, or open up new avenues of exploration that the old model was simply incapable of.

It’s easy to understand why the immediate practical benefit of a new tool or technique can eat up a lot of the narrative. You might well argue that for businesses this is probably rightly so. Tangible benefits are pretty compelling. Maybe, but when you look at things through the lens of innovation and competitive differentiation over the longer term, I think the opposite is probably true.

People whose job it is to worry about strategy will be more preoccupied with where the next big shift in ideas is coming from.

My second reference on thinking about ideas steps right outside the world of STEM and over to this quote from the critic Matthew Arnold[2].

For the creation of a masterwork of literature two powers must concur, the power of the person and the power of the moment, and the person is not enough without the moment.

Arnold asserted that ideas in and of themselves could only be considered truly impactful — “great”, to use his word for the force of cultural relevance that they carry — when they came at the right moment in time.

Maybe he was calling out bandwagon bias all the way back in the mid-19th century!

Arnold’s quote is usually the first thing that pops into my head whenever I hear someone jump up on a stage and start proclaiming that there is nothing new under the sun. Yeah. So. And. What?

Our Latest Computer

I have one final example to share and I think it exemplifies the last thought above about the relevance of the external conditions under which new ideas can live or die.

In 1982, Gerald Weinberg asserted that for any given “new hotness” that comes along in software engineering, you can just replace it in a sentence with “our latest computer”.

If you are having problems in data processing, you can solve them by installing our latest computer. Our latest computer is more cost effective and easier to use. Your people will love our latest computer, although you won’t need so many people once our latest computer has been installed. With our latest computer, you’ll start to realize savings in a few weeks, at most.

The anecdote comes from Chapter 2 of Tom & Mary Poppendieck’s book on Lean software development [3].

This section of the book contains some fascinating examples of the evolution of ideas in software engineering and identifies some of those which can be labelled as fads and others which are core concepts that have stood the test of time.

The concept of integrating early and often and the value in discovering defects early are traced back all the way to structured programming, to Dijkstra and others who framed establishing correctness as an ongoing activity of development — not something that gets left to the last moment.

What was once called top-down programming, then step-wise integration, is now called continuous integration. The label changes, but the criterion for success remains the same — if you’re doing big-bang integration then you’re doing it wrong.

In the time since “continuous integration” became an idea that carried widespread cultural weight, the capability has become available to automate builds and testing at the unit, component and system levels, all the time.

Having the compute resources to perform this early and frequent integration in a cost-effective manner was not always a given over the lifespan of the underlying principle. It took a change in external context to take ideas that were always good and make them a ubiquitous part of our shared knowledge.

Times of change

All of which is a rather long-winded way of saying that novelty in and of itself only takes us so far when it comes to meaningful shifts in thinking, be that across an organisation, an industry or our wider culture itself.

Broad statements about things not being *new* because “I know someone who used that all the time!” are reductionist and boring.

I’m personally enthused about the widespread adoption of old and new techniques in Machine Learning and AI out there in the wilds of the wider software development community, and about how we continue to develop them into products and services and make them reliable.

Those ideas are going to compete for dominance in a marketplace of problem solvers, and I think that’s a pretty exciting thing to get involved in.

References

Further reading material referenced in this blog.

[1] The Structure of Scientific Revolutions — Thomas S. Kuhn

[2] Culture and Anarchy — Matthew Arnold

[3] Leading Lean Software Development: Results Are Not the Point — Tom & Mary Poppendieck

