The latest advances in artificial intelligence (particularly large language models) continue to reverberate. Even for an “old school” AI person like myself (who cut his teeth with Prolog) it is clear that there has been a step change in our ability to create computer systems that can interact with humans using natural language. GPT-4 et al are exhibiting early signs of “common sense” and have encoded useful conceptual representations of the world. The debate rages on as to whether this is “intelligence”, but to an engineer like me, it sure seems useful!
One of the most exciting prospects is the potential to unlock the creativity of all 8 billion people on the planet, by giving them the tools to provide instructions to computers, also known as “programming” — not by forcing them to learn formal programming languages, but by submitting instructions in natural language.
What seemed like a pipe-dream just five years ago now seems tantalisingly close.
LLMs have an encyclopaedic knowledge of programming languages and appear to be going beyond stochastic parrots: they have encoded some sort of abstract representation of programming instructions, capable of transforming (some) natural-language text into a programming language A, and then transforming that into programming language B.
That said, we do have to temper our excitement with a dose of engineering realism. I don’t believe we are close to the infamous singularity, and I think many of the fears around advances in AI are well-meaning but (currently) overblown. Yes, humanity may struggle to adapt to recent rapid advances (from a regulatory and an employment/education perspective), but I think once the current hype subsides a little, and the limitations of the technology become better understood, we will adapt.
To illustrate, here are the results of a recent experiment I performed using the gpt-3.5-turbo model. I’d expect the results of gpt-4 to be even better…
The input to the model was the text of the first page of a car insurance policy (publicly available on the Internet):

Using prompt engineering alone, the model produces a UML activity diagram that summarises the policy:
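The exact prompt from the experiment isn’t reproduced in this post, so the wording below is an illustrative assumption, but the general shape of such a prompt-engineering setup (here using the OpenAI chat-completions API, with PlantUML as the diagram notation) might look like this:

```python
# Sketch only: the prompt wording here is an assumption, not the one
# actually used in the experiment described above.

def build_messages(policy_text: str) -> list[dict]:
    """Build a chat prompt asking the model to summarise an insurance
    policy as a UML activity diagram in PlantUML syntax."""
    system = (
        "You are an analyst. Summarise the following car insurance policy "
        "as a UML activity diagram, expressed in PlantUML syntax. "
        "Output only the diagram source, between @startuml and @enduml."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": policy_text},
    ]

# The messages would then be sent to the model, e.g. with the OpenAI SDK:
#   from openai import OpenAI
#   client = OpenAI()  # requires OPENAI_API_KEY in the environment
#   reply = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_messages(policy_text),
#   )
```

Asking for a constrained, machine-readable output format (here, PlantUML between fixed delimiters) is what makes the model’s answer usable as a digital artefact rather than free-form prose.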

It’s a small subsequent step to transform this into a decision tree, decision table, set of logic rules, or procedural code.
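To make that subsequent step concrete: once the model’s output has been parsed into a simple tree of decision points, emitting procedural code from it is mechanical. The policy conditions below are invented for illustration, not taken from the actual policy:

```python
# Sketch: a decision point is a dict with a "test" plus "yes"/"no"
# branches; a leaf is just an outcome string. These example conditions
# are hypothetical.
policy_tree = {
    "test": "driver_age >= 25",
    "yes": {
        "test": "claim_free_years >= 3",
        "yes": "standard_premium",
        "no": "loaded_premium",
    },
    "no": "refer_to_underwriter",
}

def to_code(node, indent=0):
    """Render the decision tree as Python if/else source text."""
    pad = "    " * indent
    if isinstance(node, str):  # leaf: an outcome
        return f'{pad}return "{node}"\n'
    return (
        f'{pad}if {node["test"]}:\n'
        + to_code(node["yes"], indent + 1)
        + f"{pad}else:\n"
        + to_code(node["no"], indent + 1)
    )
```

The same tree walk, with different leaf and branch renderers, would emit a decision table or a set of logic rules instead of if/else code.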
Is this the breakthrough that will turn everyone in the world into a programmer? The honest answer is that we don’t know quite yet, but I’m sure there are lots of smart people working on this fundamental Human-Computer-Interface problem, and they’ve just been given a very early Christmas present!
AI projects often fail due to false positives and false negatives, so accuracy will be critical to our ability to convert natural language to code. 50% accuracy is probably useless for anyone who is not already a trained programmer.
100% accuracy is unattainable, given the ambiguity inherent in natural language; however, if we can hit 75% or more, then something magical may happen…
So, expect generation of all sorts of digital artefacts from natural language: emails, documents, presentations, images, videos, user interface designs (see Tweet above), unit tests, procedural and algorithmic code, even enterprise systems — first using a human-in-the-loop co-pilot paradigm, but as accuracy improves, humans increasingly trusting the machine to transform natural language into computer instructions with limited oversight.
And remember, Don’t Panic and Carry a Towel!