In this article I compare symbolic AI and Large Language Model (LLM) based processing purely from a cost perspective. A basic analysis shows that if you are processing less than 1 transaction per minute, you may well be better off (financially at least) using an LLM. In addition, I expect the cost economics to shift radically over the coming years, as specialised LLM hardware is developed and LLM market competition increases.

The exploratory Google Sheet I developed allows you to specify the length of the policy document you need to scan to process an incoming transaction, as well as the number of symbolic rules in the document, how frequently rules need to be updated, and the engineering cost of extracting those rules and ontologies via human knowledge-engineering.

Note that I’ve intentionally ignored all the other factors that may influence a decision such as this: straight-through processing vs human-in-the-loop, explainability, regulatory oversight, accuracy, etc.

In addition, there are some basic fixed costs to run the supporting engineering infrastructure that processes transactions: logging, monitoring, persistence, etc.

On the LLM side of the calculation, you can specify how much you expect to spend on prompt engineering and on supporting infrastructure such as vector databases.
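To make the shape of the comparison concrete, here is a minimal sketch of the kind of model the sheet encodes. Every number below is an invented placeholder for illustration, not a figure from the actual spreadsheet:

```python
def symbolic_monthly_cost(rules=200,
                          cost_per_rule=50.0,      # human knowledge-engineering cost per rule
                          updates_per_month=0.25,  # fraction of rules reworked each month
                          fixed_infra=500.0):      # logging, monitoring, persistence
    """Knowledge engineering dominates; per-transaction cost is roughly zero."""
    return rules * cost_per_rule * updates_per_month + fixed_infra

def llm_monthly_cost(tx_per_min,
                     policy_tokens=8_000,          # policy document scanned per transaction
                     price_per_1k_tokens=0.002,    # ChatGPT-class pricing, illustrative only
                     prompt_engineering=200.0,     # amortised per month
                     fixed_infra=300.0):           # vector database etc.
    """Cost scales linearly with transaction volume."""
    tx_per_month = tx_per_min * 60 * 24 * 30
    per_tx = policy_tokens / 1_000 * price_per_1k_tokens
    return tx_per_month * per_tx + prompt_engineering + fixed_infra
```

With these placeholder numbers the LLM wins at 1 transaction per minute (roughly $1,200 vs $3,000 per month) but loses badly at 60 transactions per minute (roughly $42,000): the fixed knowledge-engineering spend is flat, while the token spend grows with volume.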

The results are enlightening. Assuming we process 1 transaction per minute:

  1. An LLM (ChatGPT) is slightly cheaper than symbolic reasoning
  2. GPT-4 costs approximately 3x ChatGPT per transaction, making it 68% more expensive than symbolic processing

However, if you are processing 60 transactions per minute, an LLM (ChatGPT) is 93% more expensive than symbolic processing.
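The crossover point is easy to back out from a model like this: break-even is where the LLM's per-transaction token spend catches up with the symbolic system's amortised maintenance budget. Again, all figures below are invented placeholders, not the spreadsheet's actual values:

```python
# Hypothetical monthly figures for illustration only.
symbolic_monthly = 3000.0   # amortised rule maintenance + fixed infrastructure
llm_fixed_monthly = 500.0   # prompt engineering + vector database etc.
cost_per_tx = 0.016         # policy-doc tokens x price per token

# Volume at which the two monthly bills are equal.
tx_per_month = (symbolic_monthly - llm_fixed_monthly) / cost_per_tx
break_even_per_min = tx_per_month / (60 * 24 * 30)
print(f"break-even ~ {break_even_per_min:.1f} tx/min")
```

Under these assumptions the crossover sits at a few transactions per minute; below it the LLM is cheaper, above it the symbolic system pulls ahead.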

The long-term economics strongly favour LLMs, however: Moore’s Law (or whatever the LLM equivalent turns out to be!) and market forces are quickly driving down inference costs, while the labour-intensive work of knowledge engineering is likely to remain expensive for some time. Of course, we will apply LLMs to knowledge engineering as well, attempting to automate it, so the comparison may soon be more apples-to-pears than apples-to-oranges.

In addition, I think the major factor in LLMs’ favour is that the policy documents used to process transactions can truly be updated with “no code”, zero downtime, and no retraining.

It’s going to be fascinating to see how this plays out in the enterprise, but my bet is that the low-volume, human-in-the-loop use cases will quickly move to LLMs.