
Innovation That Matters

Technology trends for the world of tomorrow

Tag: LLM

Miss Manners with ChatGPT

“Miss Manners” is organizing a dinner party and needs to devise a seating arrangement for her guests. She has a large circular table and will be inviting 16 guests: 8 males and 8 females. Miss Manners is an aging lady of a bygone era and isn’t aware that gender is not binary. She would like to ensure that guests are not seated next to someone of the same gender, and that guests seated next to each other share at least one hobby.

In this article I will examine how ChatGPT fares with this venerable optimisation (or production rules) benchmark and present conclusions.
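To make the constraints concrete, here is a minimal sketch of the seating problem as a backtracking check in TypeScript. The guest data and hobby names are invented for illustration, and this brute-force search is not the production-rule approach the benchmark is normally solved with.

```typescript
// Miss Manners constraints as a simple backtracking search.
// Guests and hobbies below are made up; this is illustrative only.

type Gender = "M" | "F";

interface Guest {
  name: string;
  gender: Gender;
  hobbies: Set<string>;
}

// Two neighbours are compatible if their genders differ and they share a hobby.
function compatible(a: Guest, b: Guest): boolean {
  if (a.gender === b.gender) return false;
  for (const h of a.hobbies) if (b.hobbies.has(h)) return true;
  return false;
}

// Try to extend a partial seating around the circular table.
function seat(guests: Guest[], placed: Guest[], used: Set<string>): Guest[] | null {
  if (placed.length === guests.length) {
    // Close the circle: the last guest must also be compatible with the first.
    return compatible(placed[placed.length - 1], placed[0]) ? placed : null;
  }
  for (const g of guests) {
    if (used.has(g.name)) continue;
    if (placed.length > 0 && !compatible(placed[placed.length - 1], g)) continue;
    used.add(g.name);
    const result = seat(guests, [...placed, g], used);
    if (result) return result;
    used.delete(g.name);
  }
  return null;
}

// Example usage with four invented guests:
const guests: Guest[] = [
  { name: "Ann", gender: "F", hobbies: new Set(["chess", "golf"]) },
  { name: "Bob", gender: "M", hobbies: new Set(["chess", "golf"]) },
  { name: "Cla", gender: "F", hobbies: new Set(["golf"]) },
  { name: "Dan", gender: "M", hobbies: new Set(["golf", "chess"]) },
];
console.log(seat(guests, [], new Set()));
```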

Continue reading “Miss Manners with ChatGPT”

Knowledge Graph in 100 Lines of Code

In this article I am going to show you how easy it is to create your own custom Knowledge Graph using TypeScript, open-source tools and the Neo4J database.

Knowledge graphs are getting lots of attention at the moment, as they are the natural Yin to the Yang of LLMs, providing structured data to chat interfaces, and powering Retrieval Augmented Generation.
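The shape of such a pipeline can be sketched in a few lines: extract triples from text, then write them into Neo4j with the official neo4j-driver package. This is not the article's 100 lines; the extraction step is stubbed, and the URI, credentials and labels are placeholders.

```typescript
// Minimal sketch: load extracted (subject, predicate, object) triples into Neo4j.
// Connection details and labels are placeholders, not the article's code.
import neo4j from "neo4j-driver";

interface Triple {
  subject: string;
  predicate: string;
  object: string;
}

// Stub: in a real pipeline this would call an LLM or NLP library
// to pull triples out of raw text.
function extractTriples(text: string): Triple[] {
  return [{ subject: "Neo4j", predicate: "IS_A", object: "GraphDatabase" }];
}

async function loadIntoGraph(text: string): Promise<void> {
  const driver = neo4j.driver(
    "bolt://localhost:7687",
    neo4j.auth.basic("neo4j", "password")
  );
  const session = driver.session();
  try {
    for (const { subject, predicate, object } of extractTriples(text)) {
      // MERGE keeps the load idempotent: re-running it does not
      // duplicate nodes or relationships.
      await session.run(
        `MERGE (s:Entity {name: $subject})
         MERGE (o:Entity {name: $object})
         MERGE (s)-[:REL {type: $predicate}]->(o)`,
        { subject, predicate, object }
      );
    }
  } finally {
    await session.close();
    await driver.close();
  }
}
```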

Continue reading “Knowledge Graph in 100 Lines of Code”

Text-Oriented Programming

In this article I introduce the nascent field of Text-Oriented Programming (TOP), commonly used when building applications that use Large Language Models (LLMs). TOP poses new challenges for application design, DevOps, robustness and security.

This article is informed by my hands-on experience building Finchbot, an application that converts natural language text to a symbolic domain model.
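The excerpt only names the idea, but the general shape of a text-oriented pipeline can be sketched: ask an LLM for structured text, then parse and validate it into a typed domain model. The prompt, the JSON shape and the types below are invented for illustration and are not Finchbot's actual design.

```typescript
// A sketch of text-oriented programming: natural language in, structured
// text out of an LLM, parsed into a typed domain model.
// The DomainModel shape is invented, not Finchbot's design.
interface Entity { name: string; attributes: string[]; }
interface Relation { from: string; to: string; kind: string; }
interface DomainModel { entities: Entity[]; relations: Relation[]; }

// Hypothetical LLM call, stubbed so the sketch runs without an external service.
async function askLlm(prompt: string): Promise<string> {
  return '{"entities":[{"name":"Order","attributes":["id"]}],"relations":[]}';
}

async function textToModel(description: string): Promise<DomainModel> {
  const raw = await askLlm(
    `Extract entities and relations from the following text as JSON ` +
      `with the shape {entities, relations}:\n${description}`
  );
  // The model's output is just text; robustness means treating the parse
  // (and any schema check) as a step that can fail and be retried.
  try {
    return JSON.parse(raw) as DomainModel;
  } catch {
    throw new Error("LLM output was not valid JSON: " + raw);
  }
}
```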

Continue reading “Text-Oriented Programming”

Symbolic AI vs LLM: Cost Comparison

In this article I compare symbolic AI and Large Language Model (LLM) based processing purely from a cost perspective. A basic analysis shows that if you are processing fewer than one transaction per minute, you may well be better off (financially at least) using an LLM. In addition I expect the cost economics to shift radically over the coming years, as specialised LLM hardware is developed and competition in the LLM market increases.
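The shape of the break-even calculation is easy to see with placeholder figures (these are not the article's numbers): model the symbolic engine as a fixed hosting cost and the LLM as a pure pay-per-call cost.

```typescript
// Break-even sketch with placeholder figures, not the article's numbers.
const symbolicHostingPerMonth = 500; // assumed: fixed cost of running a rules engine
const llmCostPerTransaction = 0.01;  // assumed: tokens per call * price per token

// Transactions per month at which the two approaches cost the same.
const breakEvenPerMonth = symbolicHostingPerMonth / llmCostPerTransaction;
const breakEvenPerMinute = breakEvenPerMonth / (30 * 24 * 60);

console.log(`Break-even: ~${breakEvenPerMinute.toFixed(2)} transactions per minute`);
// Below this rate the LLM's per-call pricing wins; above it the
// fixed-cost symbolic engine is cheaper.
```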

Continue reading “Symbolic AI vs LLM: Cost Comparison”

You Can See The Specialist Now

I’m becoming increasingly convinced that the conversational AI future is a mixture of general (foundational) large language models (LLMs) that provide a high-level diagnosis of a situation or question, and specialised LLMs that they delegate to for deeper reasoning. The general LLM processes generic language, orchestrates calls to specialised services and LLMs with deep domain knowledge, and then potentially summarises and synthesises the results back into a general form for the end user.
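A minimal sketch of that orchestration pattern is below. The `complete()` function is a hypothetical stand-in for whatever LLM client you use, and the model names and specialist registry are invented for illustration.

```typescript
// Generalist-routes-to-specialist pattern. Model names, registry and the
// complete() stub are invented; swap in a real LLM client in practice.
type ModelId = "generalist" | "medical-llm" | "legal-llm";

// Hypothetical LLM call, stubbed so the sketch runs without an external service.
async function complete(model: ModelId, prompt: string): Promise<string> {
  return `[${model}] response to: ${prompt.slice(0, 40)}...`;
}

const specialists: Record<string, ModelId> = {
  medical: "medical-llm",
  legal: "legal-llm",
};

async function answer(question: string): Promise<string> {
  // 1. The general model performs a high-level diagnosis: which domain is this?
  const domain = (
    await complete(
      "generalist",
      `Classify this question as one of ${Object.keys(specialists).join(", ")} or "general": ${question}`
    )
  ).trim().toLowerCase();

  // 2. Delegate the deep reasoning to a specialist model if one matches.
  const specialist = specialists[domain];
  const detail = specialist
    ? await complete(specialist, question)
    : await complete("generalist", question);

  // 3. The general model synthesises the result back into plain language.
  return complete("generalist", `Summarise this for a non-expert: ${detail}`);
}
```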

Continue reading “You Can See The Specialist Now”

Breaking the Language Barrier: Why Large Language Models Need Open Text Formats

Foundational LLMs are trained on huge corpora of text collected from the public Internet, including websites, books, Wikipedia, GitHub, academic papers, chat logs, the Enron emails (!) and so on. One of the better-known public collections of training data is The Pile, an 800 GB dataset of diverse text for language modelling.

In this article I will examine how the training sets behind LLMs should influence your choice of data formats, and present best practices for data formats that LLMs can generate.

Continue reading “Breaking the Language Barrier: Why Large Language Models Need Open Text Formats”
