Over the past few weeks I’ve been researching and building a framework that combines the power of Large Language Models for text parsing and transformation with the precision of structured data queries over Knowledge Graphs for explainable data retrieval.
In this fourth article of the series (one, two, three) I will show a generic web interface that helps explain how the LLM uses tools and graph queries to answer a wide variety of structured and unstructured questions.
I want to explore how the LLM uses automatically generated knowledge graph tools to answer a variety of questions. In the video below I pose questions, review and explain how the LLM chose its tools and which tools it invoked, and informally gauge the quality of the answers.
Conclusions
- The general quality of answers was high, with most grounded in the data held in the knowledge graph
- GPT-4o does a good job of choosing an appropriate tool and sequencing tool usage into a query plan
- When data was missing from the knowledge graph (or invalid queries were generated) the LLM degraded gracefully, indicating that it falls back on its world knowledge
- It remains to be seen how well this will generalize to richer domain models (ontologies)
- Automatic creation and running of tools over the knowledge graph (model driven) means experiments can be performed very quickly
- More work is needed to summarise/explain how an answer was generated
- Adding streaming responses would make the user experience more engaging
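To make the "model driven" point above concrete, here is a minimal sketch of what automatic tool generation over a knowledge graph could look like: for each entity type in a small schema, derive a tool specification (in the JSON-schema style used by function-calling LLMs such as GPT-4o) plus a callable that runs the lookup. The in-memory graph, tool names, and helper functions are all illustrative assumptions, not the framework's actual API.

```python
# Toy knowledge graph standing in for a real graph database:
# entity type -> {node id: properties}. Purely illustrative.
GRAPH = {
    "Person": {"p1": {"name": "Ada", "born": 1815}},
    "Company": {"c1": {"name": "Acme", "founded": 1999}},
}

def make_lookup_tool(entity_type):
    """Derive one LLM tool (spec + callable) from an entity type in the model."""
    def run(entity_id):
        # In a real system this would issue a graph query (e.g. Cypher/SPARQL)
        return GRAPH.get(entity_type, {}).get(entity_id, {})

    spec = {
        "name": f"lookup_{entity_type.lower()}",
        "description": f"Fetch a {entity_type} node by id from the knowledge graph",
        "parameters": {
            "type": "object",
            "properties": {"entity_id": {"type": "string"}},
            "required": ["entity_id"],
        },
    }
    return spec, run

# "Model driven": one tool is generated per entity type in the schema,
# so extending the graph model automatically extends the toolset.
TOOLS = {spec["name"]: (spec, fn)
         for spec, fn in (make_lookup_tool(t) for t in GRAPH)}

spec, fn = TOOLS["lookup_person"]
print(fn("p1"))  # -> {'name': 'Ada', 'born': 1815}
```

The appeal of this approach is that the tool catalogue passed to the LLM is a pure function of the graph schema, which is what makes rapid experimentation possible.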