There’s been a Cambrian explosion in the number of companies working on orchestration of large language model (LLM) tools, on agent-based systems, and on exposing enterprise data to LLMs, whether via retrieval-augmented generation (RAG), graph-based approaches (GraphRAG), or Knowledge Graphs.
In this article I explore LLM agent-based systems, particularly as they will be applied to enterprise systems of record.
Naturally, enterprise vendors would like to “own” the user experience and data federation layer. Salesforce Agentforce is one such enterprise solution, but I expect similar frameworks from all the enterprise system-of-record vendors (SAP Joule, Microsoft Copilot, Google Vertex AI, ServiceNow, Workday Illuminate, Atlassian Rovo, etc.), each attempting to place its system-of-record data at the centre of an ecosystem of partner integrations, as Salesforce does with Agentforce Exchange.
In addition, the LLM vendors (OpenAI, Anthropic, IBM, etc.) are also starting to offer APIs and tools to assist developers in creating agent-based solutions.
In the Open Source space there is a plethora of frameworks to help developers create agent-based solutions, not least LangChain and LangGraph.

Challenges remain, however, particularly when it comes to allowing an agent to act on behalf of a human and having it inherit all of the access-control permissions of that human.
There is also the issue of discovering data and capabilities. For example, how does an agent acting on my behalf know that it should look up data in both SharePoint and Salesforce to help answer a given question? Does the agent delegate to dedicated agents provided by those vendors, or does it use the existing system-of-record APIs? Does it create an index to speed access to that data in the future? How does it ensure that the index is up to date and that access controls are respected? How does API usage by agents change the pricing of calls?
For discovery, there is a proposal for an llms.txt file, added at the root of a website, that guides LLMs in their use of the site.
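As a sketch, an llms.txt file is plain Markdown served at the site root (e.g. /llms.txt): a title, a one-line summary, and curated lists of links the model should read. The site, sections and URLs below are illustrative assumptions, following the general shape of the proposal:

```markdown
# Acme CRM

> Acme CRM is a customer relationship management platform with a REST API
> for accounts, contacts and opportunities.

## Docs

- [API reference](https://example.com/docs/api.md): REST endpoints and auth
- [Quickstart](https://example.com/docs/quickstart.md): first queries in 5 minutes

## Optional

- [Changelog](https://example.com/changelog.md): recent API changes
```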
For client (desktop?) applications, Anthropic have announced the Model Context Protocol (MCP), which exposes LLM tools over JSON-RPC. For example, here is the MCP server for Google Drive. It handles authenticating to the service using OAuth, returns the definition of a search tool for Google Drive documents, and then calls the standard Google Drive APIs to search and retrieve documents in response to calls to its search tool by the Claude Desktop app (the MCP client).
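To make the client-server interaction concrete, here is a sketch of the JSON-RPC 2.0 messages exchanged. The method names (`tools/list`, `tools/call`) follow the MCP specification, but the tool name, schema and arguments (`gdrive_search`, `query`) are illustrative assumptions rather than the actual Google Drive server’s interface:

```python
import json

# 1. The client asks the server which tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with tool definitions, each carrying a JSON
#    Schema that tells the LLM what arguments the tool expects.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "gdrive_search",  # hypothetical tool name
                "description": "Search for files in Google Drive",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# 3. When the LLM decides to use the tool, the client sends a call;
#    the server runs the real Google Drive API and returns the results.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "gdrive_search",
        "arguments": {"query": "Q3 board deck"},
    },
}

print(json.dumps(call_request, indent=2))
```

The key design point is that the LLM never sees OAuth tokens or vendor SDKs; it only sees tool names and schemas, and the server mediates everything else.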
The (Open Source) MCP client-server model is attractive in that it abstracts the system-of-record vendors away from specific agent frameworks or protocols. It could become the “OpenAPI” of LLM/agent interaction. It also opens up the possibility of the web browser becoming the place where MCP servers are registered, requests are launched, status is monitored and credentials are stored. For example, the W3C could ensure that any compatible web browser could be used as the agent “user agent” management interface.
Ultimately, we will need a higher-level protocol that allows agents to hand off work to each other and to communicate asynchronous status and progress, allowing a true coordinated swarm of agents to act in parallel while sharing a common blackboard system for intermediate results.
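A blackboard in this sense is just shared, concurrency-safe state that any agent can post to and read from. The class and method names below are assumptions for illustration, not an existing API; a real swarm would need persistence, access control and change notifications on top:

```python
import threading
from collections import defaultdict

class Blackboard:
    """Hypothetical shared store for intermediate results from parallel agents."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._entries: dict[str, list] = defaultdict(list)

    def post(self, topic: str, agent: str, result: dict) -> None:
        """An agent publishes an intermediate result under a topic."""
        with self._lock:
            self._entries[topic].append({"agent": agent, "result": result})

    def read(self, topic: str) -> list:
        """Any agent can read everything posted so far on a topic."""
        with self._lock:
            return list(self._entries[topic])

# Two agents working in parallel coordinate through the board rather
# than calling each other directly.
board = Blackboard()
board.post("customer-churn", "crm_agent", {"accounts_at_risk": 3})
board.post("customer-churn", "docs_agent", {"relevant_docs": ["renewal.pdf"]})
print(len(board.read("customer-churn")))  # 2
```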
Exciting times ahead for all vendors, large and small!