(Part 1 appeared yesterday)
As a result, we have a few consequences that many may not be aware of:
1) The LLM does not actually call any “structural functions”, i.e., LLMs cannot integrate with each other — this is the prerogative of the AI’s modules only. Instead, the LLM can generate a structured specification — often in JSON — of the need that an AI module is supposed to resolve by engaging additional resources. Even if such a module (like an orchestrator) could feed the acquired resources back to the LLM, the initial LLM session has finished by that time. Furthermore, in a real-world scenario, the data to be processed by the LLM may have changed, so the new LLM session may not need the previously requested information but may request a new one instead. This open-ended process is at “the mercy” of unknown changes in the data and of the poorly comprehended internal logic of the LLM. Thus, giving such “tools” to people as “life assistance” is not only reckless but also evil.
2) In order to generate a meaningful structured specification, the LLM must be aware of the functions available to the AI. This design not only eliminates any adaptive behaviour toward potential needs, but it also couples the LLM with a concrete implementation of the AI. That is, if the same AI with the same LLM is used in a different data context, new needs of the LLM may turn out to be unsupported by the AI’s modules. Altogether, this makes the AI not vertically scalable.
3) The inability of a modern LLM to self-interrupt and resume its execution session underlines the crucial importance of understanding the runtime inter-influence between the prompt, the goal, the model and the data. Recently reported findings of “patterns of activity within an AI model’s neural network that control the character traits” of the would-be outcome still require more detailed verification, considering the modern trend of presenting the desirable as real.
4) An emerging difficulty with Complex Reasoning and Problem Solving: LLMs can struggle with multi-step reasoning tasks, like solving complex math problems or puzzles. The LLMs might get “confused”, make logical errors, or fail to grasp the overall strategy needed. For example, an LLM might generate code that looks correct but doesn’t actually achieve the desired functionality because it doesn’t fully “understand” the intended logic.
I requested two leading AIs to enumerate a few real-world cases where an LLM was not able to resolve the task and asked for additional information or actions. I was surprised (not confused) by the responses, which deserve to be analysed:
While the goal and the LLM are probably in sync, and enumerating medical data in plain text is possible, for the last 40 years this has been considered bad practice because people are widely unaware of the symptoms and their attributes needed for a consistent representation of such data. That is, the format of the prompt is ambiguous.
An Interaction with Resources in the Resource Registry
Concepts
Thanks to a new IT tradition of using rubbish terms “because we say so”, we should establish clear terms first.
First of all, I believe that an LLM, in its realisation, is either a module or a component that can receive data and return data. The LLM module may not do anything else due to a fundamental principle (that modern developers are probably unaware of) known as “Separation of Concerns”. So, if someone says that she or he works on automation for connecting Apps with LLMs, this person either uses jargon or has not a bit of understanding of what an LLM is. ChatGPT has given me an example of such ‘automation for connecting’, referring to the case “OpenAI GPT calls triggered by database changes”, which in reality means that the OpenAI GPT Application receives a prompt that it passes internally to the LLM module.
In other words, the LLM module communicates with other components or modules of the AI App or AI Agent, which, in turn, integrate with any other resources. This communication-integration can be understood as four scenarios:
1) An integration of the LLM module with another module within the AI App: the LLM itself is unaware of any resources;
2) An LLM is made artificially aware of a supplemental “structural function”, adapter, or another module within the AI App;
3) An integration of the LLM module with a resource outside of the AI App: the LLM itself is unaware of any resources;
4) An LLM is made artificially aware of a resource outside of the AI app — another app, AI Agent, data source, function or adapter.
The best — the most flexible, adaptable and robust — design scenario is 1). Scenario 2) violates the “separation of concerns” and couples the LLM with the implementation of the AI App. In scenario 3), the design breaches the integrity and consistency of the AI App. The worst case — the bad practice — is scenario 4) because it couples the LLM with the execution environment, which can change independently from the LLM and break it at any moment.
In parallel with the described scenarios, real-world AIs and AI Agents have a non-deterministic scenario that is mostly missed. Here it is:
5) An LLM, unaware of any surroundings, can dynamically decide what it needs for solving the overall task given to the AI. The word “what” is a placeholder for additional functionality, data, or a change in the world state.
Before talking about the dynamically decided needs, we have to clarify the difference between the AI Agent’s goal and the AI Agent’s prompt. In this article, a prompt is the input text (in a natural or structured language) or an image given to an LLM via the AI Agent modules to generate a response and then the outcome. The input text may include, e.g., instructions, scenarios, constraints/policies, or questions. So far, so good. The AI goal is the persistent or long-term objective the AI or AI Agent is trying to achieve. The goal exists independently of any prompts and may guide behaviour across multiple internal execution steps of the AI. The AI, or AI Agent, interprets the prompt and, if designed properly, should use its goal to guide the response.
So, the data flow for scenario 5) is:
A prompt → AI Agent with a goal → LLM starts executing → LLM recognises a need → LLM stops and exposes the need to the AI Agent’s dedicated internal module → the AI Agent decides on one of three options:
→ whether to obtain the requested resource and feed it back to the LLM for a new processing session
→ or complete its work and request a new prompt containing the requested resource from the user
→ or complete its work in full, admitting its inability to resolve the task.
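The decision step of this data flow can be sketched in Python. This is a minimal illustration only; the `Need` type, the flags and the option names are assumptions, not part of any standard agent framework:

```python
from dataclasses import dataclass

@dataclass
class Need:
    kind: str          # "data" | "function" | "command"
    description: str

def handle_llm_need(need: Need, can_obtain: bool, user_reachable: bool) -> str:
    """The AI Agent's dedicated internal module deciding among the three options."""
    if can_obtain:
        # option 1: obtain the resource and feed it to a new LLM processing session
        return "resolve-and-restart"
    if user_reachable:
        # option 2: complete the work and request a new prompt carrying the resource
        return "ask-user-for-new-prompt"
    # option 3: complete the work in full, admitting inability to resolve the task
    return "fail-gracefully"
```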
This scenario requires special attention if the design defines an automated chain of AI invocations. The designers and developers should somehow guarantee that none of the engaged AI Agents would identify an additional need, or that an identified need can be automatically satisfied at runtime. Since AI Agents are, in general, created and offered by different independent providers, the chance of such a guarantee is quite low.
The LLM in need may request any or all of the following:
- a) additional supplemental data;
- b) additional functionality to be executed and results returned;
- c) an execution of a command by another (outside of the AI App) entity.
Certainly, each of the needs should be fulfilled at a certain time of the LLM execution with one potential exception described in c). That is, the needs a) and b) usually require a real-time resolution (synchronous), while c) may be asynchronous depending on the task. Also, as the AI Agents are assumed to work autonomously (without people), any actual request for help should be considered as a breach of autonomy, i.e., as not suitable for an automatic chain of executions.
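As an illustration, a structured specification of such a need might look as follows in JSON; all field names here are assumptions made for the sketch, not a standard:

```python
import json

# A hypothetical need specification emitted by the LLM to the AI Agent's module.
need_spec = {
    "need": "function",            # a) "data" | b) "function" | c) "command"
    "description": "validate the generated dosage table against the formulary",
    "resolution": "synchronous",   # needs a) and b) are usually synchronous;
                                   # c) commands may be "asynchronous"
    "inputs": {"table": "list[row]"},
    "outputs": {"violations": "list[string]"},
}

print(json.dumps(need_spec, indent=2))
```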
If ‘additional supplemental data’ is self-explanatory, a need for additional functionality is a more complex matter. Because we cannot predict the consumer’s request (a prompt), the needed functionality may span a very rich spectrum of options. The trivial CRUD or HTML ‘actions’ are not enough any more. The functional need is rather similar to the one offered in the old WSDL model. It allowed specifying any actions or functionality if accompanied by an appropriate machine-understandable description, e.g., a schema. If this schema were available to both the requester and the provider, they could understand each other and cooperate (interact) effectively. For example, the logic of resolving the overall task may require triggering certain processes or procedures provided outside of the AI or AI Agents. The results of such a process may not necessarily be returned, but may just change the real-world state.
Talking about a need to push a command outside of the AI App or AI Agent: it is similar to the case of additional functionality. The difference is that the concrete functions are not specified. In this case, the dedicated internal components of the AI or AI Agent should search for a resource capable of executing the command. The command simultaneously constitutes the message to those who would recognise it and execute it appropriately.
Some functionality needs and command emissions lead to an asynchronous integration between AI Agents.
About the Resource Registry
In the article “A New Concept for Authentication and Authorisation for AI Agents”, I mentioned a Resource Registry as an intermediary means that handles security controls such as Authentication and Authorisation for the resources identified by the AI Agents at runtime as needed, but not provided initially by design. However, the Resource Registry can play a much more valuable role than just that of a security enabler.

In essence, the Resource Registry is a general-purpose runtime integration platform for AI Agents. It realises the well-known marketplace pattern (familiar to many via Amazon). The assumption here is that several AI Agents might need additional resources, unforeseen at design time, for solving their overall tasks (prompts), and that many providers would offer their AI Agents, data sources or applications as the resources that the AI Agents might need. This assumption extends a simple P2P integration in two aspects:
1) The integration can be performed with unforeseen resources if needed.
2) The integration may be conditional, policy-constrained and even monetised.
This pattern involves dynamic resource discovery, plug-and-play supplemental tools, and actors. The major elements of this pattern are the following:
- a) producers
- b) consumers
- c) the marketplace
- d) offers
- e) programmatic contracts including interfaces/protocols.
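A minimal sketch of these marketplace elements, assuming invented `Offer` and `Marketplace` types (a real Registry would add policies, monetisation and richer matching):

```python
from dataclasses import dataclass, field

@dataclass
class Offer:                   # d) an offer published by a producer
    resource_id: str
    capability: str            # marketing/capability description
    contract: dict             # e) programmatic contract: interface, protocol

@dataclass
class Marketplace:             # c) the marketplace itself
    offers: list = field(default_factory=list)

    def register(self, offer: Offer) -> None:    # a) producers publish offers
        self.offers.append(offer)

    def find(self, want: str) -> list:           # b) consumers: "give me what I want"
        return [o for o in self.offers if want.lower() in o.capability.lower()]
```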
The modus operandi of this pattern may be expressed with “give me what I want”; it is a consumer-centric paradigm.
The Resource Registry welcomes providers to register their resources — AI Agents and data sources — with marketing descriptions, and consumers to contract their AI Agents. The Resource Registry promises to automatically look up a resource upon the consumer’s request and, if it is found, to assist in the technical integration between the AI Agent and the resource. The better the description of the resource capabilities, the higher the chance that the resource would be selected for the integration. Apparently, both consumer and provider may have their terms and limitations (via policies), i.e., a discovered resource may not necessarily be integrated. That is, the AI Agent in need should be able to assess the found offer and only then engage the Resource Registry to move to the second part of integration, which involves the programmatic part.
Each Resource Registry may have its own language, e.g., a domain-orientated one, and charge for its use (while registration with the Resource Registry may be free of charge). That is, this is a usage-based charging scheme. Also, the Resource Registries may be linked into a federated chain with appropriate mechanisms for transferring requests from one to another.
Scalability: Back to the Future
In one of the comments about Authentication and Authorisation for AI Agents, one person suggested: “By shifting authentication and authorisation control into a central Resource Registry, you enable dynamic discovery without delegating credentials or deploying brittle runtime bonds across agents. That said, security at scale requires runtime policy enforcement, invocation governance, and telemetry-based drift detection, not just registry access control.” With my respect and high appreciation of these thoughts, let me say that I both agree and disagree with the comment. Certainly, in my article I was able to skip such laborious and cumbersome “delegating credentials or deploying brittle runtime bonds across agents”. I can do this because I have found (or recalled) a more reliable and easier-to-manage solution that people have used before and up until now, but it is not “hot stuff”.
This “stuff” is LDAP, realised, for example, via JNDI or Microsoft Active Directory. The scalability of LDAP has been proven for years, especially recently, when LDAP implementations have been made replicable across networks for both performance and redundancy. As we know, the majority of “new” things comprise the well-forgotten ones.
Thus, if a Resource Registry contains an LDAP capability, it is scalable on its own, even for the needs of a BigTech organisation. Also, the federation of Resource Registries makes AI integration a trusted environment “at scale”.
The federation of Resource Registries may be organised in the same way as we categorise AI Agents, i.e., by industrial domains. The Resource Registries may be owned and maintained independently from each other. Therefore, all security controls must be in place when a request transfers from one registry to another and returns. The only exception comprises domains in the humanitarian spheres, like culture in general, habits, customs, newspapers, websites — web platforms — forums and chats, videos, images and all variations of text types. Unfortunately, this family of domains is the most lucrative for their creators because governments, following in the fairway of the UN’s human-unethical AI principles, pay the most for AI Agents in these domains.
A Search Lingua Franca in the Resource Registry
As we know, an integration between AI Agents is based on the “language of communication” and technical interfaces that include protocols (in the context of this article).
It looks like technology evolution has currently returned to the point where it started 25 years ago, though on the next turn of the spiral. At that time, people in management and business wanted to understand what data their IT department operated with, and XML came up. It was a pity, but XML did not perform well enough on the hardware of that time, though it constituted a universal and standardised language and was very useful in integration and integration development. XML was quickly replaced by the weaker JSON, which was OK in performance. It is still in use, but now technology focuses on the languages that people can understand — on people’s language(s).
The Resource Registry supports the AI Agents in using the language that the prompts are written in, i.e., a human language. It may be specific to the domain of the subject. The only exception is the humanitarian set of domains. The prompts in such domains are primarily based on the enormous number of language variations around the world, and any standardisation here would be equal to a cultural dictatorship. In these domains, the convenience of technology/developers does not really matter — only the convenience of the human consumers matters. So, the proprietary languages of distinct Resource Registries may differ not only in alphabet but also in semantics and structure.
The overarching requirement for the Resource Registry’s language is to be consistent, unambiguous, and simple enough to ease translation. The cases of translation comprise translations:
- from the specification of the need created by the AI Agent to the Resource Registry’s language;
- from the specification of constraints/terms imposed by the resource on its consumers to the Resource Registry’s language;
- from the Resource Registry’s language to the internal vernacular of the AI Agent in need for its assessment of the provider’s terms;
- from the specification of the resource’s interface to the internal vernacular of the AI Agent responsible for the setting of communication with the resource.
There is no need to translate the requested need into the resource description because the description should be articulated in the Resource Registry’s language during the resource registration.
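Assuming the Registry’s language reduces to a controlled vocabulary, the translation cases above can be sketched as simple term-mapping functions. This is a deliberate simplification; real Registries would use richer, domain-orientated grammars:

```python
def to_registry_language(spec: dict, registry_vocab: dict) -> dict:
    """Need specification in the agent's terms -> the Resource Registry's language.
    Unknown terms pass through unchanged (ambiguity handling is out of scope)."""
    return {registry_vocab.get(term, term): value for term, value in spec.items()}

def to_agent_vernacular(terms: dict, agent_vocab: dict) -> dict:
    """Reverse direction: the Registry's language -> the agent's internal vernacular."""
    return {agent_vocab.get(term, term): value for term, value in terms.items()}
```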
At the same time, the Resource Registry’s linguistic section is not free from a few problems typical of all linguistic translations. They are:
- a) the translations must be free from any form of censoring and “political correctness”, which would incur risks of misinterpretation of the needs and capability descriptions;
- b) if more than one consistent translation is possible, all of them should be presented to the AI Agent, to the Resource Registry and to the resource as the only decision-makers on the interaction handshake; this includes near-synonyms;
- c) Different domains may have different semantics for the same words.
The listed problems may be mitigated by special algorithms that are discussed in the following sections.
The Search Protocol in the Resource Registry
The fundamental assumption that AI Agents work autonomously (without human intervention) and independently leads to only one model of interaction with the surrounding world — cooperative integration. Jargon expressions mentioning a collaboration between AI Agents implicitly assume that the interacting AI Agents may and will be aware of each other, i.e., coupled, breaking the independence. Usually, this takes place when a few AI Agents are created by the same team for a set of tasks the team is responsible for. Such a design destroys the reuse of an individual AI Agent because it glues on a “tail of dependencies” that the reuse case does not want.
The autonomy and independence disallow a direct search for a counterpart because the former may not know about the latter by design. In practice, this strict rule is frequently violated and named a soft dependency. Regardless of soft or hard style, a dependency means coupling. At the same time, an independent AI Agent may have means of interaction like interfaces, i.e., two AI Agents can interact, but only after their “acquaintance” is established by a 3rd party (in a good design). In essence, AI Agents interact in a model very similar to the interactions of SOA Services or Microservices, which are also independent in the SOA ecosystem, can be deployed independently and do not know about each other in the right design. Such isolation is unpopular among developers who are used to integrating components via APIs known at design time. If a developer violates the described independence, a crippled service is the outcome.
An integration of AI Agents, therefore, follows two general patterns:
1) “I, as an AI Agent, need to ‘communicate’ with XYZ via the ABC protocol to continue my work”;
2) “I, as an AI Agent, need information KLM, or the execution of functionality FNC, or the issuing of a command CMD before I can continue my work.”
Pattern 1) is a traditional programming method that explicitly calls the resource XYZ via the ABC protocol, i.e., it is a coupling pattern. If you want to reuse the AI Agent in a new composition, it will drag XYZ and ABC in there, and you will face the challenge of providing compatibility of this “tail” with the composition’s environment, including all security controls.
Pattern 2) specifies the need with no indication of what, where and when may satisfy it. This is an “open-ended” integration that is suitable for integration with non-anticipated resources and may be realised in a Resource Registry.
The Resource Registry provides resolution of the AI Agent’s needs via the resources that the Registry knows about or, alternatively, via a federation of Resource Registries split per, e.g., domain-specific realms. In the Resource Registry, a search is based on surfing the metadata in the Registry’s LDAP or similar directory to “match” the AI Agent’s need. The status of a “match” can vary from deterministic to statistical and depends on the provider of the Resource Registry. The matching may be based on the Vector Model, enabling querying by semantic similarity.
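A statistical “match” in the Vector Model style can be sketched with plain cosine similarity; the embeddings of needs and capability descriptions would come from an LLM or another encoder, and the threshold is an assumed tunable:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match(need_vec, offers, threshold=0.8):
    """offers: resource_id -> embedding of its capability description.
    Returns (resource_id, score) pairs at or above the threshold, best first."""
    scored = ((rid, cosine(need_vec, vec)) for rid, vec in offers.items())
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```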
The Resource Registry is filled by voluntary registration of resources conducted by providers independently from the AI Agents at any time. The providers compete for the AI Agents and try to describe their offers in the best way available to increase the chance of matching. Similarly to SOA, the proprietary resource descriptions comprise two parts: a) capability or functionality and the effect on the real world upon execution, and b) programmatic interfaces — request and return formats in a programming language, e.g., REST, GraphQL, gRPC, or so forth.
Each Resource Registry’s meta-language used for describing and querying resources looks like extra complexity at first glance. However, it is a special and very powerful feature that makes the Resource Registry independent from any AI Agents and applicability domains, and allows refining its capabilities with almost no impact on the participating AI Agents and resources. This feature delivers flexibility to the integration realm in a world that speedily changes on us.
A search for resources for an AI Agent’s need is represented by the following steps:
- The AI Agent’s LLM returns its need to the dedicated AI Agent’s module in any predefined specification format (that may not contain any actual names of providers, other AI Agents, URLs or any identifiers tying the AI Agent to a particular environment);
- It is recommended that each “AI Agent in need” be supplied or accompanied by its own Agent Adaptor for a particular Resource Registry. The Agent Adaptor provides a) a bi-directional translation of the AI Agent’s need-specification into the Resource Registry’s language, and b) information for physical connectivity with the Resource Registry. Such Agent Adaptor guarantees the flexibility of the AI Agent to work with any resource registry, as needed. In the Resource Registry federation, the Agent Adaptor works with only the immediate Resource Registry, which, in turn, transforms the request to another Resource Registry via special infrastructural bridges in both directions that are unrelated to the AI Agents. The Adapter pattern is actively used in such integration platforms as Boomi and MuleSoft.
- When the Resource Registry receives the request from the Agent Adaptor of the AI Agent in need, it forms the “query of needs” in its meta-language and executes it by navigating through the Resource Registry’s directory with all possible performance optimisation, if needed;
- Since each AI Agent may have its conditions/terms, including the cost of service, support duration, any political or environmental constraint policies and the like, the Resource Registry should return this information first to the AI Agent in need if the resources, i.e., the match, are found. The AI Agent should assess the functionality, in- and out information, and conditions and decide whether to continue the search or whether the found resource is suitable for integration. Some AI Agents may be pre-configured with certain restricting policies that prevail over a simple finding of a resource. An enforcement of such policies is the prerogative of the AI Agent itself. At this point, a few different scenarios are possible;
- Then, the AI Agent in need informs the Resource Registry (via the Agent Adaptor) about the decision, and, if accepted, the Resource Registry’s “query of needs” should obtain and return part 2) of the resource description — the programmatic contract;
- If the resource is found, additional interaction protocols may be used depending on the situation and the overall task. For instance, it may be Google’s A2A protocol for a synchronous return of the final outcome to the requester, or an event-based “fire-n-forget”/pub-sub protocol if the final result is not meant for a return but for a certain change in the world state;
- If the resource is not found in the Resource Registry, the AI Agent can either “re-plan” its execution and potentially work around the absent resource or report a failure and stop its work. The “re-plan” may include another LLM or another version of the same LLM, but it is not recommended since it may be costly, with a performance hit and with no positive result;
- If the resource is found and accepted, the integration/interaction between the AI Agent and the resource can take place.
- The following steps, like refining, reasoning, and execution in loops, are the specifics of each AI Agent design.
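The happy path of these steps can be condensed into a sketch; all class and method names here are illustrative assumptions, not a defined API:

```python
class AgentAdaptor:
    """Translates the agent's need into the Registry's language and
    applies the agent's own restricting policies."""
    def __init__(self, vocab, max_cost):
        self.vocab, self.max_cost = vocab, max_cost

    def translate(self, need: str) -> str:
        return self.vocab.get(need, need)

    def agent_accepts(self, terms: dict) -> bool:
        # policy enforcement is the prerogative of the AI Agent itself
        return terms.get("cost", 0) <= self.max_cost

class ResourceRegistry:
    def __init__(self, directory):
        self.directory = directory     # query -> list of {"cost", "contract"}

    def lookup(self, query):           # returns part a): terms/conditions first
        return self.directory.get(query, [])

def search(need, adaptor, registry):
    """One pass of the steps above; returns a programmatic contract or None."""
    query = adaptor.translate(need)
    for terms in registry.lookup(query):
        if adaptor.agent_accepts(terms):   # the agent assesses the conditions
            return terms["contract"]       # part b): the programmatic contract
    return None                            # re-plan or report a failure
```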
Also, it should be noticed that:
1) The longer the AI Agent’s execution cycle lasts, the higher the probability that the data to be processed has changed, so all efforts for resolving the need may be annulled;
2) Any provider who registered a resource is free to revoke it from the Registry. For the latter case, the Resource Registry defines a reasonable “graceful period of revocation” to protect the consumers that are working with the resource at the moment of its removal. This also means that a later reuse of the same AI Agent may encounter a problem if it tries to reconnect to a previously found resource without a preliminary search.
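The “graceful period of revocation” can be sketched as follows; the period value and the API are assumptions for illustration:

```python
GRACE_SECONDS = 3600.0   # an assumed configurable "graceful period of revocation"

class RevocableRegistry:
    def __init__(self):
        self._registered = set()
        self._revoked_at = {}          # resource_id -> revocation time

    def register(self, rid: str) -> None:
        self._registered.add(rid)
        self._revoked_at.pop(rid, None)

    def revoke(self, rid: str, now: float) -> None:
        self._revoked_at[rid] = now

    def usable(self, rid: str, now: float) -> bool:
        """A revoked resource stays usable for in-flight consumers until
        the grace period elapses; new searches should not return it."""
        if rid not in self._registered:
            return False
        revoked = self._revoked_at.get(rid)
        return revoked is None or now - revoked < GRACE_SECONDS
```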
The Integration Post-conditions in the Resource Registry
An integration case may contain several integration events, each of which has a risk of failure. A precondition of each next event is the post-condition of the previous event. In a distributed environment where independent entities interact, this pre-/post-condition aspect plays a highly important role.
The Resource Registry requires that each of its engaged resources return an acknowledgement upon completion of its work to the Resource Registry. An analogous acknowledgement between the interacting AI Agent and the resource is not controlled by the Resource Registry.
The acknowledgements are persisted in the Resource Registry for a configurable period of time for the purposes of audit or possible conflict resolution.
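A minimal sketch of such an acknowledgement store with a configurable retention period (in-memory only; all names are assumed):

```python
class AckStore:
    """Persists acknowledgements for a configurable retention period,
    for the purposes of audit and possible conflict resolution."""
    def __init__(self, retention_seconds: float):
        self.retention = retention_seconds
        self._acks = []                # (timestamp, resource_id, status)

    def record(self, timestamp: float, resource_id: str, status: str) -> None:
        self._acks.append((timestamp, resource_id, status))

    def audit(self, now: float):
        """Drop expired acknowledgements and return the remaining trail."""
        self._acks = [a for a in self._acks if now - a[0] < self.retention]
        return list(self._acks)
```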
The AI Agent Integration Interfaces
For the sake of simplicity, this section considers that a notion of “interface” includes the notion of “channel” through which the interface may be reached.
Integration interfaces are much more complex entities than so-called endpoints. Interfaces carry an undertone of integration intention. This is one of the most important aspects of integration, reflecting its purpose. While an integration intention is generally overlooked by data-focused developers, it is very important from the architectural perspective.
The integration interfaces address exchanges of data, function requests and commands. Here are the integration interfaces sorted into four categories:
- “I have something you might be interested in”;
- “Give me what I want”;
- “Give me what you have”;
- “I want you to have it.”
The table below depicts the three most popular integration interfaces used in the AI Agent integration.
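The four intent categories can be captured as an enumeration; the comments mapping them to typical message-exchange styles are assumptions, not part of the categorisation above:

```python
from enum import Enum

class IntegrationIntent(Enum):
    OFFER   = "I have something you might be interested in"  # push/notification
    PULL    = "Give me what I want"                          # targeted request-response
    QUERY   = "Give me what you have"                        # bulk retrieval/query
    DELIVER = "I want you to have it"                        # command/fire-and-forget hand-off
```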
Conclusion
The topic of dynamic integration for AI Agents explores the complexities and methodologies involved in enabling seamless collaboration among AI systems, particularly in humanitarian contexts. Unlike traditional IT integration, which involves known components and deterministic processes, AI Agents may dynamically identify their integration needs at runtime, often without prior knowledge of potential partners or required resources. This unique challenge is exacerbated in areas where information is subjective and unverifiable, such as culture and social media, highlighting the need for robust solutions that can adapt to unpredictable environments.
The proposed solution emphasises the concept of a Resource Registry that facilitates runtime discovery and integration of AI Agents. In this model, resource providers register their capabilities, while AI Agents express their needs, describe them in the specified Registry language, and hire the Registry to search among the available resources for a match with those needs. This approach not only enhances scalability and flexibility but also encourages a federated system where Registries can operate independently across different domains. Additionally, it is crucial for AI Agents to assess the compatibility of their own constraints with the conditions imposed by the resources they seek, ensuring they do not engage in operations beyond their designed purposes.
Altogether, the integration of AI Agents within a dynamic environment necessitates a shift from traditional methods towards innovative frameworks that support autonomous, cooperative interactions. Emphasising the importance of clear communication protocols and trusted registries can help mitigate the risks associated with uncertainty and variability in data. As AI technologies continue to evolve, developing effective integration strategies will be key to harnessing their full potential across diverse applications, particularly in humanitarian spheres where accurate and reliable information is critical.
