AI Agents Architecture, Actors and Microservices: Let’s Try LangGraph Command
December 23, 2024


In enterprise software development, distributed systems have been central for the past 15 years. We’ve adopted SOA, microservices, actor models such as Akka (Akka.NET), Microsoft Orleans, and Erlang processes, and countless message brokers, frameworks, and architectures.

But two years ago, we started over.

For AI/LLM-based systems there are no established frameworks, no observability tools, and no automated testing solutions. We are starting from scratch.



A quick recap of terms

An AI agent is a software entity, driven by artificial intelligence, designed to perform tasks autonomously or semi-autonomously in a specific environment to achieve a specific goal. It processes input, makes decisions, and takes actions based on predefined rules, learned patterns, or dynamic interactions.

An actor is a finer-grained, lightweight, isolated entity that encapsulates state and behavior, communicates through message passing, and processes one message at a time. Thousands or millions of actors can exist in a single system, often within the same process.

A microservice is independently deployable, highly maintainable, and typically communicates over the network using protocols such as HTTP/REST or gRPC. It is coarser-grained and heavier-weight than an actor; each microservice is typically a standalone application or process.

Actors can be used in microservices to manage internal state and concurrency, combining the best of both paradigms. For example, a microservice can implement an actor model for event handling while exposing an API to other microservices.
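
As a rough sketch of that combination (illustrative only; the class, message types, and endpoint below are invented for this example), a TypeScript microservice could keep a mailbox-style actor internally and expose it to other services over HTTP:

import { createServer } from "node:http";

// A minimal mailbox-style actor: encapsulated state, message passing,
// and one message processed at a time.
class CounterActor {
    private count = 0;
    private queue: Promise<unknown> = Promise.resolve();

    // Messages are chained onto the queue, so they are handled sequentially.
    send(message: { type: "increment" } | { type: "decrement" }): Promise<number> {
        const result = this.queue.then(() => {
            this.count += message.type === "increment" ? 1 : -1;
            return this.count;
        });
        this.queue = result;
        return result;
    }
}

// The surrounding microservice exposes the actor to other services via an HTTP API.
const counter = new CounterActor();

createServer(async (req, res) => {
    if (req.method === "POST" && req.url === "/increment") {
        const value = await counter.send({ type: "increment" });
        res.end(JSON.stringify({ value }));
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(3000);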



The Changing Role of Artificial Intelligence Agents

What we call an AI agent depends on context, marketing, and sometimes misunderstanding.

  • At the product level (e.g. your company’s chatbot), an agent is an actor.
  • At the company level (e.g. Google’s Project Mariner), an agent is a service.

Over time, the community may establish more precise terminology, e.g. micro-agent, AI actor, or AI service, to distinguish these concepts.

Aspect | Actor | Service/Microservice
Granularity | Fine-grained | Coarse-grained
State | Internal, encapsulated | External, usually stateless
Communication | Message passing | Network APIs (HTTP/gRPC)
Concurrency | Built-in, per actor | Depends on service design
Scaling | In-process or distributed, per actor | At the level of each service
Fault tolerance | Supervision hierarchies | External patterns/mechanisms
Use cases | Real-time, event-driven | Enterprise-scale, modular

If you compare tools such as a CrewAI Agent, an AutoGen Agent, or a LangChain Agent against this table, you will find that they function more like actors.

As for an AI service or AI microservice, I haven’t fully defined that concept for myself yet. It may be something we don’t need, or it may still be waiting to be built. I had hoped Anthropic’s MCP would fill this role, but it is not yet fully realized. (I wrote more about this here.)



LangGraph: Towards multi-actor applications

Recently, LangGraph introduced Command and redefined itself from a “multi-agent” framework to a “multi-actor” framework. Its focus is now on building stateful, multi-actor applications with LLMs.

I believe this is exactly right. Let’s take a closer look and work through an example.

The source code is available on GitHub.

import { Annotation, Command, StateGraph, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";
import dotenv from 'dotenv';

dotenv.config();

const StateAnnotation = Annotation.Root({
    customerInquiry: Annotation({
        value: (_prev, newValue) => newValue,
        default: () => "",
    }),
    route: Annotation({
        value: (_prev, newValue) => newValue,
        default: () => "",
    })
});

const model = new ChatOpenAI({
    modelName: "gpt-4o-mini"
});

const routeUserRequest = async (state: typeof StateAnnotation.State) => {
    const response = await model.withStructuredOutput<{ route: "quotation" | "refund" }>({
        // JSON schema describing the expected structured output
        type: "object",
        properties: {
            route: { type: "string", enum: ["quotation", "refund"] }
        },
        required: ["route"]
    }).invoke([
        new SystemMessage('Please categorize the user request'),        
        new HumanMessage(state.customerInquiry)
    ]);

    const routeToFunctionName = {
        "quotation": "quotationAgent",
        "refund": "refundAgent"
    };

    return new Command({
        update: {
            route: response.route
        },
        goto: routeToFunctionName[response.route],
    });
};

const quotationAgent = (state: typeof StateAnnotation.State) => {
    // Placeholder node: quotation handling would go here.
    return {};
};

const refundAgent = (state: typeof StateAnnotation.State) => {
    // Placeholder node: refund handling would go here.
    return {};
};

const graph = new StateGraph(StateAnnotation)
    .addNode("routeUserRequest", routeUserRequest, { ends: ["quotationAgent", "refundAgent"] })
    .addNode("quotationAgent", quotationAgent)
    .addNode("refundAgent", refundAgent)
    .addEdge(START, "routeUserRequest")
    .compile();


async function main() {
  try {
    await graph.invoke({ customerInquiry: 'Hi, I need a refund' });
    console.log("Done");
  } catch (error) {
    console.error("Error in main function:", error);
  }
}

main();

This approach eliminates explicit edge declarations, leaving only the nodes (actors).
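
For comparison, here is a rough sketch of how the same routing could be expressed in the older style with explicit conditional edges (the classifier node is stubbed out, and the node names are reused from the example above):

// Pre-Command style: the node only updates state, and the routing decision
// is declared in the graph wiring via addConditionalEdges.
const classifyRequest = async (state: typeof StateAnnotation.State) => {
    // ...call the model as before and return only a state update, e.g.:
    return { route: "refund" };
};

const graphWithEdges = new StateGraph(StateAnnotation)
    .addNode("classifyRequest", classifyRequest)
    .addNode("quotationAgent", quotationAgent)
    .addNode("refundAgent", refundAgent)
    .addEdge(START, "classifyRequest")
    .addConditionalEdges("classifyRequest", (state: typeof StateAnnotation.State) => state.route, {
        quotation: "quotationAgent",
        refund: "refundAgent",
    })
    .compile();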

In the future, LangGraph may move beyond its graph-based structure: by adding a message broker, actor addresses, and auto-discovery, it could evolve into something like Microsoft Orleans.



The future of AI service communications

Tools such as LangChain and LangGraph continue to evolve. Today they focus on in-service design and lack inter-service communication capabilities, but they are starting to add features for broader integration. For example, LangChain recently added OpenTelemetry support, which is crucial for distributed systems.
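
LangChain’s own OpenTelemetry integration is not shown here; as a rough illustration of the idea, the sketch below wraps the graph invocation from the example above in a span using the standard @opentelemetry/api package (the tracer name and attribute are invented for this example, and an OpenTelemetry SDK such as @opentelemetry/sdk-node is assumed to be configured elsewhere in the process):

import { trace, SpanStatusCode } from "@opentelemetry/api";

// Assumes an OpenTelemetry SDK is initialized elsewhere in the process.
const tracer = trace.getTracer("ai-agents-demo");

async function invokeWithTracing(inquiry: string) {
    return tracer.startActiveSpan("graph.invoke", async (span) => {
        try {
            // Record a simple attribute about the request; avoid logging raw user content.
            span.setAttribute("customer.inquiry.length", inquiry.length);
            return await graph.invoke({ customerInquiry: inquiry });
        } catch (error) {
            span.setStatus({ code: SpanStatusCode.ERROR });
            throw error;
        } finally {
            span.end();
        }
    });
}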

The next important step for the community will be to achieve seamless AI-to-AI service communication. Whether through Anthropic’s MCP, LangChain, or other innovations, this will define the future of artificial intelligence in distributed systems.

