Run your first agent, attach a tool, and grow it into a workflow.
## Introduction
NxAgent is a Python framework for building, orchestrating, and deploying multi-agent workflows. It provides a clean abstraction over LLM APIs, tool execution, memory management, and agent communication.
Designed for production use, NxAgent keeps the public surface small: define agents, expose tools, compose workflows, run them, and inspect what happened.
| Use case | What NxAgent provides |
|---|---|
| Research assistants | Specialist agents for searching, checking, and writing. |
| Tool-using assistants | Typed Python functions exposed safely to agents. |
| Production workflows | Step limits, tracing, streaming, and explicit orchestration. |
## Installation
Install the package into a Python 3.10+ environment:
```shell
pip install nx-agent
```
For a clean project setup, create a virtual environment first:
```shell
python -m venv .venv
source .venv/bin/activate
pip install nx-agent
```
Optional extras can install provider integrations or memory backends as the framework grows:
```shell
pip install "nx-agent[memory]"  # adds vector store support
pip install "nx-agent[all]"     # installs all extras; quotes protect the brackets from shell globbing
```
Set provider keys through environment variables instead of hardcoding secrets:
```shell
export OPENAI_API_KEY="..."
export ANTHROPIC_API_KEY="..."
```
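Because agents tend to fail opaquely when a key is missing, it can help to resolve keys explicitly at startup. This is a generic standard-library sketch; `require_key` is a hypothetical helper, not part of NxAgent:

```python
import os

def require_key(name: str) -> str:
    """Return the named API key, or fail fast with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable before running agents.")
    return value
```

Calling `require_key("OPENAI_API_KEY")` at import time surfaces misconfiguration before any agent runs.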
## Your First Agent
Start with one focused agent. Give it a role and goal that are specific enough to guide model behavior.
```python
from nx_agent import Agent

agent = Agent(
    role="Research Analyst",
    goal="Answer clearly in one paragraph",
)

result = agent.run("What is NxAgent?")
print(result)
```
The result should contain the final answer. As you add tools or workflows, inspect the trace to understand how the agent reached that answer.
| Common mistake | Fix |
|---|---|
| Role is too vague | Use a concrete responsibility, like Research Analyst. |
| Goal is too broad | Describe the expected output shape and quality bar. |
| Too many tools too early | Start with no tools, then add one tool at a time. |
## Agents
An Agent is the core unit in NxAgent. Each agent has a role, a goal, and optional backstory, tools, model settings, and memory.
```python
from nx_agent import Agent

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and surface insights",
    backstory="Expert in statistical analysis and data visualization.",
    tools=[...],
    llm="gpt-4o",
    verbose=True,
)
```
| Parameter | Type | Description |
|---|---|---|
| `role` | `str` | The agent's job title; shapes LLM behavior. |
| `goal` | `str` | What the agent is trying to accomplish. |
| `backstory` | `str` | Optional context that refines the agent's persona. |
| `tools` | `list` | List of decorated tool functions. |
| `llm` | `str` | LLM model string; defaults can come from config. |
| `verbose` | `bool` | Log reasoning steps to stdout. |
Good agents are narrow enough to make consistent choices, but broad enough to complete useful work. Split unrelated responsibilities into separate agents.
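NxAgent's `Agent` class is not reproduced here, but the parameter table maps naturally onto a dataclass. The `AgentSpec` below is an illustrative stand-in showing the shape of those fields, not the real class:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentSpec:
    """Illustrative shape of the parameters in the table above."""
    role: str
    goal: str
    backstory: str = ""
    tools: list[Callable] = field(default_factory=list)
    llm: str = "gpt-4o"
    verbose: bool = False

# Only role and goal are required; everything else has a sensible default.
spec = AgentSpec(role="Data Analyst", goal="Analyze datasets and surface insights")
```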
## Tools
Decorate any Python function with @tool to make it available to agents. NxAgent auto-generates the tool schema from type hints and docstrings.
```python
from nx_agent import tool

@tool
def web_search(query: str) -> str:
    """Search the web and return a text summary."""
    results = ...  # your implementation here
    return results
```
Tool names should be action-oriented, arguments should be typed, and docstrings should explain when the tool should be called.
| Part | Guidance |
|---|---|
| Name | Use verbs like search_web, read_file, or create_ticket. |
| Arguments | Prefer simple typed arguments over nested blobs. |
| Return value | Return concise strings or structured data the agent can reason over. |
| Errors | Raise useful exceptions or return a clear failure message. |
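To see why type hints and docstrings matter, here is a rough sketch of how a framework could derive a tool schema from them, using only the standard library. This is illustrative and is not NxAgent's actual schema generator:

```python
import inspect
from typing import get_type_hints

def tool_schema(func) -> dict:
    """Derive a minimal tool schema from type hints and the docstring."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # the return type is not a call parameter
    params = {name: hint.__name__ for name, hint in hints.items()}
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": params,
    }

def web_search(query: str) -> str:
    """Search the web and return a text summary."""
    return ""

schema = tool_schema(web_search)
# schema["name"] == "web_search"; schema["parameters"] == {"query": "str"}
```

An untyped argument or missing docstring leaves the model guessing, which is why the guidance above insists on both.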
```python
@tool
def create_ticket(title: str, priority: str = "medium") -> str:
    """Create a support ticket when the user asks to track work."""
    # internal_api stands in for your own ticketing client.
    ticket_id = internal_api.create(title=title, priority=priority)
    return f"Created ticket {ticket_id}"
```
## Workflows
A Workflow orchestrates one or more agents. Use workflows when a task needs sequencing, review, routing, or collaboration.
```python
from nx_agent import Workflow

workflow = Workflow(
    agents=[researcher, writer],
    mode="sequential",  # or "parallel" | "dynamic"
)

result = workflow.run("Your task here")
```
| Mode | Use when |
|---|---|
| `sequential` | Each agent depends on the previous agent's output. |
| `parallel` | Agents can work independently before results are merged. |
| `dynamic` | The workflow should route tasks based on intent or confidence. |
```python
researcher = Agent(role="Researcher", goal="Collect reliable facts")
writer = Agent(role="Writer", goal="Turn facts into a concise brief")
reviewer = Agent(role="Reviewer", goal="Check accuracy and missing context")

workflow = Workflow(
    agents=[researcher, writer, reviewer],
    mode="sequential",
    tracing=True,
)
```
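The mode semantics can be sketched without any framework code. Treating each agent as a plain function, `sequential` pipes each output into the next step while `parallel` fans the same task out and collects the results. This illustrates the semantics only; it is not NxAgent's scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

def run_sequential(steps, task):
    """Each step receives the previous step's output."""
    result = task
    for step in steps:
        result = step(result)
    return result

def run_parallel(steps, task):
    """Steps work independently on the same task; results are merged afterwards."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda step: step(task), steps))

research = lambda text: text + " -> facts"
write = lambda text: text + " -> brief"

# run_sequential([research, write], "topic") -> "topic -> facts -> brief"
# run_parallel([research, write], "topic")   -> ["topic -> facts", "topic -> brief"]
```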
## Memory
NxAgent supports short-term session memory and long-term vector store memory. Memory is agent-scoped by default so unrelated agents do not leak context into each other.
```python
agent = Agent(
    role="Assistant",
    goal="Help users with research tasks",
    memory=True,
    long_term_memory="chroma",  # or "pinecone", "weaviate"
)
```
Use short-term memory for a conversation or single run. Use long-term memory when the agent needs durable knowledge across sessions.
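The short-term side can be pictured as an agent-scoped message buffer with a bounded window. `SessionMemory` below is a hypothetical illustration of that idea, not NxAgent's memory implementation:

```python
class SessionMemory:
    """Illustrative short-term memory: an agent-scoped, bounded message buffer."""

    def __init__(self, max_messages: int = 50):
        self.max_messages = max_messages
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Drop the oldest messages once the window is exceeded.
        self.messages = self.messages[-self.max_messages:]

    def context(self) -> list[dict]:
        """Return a copy of the current window to include in the next prompt."""
        return list(self.messages)

memory = SessionMemory(max_messages=2)
memory.add("user", "What is NxAgent?")
memory.add("assistant", "A multi-agent framework.")
memory.add("user", "How do I install it?")
# Only the two most recent messages remain in the window.
```

Agent-scoping means each agent holds its own buffer, which is why unrelated agents cannot leak context into each other.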
## Results
A run result should make the final output easy to use while preserving execution details for debugging and observability.
```python
result = workflow.run("Draft a launch note.")

print(result.output)    # final answer
print(result.steps)     # per-step execution details
print(result.trace_id)  # identifier for the run trace
```
Streaming is useful when you want to update a UI as the workflow progresses:
```python
for event in workflow.stream("Research this topic"):
    print(event.type, event.data)
```
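The result and event shapes can be sketched with a plain dataclass and a generator. `RunResult` and `stream_events` are illustrative stand-ins for whatever NxAgent actually returns, showing why a result separates the final output from execution details:

```python
from dataclasses import dataclass, field

@dataclass
class RunResult:
    """Illustrative run result: final output plus details kept for debugging."""
    output: str
    steps: list[str] = field(default_factory=list)
    trace_id: str = ""

def stream_events(step_names):
    """Yield (type, data) tuples as each step completes, then a final event."""
    for name in step_names:
        yield ("step_completed", name)
    yield ("run_finished", step_names[-1])

events = list(stream_events(["research", "write"]))
# events == [("step_completed", "research"), ("step_completed", "write"), ("run_finished", "write")]
```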
## Configuration
Set defaults through environment variables, explicit constructor arguments, or a .nxagent.toml config file.
```toml
# .nxagent.toml
[llm]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key_env = "ANTHROPIC_API_KEY"

[logging]
level = "info"
trace = true
```
| Setting | Description |
|---|---|
| `provider` | The model provider used by default. |
| `model` | The default model for agents without an override. |
| `api_key_env` | Name of the environment variable containing the API key. |
| `trace` | Enable structured run traces for debugging. |
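The precedence between the three sources is the part worth internalizing: explicit constructor arguments win over environment variables, which win over config-file defaults. A sketch of that lookup order, where the `NXAGENT_` prefix and `FILE_DEFAULTS` dict are assumptions for illustration rather than NxAgent's documented behavior:

```python
import os

# Stand-in for values parsed from .nxagent.toml.
FILE_DEFAULTS = {"provider": "anthropic", "model": "claude-sonnet-4-20250514"}

def resolve_setting(name: str, explicit=None, env_prefix: str = "NXAGENT_"):
    """Precedence: explicit argument > environment variable > config file default."""
    if explicit is not None:
        return explicit
    env_value = os.environ.get(env_prefix + name.upper())
    if env_value is not None:
        return env_value
    return FILE_DEFAULTS.get(name)
```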
## Examples
Use these as starting points for real applications. Keep each example small, then add production concerns like retries, logging, and persistence.
| Example | Agents | Pattern |
|---|---|---|
| Research assistant | Researcher, reviewer, writer | Sequential workflow with tools. |
| Code reviewer | Analyzer, reviewer, summarizer | Tool-using workflow over repository files. |
| Report generator | Collector, analyst, writer | Multi-agent synthesis with final formatting. |
| Support assistant | Classifier, resolver, ticket creator | Dynamic routing based on user intent. |
```python
# Research assistant sketch
researcher = Agent(role="Researcher", goal="Find source material", tools=[web_search])
reviewer = Agent(role="Reviewer", goal="Check claims and identify gaps")
writer = Agent(role="Writer", goal="Create a concise brief")

workflow = Workflow(agents=[researcher, reviewer, writer], mode="sequential")
result = workflow.run("Summarize agent observability best practices")
```
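The support assistant row relies on dynamic routing. Stripped of LLM calls, the routing pattern looks like this; `classify` is a toy keyword classifier standing in for a model-based intent check, and the handler names are hypothetical:

```python
def classify(message: str) -> str:
    """Toy intent classifier; a real dynamic workflow would use an LLM call."""
    text = message.lower()
    if "ticket" in text or "track" in text:
        return "ticket"
    if "error" in text or "broken" in text:
        return "resolve"
    return "answer"

# Each intent maps to a handler; in NxAgent terms, each would be an agent.
ROUTES = {
    "ticket": lambda m: f"create_ticket: {m}",
    "resolve": lambda m: f"resolver: {m}",
    "answer": lambda m: f"answerer: {m}",
}

def route(message: str) -> str:
    return ROUTES[classify(message)](message)
```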
## API Reference
These are the primary public objects to start with. Keep application code close to this surface until deeper runtime features are needed.
| Object | Purpose |
|---|---|
| `Agent` | Defines a role, goal, model settings, tools, and memory. |
| `@tool` | Turns a typed Python function into an agent-callable tool. |
| `Workflow` | Coordinates one or more agents and controls execution mode. |
| `workflow.run()` | Runs a workflow once and returns a result object. |
| `workflow.stream()` | Streams runtime events for UI or observability surfaces. |
```python
from nx_agent import Agent, Workflow, tool

__all__ = [
    "Agent",
    "Workflow",
    "tool",
]
```