A composable .NET library for building LLM agent runtimes. Provider-agnostic orchestration, attribute-driven tool registration, and MCP hosting — built for internal development.
LLM orchestration, provider adapters, canonical message types, tool registry, schema generation, session context management, and the agentic loop. Multi-agent development is supported out of the box.
MCP server hosting. Exposes Saucer tools as Model Context Protocol endpoints. AI agents and IDEs discover and invoke tools through a standard protocol. Our own Grapple MCP runs on this.
DI registration, builder patterns, provider configuration, and tool scanning. Wires Saucer into the .NET generic host and ASP.NET service collection. Sets up complete Agent runtimes with a few lines of code.
Most AI orchestration libraries target Python or JavaScript. C# developers are left with thin SDK wrappers or framework ports that trail the originals by months. Our .NET work needed a first-class solution designed for the platform, one that reflects our opinions on how an LLM orchestration framework should work.
Our immediate goal was not to learn how another library implements access to LLMs; it was to gain the direct control we need and the opportunity to innovate that comes with it. When we need to step outside the box, as we often do, we do not want to dig through another vendor's source code to find out whether what we need is even supported.
AI is a rapidly evolving field. Coupling our infrastructure to a single provider's SDK would mean inheriting their decisions, their update cadence, and their deprecations. Saucer's approach lets the framework evolve readily, and its composable elements can be replaced independently when we need to.
Saucer's development is informed by decades of software architecture experience, which puts building our own framework well within reach. That experience also shows that framework designers often hide complexity to make their libraries more marketable, which invariably creates friction elsewhere. A hard lesson for many teams is that “hiding complexity creates complexity”. Maintaining our own vision of how an orchestration framework should behave, and favoring composition over unnecessary abstraction, lets us sidestep these issues. Put simply, we achieve immediate results without vendor lock-in or arm-wrestling with other libraries.
AgentTools are a core feature of Saucer.AI. Decorate a class with [AgentToolService] and its methods with [AgentTool], and they can be given to the LLM. In this sense, Saucer's AgentTools behave like ASP.NET controllers and are simple to implement. (If you can call it from code, so can the LLM.)
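As a rough illustration of how attribute-driven tool registration and schema generation can work, the sketch below reflects over a decorated method and emits a JSON-schema-style tool declaration. The types here (`AgentToolAttribute`, `ToolParamAttribute`, `ToolSchemaGenerator`, `DemoTools`) are simplified stand-ins for illustration, not Saucer's actual implementation.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-ins for Saucer's attributes (names assumed, not the real API).
[AttributeUsage(AttributeTargets.Method)]
public sealed class AgentToolAttribute : Attribute
{
    public string Name { get; set; } = "";
    public string Description { get; set; } = "";
}

[AttributeUsage(AttributeTargets.Parameter)]
public sealed class ToolParamAttribute : Attribute
{
    public ToolParamAttribute(string description) => Description = description;
    public string Description { get; }
}

// A minimal schema generator: reflects over a decorated method and emits
// the kind of JSON declaration an LLM tool definition needs.
public static class ToolSchemaGenerator
{
    public static string Describe(MethodInfo method)
    {
        var tool = method.GetCustomAttribute<AgentToolAttribute>()
                   ?? throw new InvalidOperationException("Not an [AgentTool] method.");
        var props = method.GetParameters().Select(p =>
        {
            var doc = p.GetCustomAttribute<ToolParamAttribute>()?.Description ?? "";
            var type = p.ParameterType == typeof(int) ? "integer" : "string";
            return $"\"{p.Name}\":{{\"type\":\"{type}\",\"description\":\"{doc}\"}}";
        });
        return "{\"name\":\"" + tool.Name + "\",\"description\":\"" + tool.Description +
               "\",\"parameters\":{" + string.Join(",", props) + "}}";
    }
}

// Example service in the style of the Criterion tools below.
public class DemoTools
{
    [AgentTool(Name = "search_films", Description = "Find films.")]
    public string Search([ToolParam("SQL WHERE clause")] string query,
                         [ToolParam("Max results")] int limit) => query;
}
```

Because the declaration is derived from the method signature, the tool's code and its LLM-facing contract cannot drift apart.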
Note that Saucer.MCP shares the same AgentTool infrastructure. Tools are portable across dev/test flows, callable locally, and readily exposed to or consumed from external MCP servers. Here, The Criterion Companion uses a local AgentTool registration to give the Cineguide LLM assistant access to the Criterion corpus.
```csharp
[AgentToolService("Criterion Catalog Tools")]
public class CriterionFilmTools
{
    private readonly ICriterionService _criterion;

    public CriterionFilmTools(ICriterionService criterion) => _criterion = criterion;

    [AgentTool(
        Name = "search_films",
        Description = "Finds Criterion Collection films using a WHERE clause.")]
    public async Task<List<FilmSummary>> SearchFilmsAsync(
        [ToolParam("SQL WHERE clause")] string query,
        [ToolParam("Max results")] int limit = 25)
    {
        return await _criterion.SearchAsync(query, limit);
    }
}
```
Registration is DI-native. Providers, tools, and orchestration are configured through the standard .NET service collection.
```csharp
// Register providers and tools
services.AddSaucerAI(ai =>
{
    ai.AddAnthropic(o => o.ApiKey = config["Anthropic:Key"]);
    ai.AddOpenAI(o => o.ApiKey = config["OpenAI:Key"]);
    ai.AddGemini(o => o.ApiKey = config["Gemini:Key"]);
});

// Scan and register tools from the assembly
services.AddAgentTools(tools =>
{
    tools.ScanAssembly(typeof(CriterionFilmTools));
});

// Configure orchestration
services.AddOrchestration(orch =>
{
    orch.DefaultProvider = "anthropic";
});
```
Multiple LLM providers coexist in the same application. The LlmProviderResolver routes requests based on the orchestration policy, and each provider's adapter handles the translation to and from the canonical message format.
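Routing of this kind reduces to a keyed lookup with a default. The sketch below assumes simplified shapes for `ILlmProvider` and `OrchestrationPolicy`; Saucer's real signatures differ, and `StubProvider` exists only for illustration.

```csharp
using System;
using System.Collections.Generic;

// Assumed, simplified shapes; not Saucer's real signatures.
public interface ILlmProvider
{
    string Name { get; }
}

public sealed class OrchestrationPolicy
{
    public string LlmProvider { get; set; } = "";
}

// Routes each request to the provider the policy names, falling back to a default.
public sealed class LlmProviderResolver
{
    private readonly Dictionary<string, ILlmProvider> _providers;
    private readonly string _default;

    public LlmProviderResolver(IEnumerable<ILlmProvider> providers, string defaultName)
    {
        _providers = new Dictionary<string, ILlmProvider>(StringComparer.OrdinalIgnoreCase);
        foreach (var p in providers) _providers[p.Name] = p;
        _default = defaultName;
    }

    public ILlmProvider Resolve(OrchestrationPolicy policy)
    {
        var key = string.IsNullOrEmpty(policy.LlmProvider) ? _default : policy.LlmProvider;
        return _providers.TryGetValue(key, out var p)
            ? p
            : throw new KeyNotFoundException($"No provider registered as '{key}'.");
    }
}

// A trivial provider used only to demonstrate the resolver.
public sealed class StubProvider : ILlmProvider
{
    public StubProvider(string name) => Name = name;
    public string Name { get; }
}
```

Because the policy carries the provider name, nothing upstream of the resolver needs to know which vendor will serve the request.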
The Orchestrator runs the agentic loop: send context to the LLM, receive a response, execute any tool calls, feed results back, repeat until the model completes. The full lifecycle is observable through IOrchestrationObserver events.
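Stripped of provider and observer plumbing, that loop can be sketched as follows. `ILlmBackend`, `LlmTurn`, `AgenticLoop`, and `ScriptedBackend` are invented names for illustration only, not Saucer's types.

```csharp
using System;
using System.Collections.Generic;

// Invented stand-ins for the orchestrator's collaborators.
public sealed class LlmTurn
{
    public string? Text { get; init; }
    public string? ToolCall { get; init; }   // tool name to invoke; null when the model is done
    public string? ToolArgs { get; init; }
}

public interface ILlmBackend
{
    LlmTurn Send(List<string> context);
}

public static class AgenticLoop
{
    // Send context, execute any tool call, feed the result back, repeat until
    // the model returns a final answer with no tool call.
    public static string Run(ILlmBackend llm,
                             Func<string, string, string> executeTool,
                             List<string> context,
                             int maxTurns = 8)
    {
        for (var turn = 0; turn < maxTurns; turn++)
        {
            var reply = llm.Send(context);
            if (reply.ToolCall is null)
                return reply.Text ?? "";                        // model completed
            var result = executeTool(reply.ToolCall, reply.ToolArgs ?? "");
            context.Add($"tool:{reply.ToolCall} -> {result}");  // feed result back
        }
        throw new InvalidOperationException("Agentic loop did not converge.");
    }
}

// A scripted backend: first asks for a tool, then finishes using the tool result.
public sealed class ScriptedBackend : ILlmBackend
{
    private int _calls;
    public LlmTurn Send(List<string> context) =>
        _calls++ == 0
            ? new LlmTurn { ToolCall = "search_films", ToolArgs = "year > 1960" }
            : new LlmTurn { Text = $"Found via {context[^1]}" };
}
```

The cap on turns is the safety valve every agentic loop needs: a model that keeps requesting tools will otherwise never terminate.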
Orchestration policies are per-request. Different conversations can use different providers, different tool sets, and different system prompts within the same application.
All session context is stored in canonical types. Switch providers between sessions or mid-orchestration without losing conversation history.
Every LLM API has its own message format. Anthropic uses content blocks, OpenAI uses message arrays, Gemini uses parts. Saucer normalizes all of these into FrameworkMessage and ContentPart types that are provider-independent.
When a request goes out, the provider's RequestConverter translates from canonical to vendor format. When a response comes back, the ResponseConverter translates back. The session context never sees a vendor-specific type.
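A minimal sketch of that boundary, using illustrative stand-ins for `FrameworkMessage` and `ContentPart` plus one Anthropic-style converter pair; the wire format shown is simplified, not the real API shape.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal canonical message: role plus provider-neutral content parts.
public sealed record ContentPart(string Kind, string Value);   // Kind: "text", "tool_result", ...
public sealed record FrameworkMessage(string Role, IReadOnlyList<ContentPart> Parts);

// Outbound half of an adapter: canonical messages -> an Anthropic-style
// content-block payload. (Illustrative shape only.)
public static class AnthropicRequestConverter
{
    public static string ToVendorJson(IEnumerable<FrameworkMessage> messages)
    {
        var items = messages.Select(m =>
        {
            var blocks = string.Join(",", m.Parts.Select(p =>
                $"{{\"type\":\"{p.Kind}\",\"text\":\"{p.Value}\"}}"));
            return $"{{\"role\":\"{m.Role}\",\"content\":[{blocks}]}}";
        });
        return $"[{string.Join(",", items)}]";
    }
}

// Inbound half: vendor response text back into a canonical message,
// so the session context never stores a vendor-specific type.
public static class AnthropicResponseConverter
{
    public static FrameworkMessage ToCanonical(string role, string text) =>
        new(role, new[] { new ContentPart("text", text) });
}
```

A second provider would supply its own converter pair against the same canonical types, which is what makes the session history portable.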
This means a session that started on Claude can continue on GPT-4 or Gemini. Rate limited by one provider? Switch to another mid-turn. Need to A/B test models? Same context, different policy.
```csharp
// Switch providers per-request or mid-session
var policy = new OrchestrationPolicy
{
    LlmProvider = "openai",
    SystemPrompt = systemPrompt,
    Tools = toolRegistry.GetTools(),
    Observer = new OrchestrationObserver()
};

// Same session context, different provider
// No data loss, no context reset
var result = await orchestrator
    .RunAsync(session, policy, ct);
```
Semantic Kernel is tightly coupled to the OpenAI API shape. LangChain is Python-first with a C# port that lags behind. Both impose their abstractions on your code. Saucer inverts this — tools are plain C# methods, and every orchestration component is readily composable. (We also need provider-agnostic, portable session context that survives vendor switches, which neither supports natively.)
Saucer is an internal library. Making it public would mean maintaining it for external consumers — documentation, support, backward compatibility — which would slow down development. We share the architecture and approach here because the design decisions are valuable to many on their own.
Anthropic (Claude), OpenAI (GPT, o-series), and Google Gemini. Adding a provider means implementing ILlmProvider and writing a request/response converter. Custom LLM providers can even be added without rebuilding the library. (The adapter pattern makes this straightforward; the canonical FrameworkMessage types handle the translation.)
All conversation history is stored in canonical FrameworkMessage types, not vendor-specific formats. When you switch providers, the adapter converts the canonical context to the target API format. This works between calls, between sessions, or (theoretically) even mid-orchestration, which is useful for 429 rate-limit recovery or cost routing. Imagine deciding to move to Anthropic when you have thousands of sessions already active on OpenAI.
They share the same AgentTools system. A method decorated with [AgentTool] can be invoked by the LLM orchestrator (Saucer.AI), exposed as an MCP tool (Saucer.MCP), called locally in application code, or mocked in tests. One definition, multiple execution contexts.
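The sketch below shows one such definition exercised in two of those contexts: called directly from application code with its dependency mocked, and dispatched by tool name the way an orchestrator would. All names (`AgentToolAttribute`, `ICatalogService`, `ToolDispatcher`, `FakeCatalog`) are illustrative, not Saucer's API.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-in for Saucer's attribute.
[AttributeUsage(AttributeTargets.Method)]
public sealed class AgentToolAttribute : Attribute
{
    public string Name { get; set; } = "";
}

public interface ICatalogService { int Count(string where); }

// One tool definition: a plain method on a plain class with an injected dependency.
public class CatalogTools
{
    private readonly ICatalogService _catalog;
    public CatalogTools(ICatalogService catalog) => _catalog = catalog;

    [AgentTool(Name = "count_films")]
    public int CountFilms(string where) => _catalog.Count(where);
}

// Orchestrator-style dispatch: look the method up by its tool name and invoke it.
public static class ToolDispatcher
{
    public static object? Invoke(object service, string toolName, params object[] args)
    {
        var method = service.GetType().GetMethods()
            .First(m => m.GetCustomAttribute<AgentToolAttribute>()?.Name == toolName);
        return method.Invoke(service, args);
    }
}

// A test double for the dependency: no LLM, no MCP server required.
public sealed class FakeCatalog : ICatalogService
{
    public int Count(string where) => 42;
}
```

Because the tool is just a method, the test double swaps in through ordinary constructor injection; nothing about the LLM or MCP plumbing leaks into the definition.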
We are considering making this available to future partners, but Saucer.AI is not available as a public package. As much as we would love for external teams to have access to it, the overhead of sharing the IP is currently out of scope.
Availability: Saucer is developed and used internally at The Martian Workshop. The architecture and patterns described here are shared for informative purposes.