
APAC LLM Framework Guide 2026: Semantic Kernel, DSPy, and Guidance Compared

A practitioner guide for APAC AI engineering teams selecting specialized LLM frameworks in 2026. It covers three tools: Microsoft Semantic Kernel for enterprise .NET and Python AI orchestration, with typed plugin functions, planner-based automatic plugin selection, and Azure OpenAI integration for APAC data residency; Stanford DSPy for programmatic LLM pipeline optimization, using declarative module signatures and MIPRO optimizers that tune prompts and few-shot examples from labeled APAC datasets instead of manual prompt engineering; and Microsoft Guidance for token-level constrained generation, which prevents an LLM from producing tokens that violate a JSON schema, enum set, or regex pattern, eliminating parse errors in APAC structured extraction pipelines.

By AIMenta Editorial Team

Three Distinct Problems, Three Distinct APAC LLM Frameworks

The LLM framework landscape fragments into tools solving different problems. Semantic Kernel solves APAC enterprise orchestration — how to compose multiple LLM calls, plugins, and memory into an enterprise agent on the Microsoft stack. DSPy solves APAC prompt engineering obsolescence — how to optimize LLM pipeline quality automatically from labeled data rather than manually. Guidance solves APAC structured generation reliability — how to guarantee LLM output conforms to a schema without retry loops. APAC teams should select based on which problem is most acute.

Semantic Kernel — Microsoft enterprise AI SDK for .NET and Python with plugin architecture and planner-based agent orchestration.

DSPy — Stanford framework for declarative LLM pipeline composition with automated prompt optimization using labeled APAC data.

Guidance — Microsoft constrained generation library enforcing JSON schemas and regex patterns at the token level for APAC structured extraction.


APAC LLM Framework Selection Matrix

Problem                               → Framework        → Mechanism

APAC .NET enterprise AI agent         → Semantic Kernel  → Plugin discovery,
(Azure stack, Copilot, Office)                             SK Planner, Azure OpenAI

APAC prompt quality degrading         → DSPy             → Optimizer tunes prompts
(manual prompts brittle, inaccurate)                       from APAC training data

APAC JSON parse errors in prod        → Guidance         → Token-level grammar
(structured extraction failing)                            constraint enforcement

APAC Python production LLM app        → PydanticAI       → Pydantic validation + DI
(type safety, testability)                                 (see agent frameworks guide)

APAC multi-agent collaboration        → AutoGen          → GroupChat + human oversight
(complex multi-step workflows)                             (see agent frameworks guide)

Semantic Kernel: APAC Enterprise Agent Architecture

SK APAC plugin definition

// APAC: Semantic Kernel plugin in C# — CRM data retrieval skill

using System.ComponentModel;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class ApacCRMPlugin
{
    private readonly IApacCRMService _apacCRMService;

    public ApacCRMPlugin(IApacCRMService apacCRMService)
    {
        _apacCRMService = apacCRMService;
    }

    [KernelFunction, Description("Get APAC customer account details by company name")]
    public async Task<string> GetApacAccountAsync(
        [Description("APAC company name to look up")] string companyName)
    {
        var apacAccount = await _apacCRMService.FindByNameAsync(companyName);
        if (apacAccount == null) return $"No APAC account found for: {companyName}";
        return $"""
            Company: {apacAccount.Name}
            Market: {apacAccount.Market}
            Status: {apacAccount.Status}
            Last Contact: {apacAccount.LastContactDate:d}
            APAC Revenue YTD: {apacAccount.RevenueYtd:C}
            """;
    }

    [KernelFunction, Description("Get open APAC opportunities for a customer account")]
    public async Task<string> GetApacOpportunitiesAsync(
        [Description("APAC account ID")] string accountId)
    {
        var opps = await _apacCRMService.GetOpportunitiesAsync(accountId);
        return string.Join("\n", opps.Select(o =>
            $"- {o.Name}: {o.Stage} | {o.Value:C} | Close: {o.CloseDate:d}"));
    }
}

SK APAC agent with planner

// APAC: Semantic Kernel agent with auto-function calling

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var apacKernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",
        endpoint: "https://apac-openai.openai.azure.com",
        apiKey: "APAC_AZURE_KEY"
    )
    .Build();

// APAC: Register plugins (ApacCRMPlugin's IApacCRMService constructor dependency
// must be resolvable from the kernel builder's service collection)
apacKernel.ImportPluginFromType<ApacCRMPlugin>();
apacKernel.ImportPluginFromType<ApacEmailPlugin>();

// APAC: Chat with auto function calling — SK planner selects plugins
var apacChatHistory = new ChatHistory();
apacChatHistory.AddUserMessage(
    "Prepare a meeting summary for TechCorp Asia — show their account status, " +
    "open opportunities, and draft a follow-up email based on last quarter's activity."
);

var apacResult = await apacKernel.GetRequiredService<IChatCompletionService>()
    .GetChatMessageContentAsync(
        apacChatHistory,
        new AzureOpenAIPromptExecutionSettings {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        },
        apacKernel
    );
// APAC: SK auto-calls GetApacAccount → GetApacOpportunities → DraftApacEmail
// combining plugin outputs into coherent APAC meeting summary response
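Under the hood, auto function calling is a loop: the model either requests a tool call or returns a final answer; the runtime invokes the requested plugin, appends the result to the chat history, and calls the model again. A framework-neutral sketch of that loop in Python, with hypothetical stubs standing in for the model and a CRM plugin (this is illustrative, not Semantic Kernel's actual implementation):

```python
# Sketch of an auto-invoke tool loop. `fake_model` and `get_account`
# are hypothetical stubs, not real SK or Azure OpenAI APIs.

def get_account(company_name):
    # Stub standing in for a CRM plugin function
    return f"Company: {company_name} | Status: Active"

TOOLS = {"get_account": get_account}

def fake_model(history):
    # Stub model: requests one tool call, then produces a final answer
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "get_account", "args": {"company_name": "TechCorp Asia"}}
    return {"answer": "TechCorp Asia is Active; drafting follow-up email."}

def auto_invoke_loop(user_message, max_steps=5):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = fake_model(history)
        if "answer" in reply:                           # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # runtime invokes plugin
        history.append({"role": "tool", "content": result})
    raise RuntimeError("tool loop did not terminate")

print(auto_invoke_loop("Prepare a meeting summary for TechCorp Asia"))
```

SK's `AutoInvokeKernelFunctions` setting runs this loop for you; the plugin `Description` attributes are what the model sees when deciding which function to request.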

DSPy: APAC Programmatic Prompt Optimization

DSPy APAC pipeline definition

# APAC: DSPy pipeline — RAG with automatic prompt optimization

import dspy

# APAC: Configure LM (dspy.Retrieve also needs a retrieval model, rm=..., configured)
apac_lm = dspy.LM("openai/gpt-4o-mini", api_key="APAC_KEY")
dspy.configure(lm=apac_lm)

# APAC: Define typed signatures — no prompt strings written manually
class ApacRAGSignature(dspy.Signature):
    """Answer APAC enterprise AI questions using retrieved context."""
    apac_question: str = dspy.InputField()
    apac_context: list[str] = dspy.InputField(desc="Retrieved APAC document passages")
    apac_answer: str = dspy.OutputField(desc="Accurate APAC answer citing sources")
    apac_confidence: float = dspy.OutputField(desc="Confidence 0.0-1.0")

class ApacRAGModule(dspy.Module):
    def __init__(self):
        super().__init__()  # APAC: required for dspy.Module parameter tracking
        self.apac_retriever = dspy.Retrieve(k=5)
        self.apac_reader = dspy.Predict(ApacRAGSignature)

    def forward(self, apac_question: str):
        apac_passages = self.apac_retriever(apac_question).passages
        return self.apac_reader(
            apac_question=apac_question,
            apac_context=apac_passages,
        )

apac_rag = ApacRAGModule()
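The point of a signature is that the prompt text is derived from the declared fields rather than written by hand. A deliberately simplified illustration of that idea, using the field names from the signature above (the rendering logic here is illustrative; DSPy's actual adapters are more elaborate):

```python
# Simplified sketch: rendering a declarative signature into a prompt.
# Field names match the ApacRAGSignature example; the renderer itself
# is a toy, not DSPy's internal adapter.

def render_prompt(instructions, input_fields, output_fields, inputs):
    lines = [instructions, ""]
    for name in input_fields:
        lines.append(f"{name}: {inputs[name]}")
    for name in output_fields:
        lines.append(f"{name}:")          # left blank for the model to fill
    return "\n".join(lines)

prompt = render_prompt(
    "Answer APAC enterprise AI questions using retrieved context.",
    ["apac_question", "apac_context"],
    ["apac_answer", "apac_confidence"],
    {"apac_question": "What is MAS guidance on AI?",
     "apac_context": ["passage 1", "passage 2"]},
)
print(prompt)
```

Because the prompt is generated, an optimizer can rewrite the instruction line and inject few-shot demos without touching your pipeline code.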

DSPy APAC optimizer — automated prompt tuning

# APAC: DSPy MIPRO optimizer — tune prompts from labeled examples

from dspy.teleprompt import MIPROv2

# APAC: Labeled training set (50-200 examples)
apac_trainset = [
    dspy.Example(
        apac_question="What APAC regulations govern LLM deployment in Singapore?",
        apac_answer="MAS Notice on AI Fairness and Transparency..."
    ).with_inputs("apac_question"),
    # ... 49 more APAC examples
]

# APAC: Metric function — what "correct" means for your APAC task
def apac_accuracy_metric(example, pred, trace=None):
    return float(example.apac_answer.lower() in pred.apac_answer.lower())

# APAC: MIPRO optimizer finds best prompts + few-shot examples
apac_optimizer = MIPROv2(metric=apac_accuracy_metric, num_candidates=10)
apac_optimized_rag = apac_optimizer.compile(
    apac_rag,
    trainset=apac_trainset,
    num_trials=20,          # APAC: optimization trials (each trial makes LLM calls)
    max_bootstrapped_demos=3,
)

# APAC: Optimized pipeline has better prompts than hand-crafted
# APAC: Save and deploy the optimized program
apac_optimized_rag.save("apac_rag_optimized.json")
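The optimizer's outer loop is easy to state: propose candidate instructions, score each against the labeled set with the task metric, keep the best. MIPRO is far more sophisticated (it searches over instructions and bootstrapped demos jointly), but the skeleton can be sketched with a stub "LLM" standing in for real calls:

```python
# Toy sketch of the optimize-by-metric loop. `toy_pipeline` is a stub
# that answers correctly only when the instruction mentions citing;
# real optimizers call an actual LLM per candidate per example.

def toy_pipeline(instruction, question):
    if "cite" in instruction:
        return "MAS Notice on AI Fairness and Transparency"
    return "I am not sure."

trainset = [("What governs LLM deployment in Singapore?",
             "mas notice on ai fairness and transparency")]

def metric(gold, pred):
    # Same shape as the accuracy metric above: substring match on the answer
    return float(gold in pred.lower())

candidates = [
    "Answer the question.",
    "Answer the question and cite the relevant regulation.",
]

best = max(
    candidates,
    key=lambda inst: sum(metric(gold, toy_pipeline(inst, q))
                         for q, gold in trainset),
)
print(best)  # the citing instruction scores higher
```

This is also why the metric function matters so much: the optimizer can only improve what the metric measures.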

Guidance: APAC Zero-Parse-Error Structured Extraction

Guidance APAC JSON schema enforcement

# APAC: Guidance — constrained APAC entity extraction

from guidance import models, gen, select
import guidance

# APAC: Local model (token-level constraints work fully)
apac_model = models.LlamaCpp(
    model="/models/Llama-3.1-8B-Instruct.gguf",
    n_ctx=4096
)

# APAC: Guidance program — constrained JSON generation
@guidance
def apac_extract_company(lm, apac_document_text: str):
    lm += f"""Extract company information from this APAC document:

{apac_document_text}

Output JSON:
{{
  "apac_company_name": "{gen('name', stop='"')}",
  "apac_market": "{select(['SG', 'HK', 'TW', 'JP', 'KR', 'MY', 'VN', 'ID', 'CN'], name='market')}",
  "apac_employee_count": {gen('employees', regex='[0-9]+')},
  "apac_founded_year": {gen('year', regex='(19|20)[0-9]{2}')},
  "apac_ai_ready": {select(['true', 'false'], name='ai_ready')}
}}"""
    return lm

# APAC: Execute — LLM cannot produce invalid market codes or non-numeric employees
apac_doc = "TechCorp Asia, founded in 2019, operates in Singapore with 450 staff..."
apac_result = apac_extract_company(apac_model, apac_document_text=apac_doc)

# APAC: Guaranteed valid output — market is always one of the enum values
print(apac_result["market"])   # → "SG"
print(apac_result["employees"]) # → "450" (always numeric)
# APAC: No try/except JSON parse — cannot fail

Guidance APAC vs. Instructor pattern comparison

# APAC: Instructor pattern (Pydantic validation + retry)
import instructor
from typing import Literal
from pydantic import BaseModel
from openai import OpenAI

class ApacCompany(BaseModel):
    apac_company_name: str
    apac_market: Literal["SG", "HK", "TW", "JP", "KR", "MY", "VN", "ID", "CN"]
    apac_employee_count: int

apac_client = instructor.from_openai(OpenAI(api_key="APAC_KEY"))
# APAC: Instructor calls LLM → validates output → retries if invalid
# Success rate: ~95-99% (occasional retry for malformed JSON)

# APAC: Guidance pattern (token-level constraint)
# APAC: Guidance intercepts token generation → masks invalid tokens
# Success rate: 100% (invalid tokens physically impossible)

# APAC: Tradeoff:
# Instructor: works with any LLM API (OpenAI, Anthropic, etc.)
# Guidance: requires logprob access (local models or special API endpoints)
# APAC recommendation: Instructor for API models, Guidance for local APAC models
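The "masks invalid tokens" mechanism can be shown concretely: at each decoding step, only tokens that keep the output a prefix of some schema-valid value survive. Guidance applies this against the model's logits; the sketch below just filters candidate characters for the market-code enum from the example above (illustrative only, not Guidance's implementation):

```python
# Minimal sketch of token-level constraint enforcement for an enum field.
# Only characters that extend a prefix of an allowed value are decodable,
# so even an adversarial scorer cannot produce an invalid market code.

ALLOWED_MARKETS = ["SG", "HK", "TW", "JP", "KR", "MY", "VN", "ID", "CN"]

def allowed_next_chars(partial):
    # Characters that keep `partial` a prefix of at least one allowed value
    return {v[len(partial)] for v in ALLOWED_MARKETS
            if v.startswith(partial) and len(v) > len(partial)}

def constrained_decode(score_char):
    out = ""
    while out not in ALLOWED_MARKETS:
        mask = allowed_next_chars(out)       # invalid continuations masked out
        out += max(mask, key=score_char)     # greedy pick among survivors
    return out

# "Model" that prefers 'S' then 'G'
result = constrained_decode(lambda c: {"S": 2.0, "G": 1.5}.get(c, 0.0))
print(result)  # → SG
```

Whatever scores the model assigns, the decoded value is always one of the nine enum codes; that is the sense in which parse failures become impossible rather than merely unlikely.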

APAC Framework Convergence Pattern

APAC Production LLM Pipeline Architecture (combining frameworks):

Layer 1 — APAC Orchestration (Semantic Kernel / LangChain):
  → Plugin/tool selection, memory retrieval, conversation management

Layer 2 — APAC Pipeline Optimization (DSPy):
  → Tune retrieval + reader + reranker prompts from APAC labeled data
  → Re-optimize when switching APAC models or accuracy degrades

Layer 3 — APAC Output Reliability (Guidance / Instructor / PydanticAI):
  → Enforce APAC structured output schema at generation or validation layer
  → Eliminate retry loops for APAC document processing pipelines

APAC teams: start with Layer 1 (get working), add Layer 3 (fix parse errors),
add Layer 2 only when APAC accuracy plateaus and manual prompt tuning fails.
DSPy optimization is not a day-one investment — it requires APAC labeled data.

Related APAC LLM Engineering Resources

For the AI agent frameworks (AutoGen, PydanticAI, smolagents) that build on top of these APAC LLM libraries for multi-step agent orchestration, see the APAC AI agent frameworks guide.

For Instructor, the structured output alternative (Pydantic validation with the OpenAI and Anthropic APIs), see the APAC RAG engineering guide.

For the LLM evaluation tools (DSPy metrics, DeepEval, Ragas) that measure APAC pipeline quality to guide DSPy optimization, see the APAC LLM evaluation guide.

Beyond this insight

If this article matches your stage of thinking, the underlying capabilities ship across all six pillars, ten verticals, and nine Asian markets.


Want this applied to your firm?

We use these frameworks daily in client engagements. Let's see what they look like for your stage and market.