
Continue

by Continue.dev

Open-source AI coding extension for VS Code and JetBrains IDEs — connects to any LLM backend (Claude, GPT-4, Gemini, or self-hosted models via Ollama and vLLM) for autocomplete, chat-based code generation, and context-aware refactoring within the editor. The leading open-source alternative to GitHub Copilot, with support for self-hosted APAC LLM endpoints.

AIMenta verdict
Recommended
5/5

"Open-source AI coding extension for VS Code and JetBrains — connects to Claude, GPT-4, Ollama, or vLLM for inline completion and context-aware refactoring. Supports self-hosted APAC LLMs for data-sovereign coding assistance in APAC financial services environments."

What it does

Key features

  • VS Code and JetBrains support — engineering teams stay in their IDE of choice
  • Self-hosted LLM support — connect to Ollama, vLLM, or LiteLLM for data sovereignty in APAC deployments
  • Inline autocomplete — continuous code suggestions as engineers type
  • @ mention context — @file, @codebase, @git-diff for precise context control
  • Chat-based code generation — natural language → code within the IDE
  • Open-source and configurable — teams customize model selection per use case
When to reach for it

Best for

  • APAC engineering teams wanting GitHub Copilot functionality with self-hosted LLMs — Continue provides the same IDE integration but routes completions to APAC-hosted Ollama or vLLM endpoints
  • Financial services and other regulated-industry developers in APAC who cannot send proprietary source code to US-hosted AI providers — Continue's Ollama/vLLM backends keep code within APAC infrastructure
  • Development teams standardizing on open-source tooling — Continue's open-source model allows customization and audit of the extension's behavior
Don't get burned

Limitations to know

  • ! Autocomplete quality depends on the self-hosted model — open-weight code models (StarCoder2 7B, DeepSeek Coder 7B) are capable but lag behind GPT-4o or Claude 3.5 Sonnet on complex code generation
  • ! Setup complexity for self-hosted backends — configuring Ollama or vLLM endpoints requires platform team involvement; less turnkey than GitHub Copilot
  • ! Context window limits for codebase search — large monorepos may exceed the context window of smaller self-hosted models, reducing codebase search accuracy
Context

About Continue

Continue is an open-source IDE extension for VS Code and JetBrains IDEs (IntelliJ, PyCharm, WebStorm) that provides GitHub Copilot-style AI coding assistance — inline autocomplete, chat-based code generation, and context-aware refactoring. Unlike Copilot, it supports any LLM backend through a flexible provider configuration, including self-hosted inference endpoints (Ollama, vLLM, LiteLLM) that keep source code within APAC-controlled infrastructure.

In Continue's LLM provider configuration, engineering teams specify model providers and endpoints in a `config.json` file. It supports OpenAI-compatible APIs, including Ollama running locally, vLLM running in a Kubernetes cluster, and LiteLLM proxying to approved model providers. This lets APAC financial services and enterprise engineering teams use AI coding assistance without routing proprietary source code through US-hosted AI provider APIs.
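As a rough sketch, a `config.json` routing chat to a local Ollama model and to a vLLM endpoint exposed through its OpenAI-compatible API might look like the following. The model names, endpoint URL, and API key are illustrative, and the exact schema can vary across Continue versions:

```json
{
  "models": [
    {
      "title": "DeepSeek Coder (local Ollama)",
      "provider": "ollama",
      "model": "deepseek-coder:6.7b"
    },
    {
      "title": "Qwen2.5-Coder (in-cluster vLLM)",
      "provider": "openai",
      "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
      "apiBase": "http://vllm.internal.example:8000/v1",
      "apiKey": "EMPTY"
    }
  ]
}
```

Because vLLM serves an OpenAI-compatible API, it is configured through the `openai` provider with `apiBase` pointed at the in-cluster endpoint — so code never leaves the cluster.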

Continue's @ mention context system lets engineers type `@file`, `@codebase`, `@git-diff`, or `@docs` in the chat panel to attach specific source files, search the codebase for relevant context, attach the current git diff for review, or load documentation from internal knowledge bases. This gives the LLM precise, structured context instead of relying on the model to infer what's relevant.
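For example, a single chat request can combine several context providers — here a file mention plus the current diff (the file path and task are illustrative):

```text
@file src/payments/fx_rates.py @git-diff
Refactor the rate-lookup function touched in this diff to cache
results per currency pair. Keep the public function signature unchanged.
```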

Continue's autocomplete feature requests completions from a fast autocomplete model (StarCoder2, DeepSeek Coder, or Qwen2.5-Coder via Ollama) as engineers type, displaying multi-line code completions inline in the editor. This provides continuous coding assistance without manually triggering generation — similar to GitHub Copilot, but with completions routed to self-hosted models.
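The autocomplete model is configured separately from the chat models. A minimal sketch pointing tab completion at a small StarCoder2 model served by local Ollama (model tag illustrative; schema may differ between Continue versions):

```json
{
  "tabAutocompleteModel": {
    "title": "StarCoder2 3B",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

Using a small, fast model here is deliberate: autocomplete fires on every keystroke pause, so latency matters more than raw capability.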

Beyond this tool

Where this tool category meets hands-on practice.

A tool only matters in context. Browse the service pillars that operationalise it, the industries where it ships, and the Asian markets where AIMenta runs adoption programs.