AIMenta

Curated weekly · 5 tools · 30 categories

The AI tool landscape, curated & ranked.

Each entry includes pricing, use cases, limitations, and an AIMenta editorial verdict — so you can spend less time evaluating and more time deploying.

Browse by category

💬 Chat & assistants (8) · General-purpose conversational AI
🧠 Foundation model APIs (9) · Programmatic LLM access
⌨️ Code assistants (7) · IDE copilots & autocomplete
Code generation platforms (4) · Build apps from prompts
🤖 Agent platforms (6) · Autonomous LLM workflows
🗂️ RAG & vector databases (7) · Retrieval infrastructure
🎨 Image generation (8) · Text-to-image models
🖼️ Image editing & enhancement (5) · AI-powered photo workflows
🎬 Video generation (6) · Text-to-video models
📹 Video editing & repurposing (5) · Cuts, captions, clips
🎵 Audio & music (3) · Generative music
🗣️ Voice & TTS (3) · Text-to-speech & voice cloning
📝 Transcription & STT (6) · Speech-to-text
🌐 Translation (2) · Multilingual AI
🔍 Search & research (6) · Answer engines
📋 Meetings & note-taking (5) · Recording, transcripts, summaries
✍️ Writing assistants (4) · Marketing & long-form copy
📊 Presentations (4) · AI slide builders
🎯 Design & UI (5) · AI for designers
📈 Data analysis (5) · Talk to your data
📄 PDF & document AI (2) · Chat with documents
💼 CRM & sales AI (5) · Pipeline & call intelligence
📣 Marketing AI (5) · Campaigns, personalization, ops
🎧 Customer support (4) · AI agents for support
🔗 Workflow automation (4) · Connect AI to your stack
☁️ LLM hosting & inference (6) · Serve open-weight models
⚙️ ML platforms & ops (5) · Experiment tracking & MLOps
🛡️ AI safety & guardrails (4) · Production controls
📚 Knowledge management (4) · AI-native notes & wikis
📡 AI observability (6) · LLM monitoring & evals
All tools

5 matching tools for "self-hosted"


Dify · LangGenius Inc.
Verdict: Decent fit

Dify is an open-source LLM application development platform that combines visual workflow building, RAG pipeline configuration, AI agent construction, and LLM application monitoring in a single interface. It is available as a self-hosted deployment (Docker Compose or Kubernetes) or as Dify Cloud, a managed SaaS. Dify has become one of the most popular AI application development platforms in APAC — particularly in China, Japan, and Singapore — thanks to its strong Chinese-language documentation, active community, and self-hosting support for data residency compliance. For APAC technology companies and enterprise teams whose developers want to build LLM-powered applications faster than assembling them from scratch with LangChain, yet with more control than no-code tools like Coze offer, Dify occupies an important middle ground in the AI application development landscape.

Freemium · API · Free tier · Self-host
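A self-hosted Dify deployment exposes the same app API as Dify Cloud. A minimal stdlib-only sketch of calling a Dify app's chat endpoint is shown below; the `/v1/chat-messages` route and payload fields follow Dify's published API docs, while the base URL, the `app-xxxx` key, and the user ID are placeholders for your own deployment:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, query: str, user: str) -> urllib.request.Request:
    """Build a blocking chat-messages request for a Dify app.

    Payload shape per Dify's app API; base_url and api_key are
    placeholders for your own self-hosted instance.
    """
    payload = {
        "inputs": {},                  # app-level input variables, if any
        "query": query,                # the end-user message
        "response_mode": "blocking",   # return one JSON response, no streaming
        "user": user,                  # stable end-user identifier
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example against a local Docker Compose deployment (not sent here):
req = build_chat_request("http://localhost", "app-xxxx",
                         "Summarise our refund policy", "user-123")
# urllib.request.urlopen(req) would dispatch it to a running instance.
```

Keeping the request construction separate from the network call makes the payload easy to unit-test before pointing it at a live deployment.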

Google Gemma · Google DeepMind
Verdict: Decent fit

Google Gemma is Google DeepMind's open-weights model family, designed as a lightweight, efficient alternative to the full Gemini product line for self-hosted deployments. Gemma 2 models (2B, 9B, 27B) offer strong quality-per-parameter performance — Gemma 2 27B is competitive with Llama 3 70B while requiring less GPU memory due to architectural optimisations. For APAC enterprises on Google Cloud Platform, Gemma has a practical infrastructure advantage: native integration with Vertex AI Model Garden enables managed deployment with GCP-native IAM, monitoring, and serving infrastructure, without the operational overhead of self-managing model serving. Gemma models are released under the Gemma Terms of Use, which permit commercial use, and are particularly well supported in the Keras and JAX ecosystems.

Open source · Free tier · Self-host
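The "less GPU memory" point can be made concrete with a back-of-envelope estimate. The rule of thumb below (weights need roughly parameters × bits ÷ 8 bytes, ignoring KV cache and activations) is a rough sizing heuristic, not a vendor figure:

```python
def approx_weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough rule of thumb for model weight memory (decimal GB).

    Weights alone need params * bits / 8 bytes; KV cache, activations,
    and serving overhead add more on top, so treat this as a floor.
    """
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Gemma 2 27B, weights only:
fp16 = approx_weight_memory_gb(27, 16)  # ~54 GB at 16-bit precision
q4 = approx_weight_memory_gb(27, 4)     # ~13.5 GB at 4-bit quantisation
```

By the same heuristic a 70B model at 16-bit needs roughly 140 GB for weights alone, which is why the 27B class fits on far cheaper GPU configurations.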

Meta Llama 3 · Meta
Verdict: Recommended

Meta Llama 3 is the world's most widely used open-source large language model family, with weights released under Meta's custom community licence (free for most commercial use up to 700M monthly active users). The Llama 3 family ranges from Llama 3.2 1B/3B (lightweight edge deployment) to Llama 3.3 70B (competitive with GPT-4o mini) to Llama 3.1 405B (frontier-class). For APAC enterprises, Llama 3 is the default option for use cases requiring on-premises deployment, data sovereignty controls, or product integration without per-call API costs. Llama 3 is available via major cloud providers (AWS Bedrock, Azure AI, Google Vertex AI) for managed hosting, or can be self-hosted via Ollama, vLLM, or Hugging Face Transformers on GPU infrastructure.

Open source · Free tier · Self-host
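For the Ollama route, a stdlib-only sketch of a local, non-streaming request is shown below; the `/api/generate` path, default port 11434, and payload fields follow Ollama's public API docs, and the `llama3.2` model tag is assumed to have been pulled already (`ollama pull llama3.2`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server.

    stream=False asks for a single JSON response instead of
    newline-delimited streaming chunks.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3.2", "Explain data residency in one sentence.")
# urllib.request.urlopen(req) requires a running Ollama instance.
```

The same pattern works against vLLM by swapping in its OpenAI-compatible endpoint and chat-completions payload.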

Mistral AI · Mistral AI
Verdict: Recommended

Mistral AI is a French AI company producing both proprietary frontier models (Mistral Large, Mistral Medium) and open-weights models (Mistral 7B, Mixtral 8×7B, Mixtral 8×22B) available for self-hosting. Mistral Large competes with GPT-4o and Claude 3.5 Sonnet on general reasoning and multilingual tasks, with particularly strong French, German, Italian, and Spanish performance alongside English. For APAC enterprises, Mistral's appeal is twofold: the open-weights models can be self-hosted on-premises or in APAC regional cloud instances, avoiding US-routed API calls for data-sensitive use cases; and Mistral's EU-based infrastructure provides an alternative to US-origin cloud AI for enterprises with European data sovereignty requirements.

Usage-based · API · Free tier · Self-host

MLflow · Linux Foundation / Databricks
Verdict: Recommended

Open-source ML lifecycle platform. The de facto standard when self-hosted experiment tracking is required, especially for Databricks customers.

Open source · Free OSS; Databricks managed · API · Free tier · Self-host
ML platforms & ops
Vendor-neutral by design

Need help choosing the right stack?

We help APAC enterprises design AI tool stacks that match their data, compliance, and budget realities — not vendor decks.