AIMenta

Category · 21 terms

Machine Learning
defined clearly.

Algorithms and methods that learn patterns from data: supervised, unsupervised, and reinforcement learning.

Adam Optimizer · intermediate

An adaptive-learning-rate optimiser combining momentum with per-parameter scaling — the default choice for training Transformers and most deep networks.
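A minimal pure-Python sketch of the update rule for one parameter — the hyperparameter values are the common defaults, and the quadratic objective is purely illustrative:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter."""
    m = beta1 * m + (1 - beta1) * grad       # momentum: moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2  # per-parameter scale: average of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction for the zero-initialised averages
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(x) = (x - 4)^2, whose gradient is 2(x - 4).
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = adam_step(x, 2 * (x - 4), m, v, t)
print(round(x, 2))
```

Note how the effective step size adapts per parameter via `v_hat`, while `m` smooths noisy gradients.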

Bias-Variance Tradeoff · intermediate

The fundamental ML tradeoff: simpler models have high bias (underfit), complex models have high variance (overfit) — total error is minimised somewhere in between.

Cross-Validation · intermediate

A model evaluation technique that splits data into multiple folds, training on most and testing on the rest, then averaging — gives a more reliable estimate than a single train/test split.
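The fold logic can be sketched in a few lines of plain Python — the model training inside the loop is elided here, since only the splitting is the point:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # Last fold absorbs the remainder when n_samples % k != 0.
        stop = n_samples if i == k - 1 else start + fold_size
        test_idx = indices[start:stop]
        train_idx = indices[:start] + indices[stop:]
        yield train_idx, test_idx

scores = []
for train_idx, test_idx in k_fold_splits(10, 5):
    # train on train_idx, evaluate on test_idx; here we just record fold sizes
    scores.append(len(test_idx))
print(scores)  # [2, 2, 2, 2, 2] — each fold holds out 2 of the 10 samples
```

Averaging the five per-fold scores gives the cross-validated estimate.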

Feature Engineering · foundational

The craft of transforming raw data into features that expose structure to ML models — less prominent since the rise of foundation models, but still decisive for tabular work.
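For instance, turning a raw event record into model-ready columns — the record schema and feature choices below are hypothetical:

```python
from datetime import datetime

def engineer_features(raw):
    """Derive model-ready features from a raw event record (hypothetical schema)."""
    ts = datetime.fromisoformat(raw["timestamp"])
    return {
        "hour": ts.hour,                  # exposes daily seasonality to the model
        "is_weekend": ts.weekday() >= 5,  # Saturday=5, Sunday=6
        "amount_digits": len(str(int(raw["amount"]))),  # crude order-of-magnitude bucket
    }

features = engineer_features({"timestamp": "2024-06-01T14:30:00", "amount": 1250.0})
print(features)
```

A tree model given `is_weekend` directly no longer has to rediscover the weekly cycle from raw timestamps.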

Few-Shot Learning · intermediate

Learning a new task from only a handful of examples — sometimes just two or three.

Gradient Descent · intermediate

The iterative optimisation algorithm that trains nearly every modern ML model — adjust parameters in the direction that most reduces the loss.
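In one dimension the whole algorithm fits in a loop — the quadratic below is just a stand-in for a real loss:

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Minimise a 1-D function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move opposite the slope, scaled by the learning rate
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3); its minimum is at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

Real training does the same thing over millions of parameters, with gradients supplied by backpropagation.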

Hyperparameter · foundational

A setting you choose before training begins (learning rate, batch size, number of layers) — distinct from the parameters the model learns.
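Because hyperparameters are not learned, they are typically searched over — a minimal grid-search sketch, where the search space and the scoring function are both made up for illustration:

```python
import itertools

# Hypothetical search space; the values are illustrative, not recommendations.
grid = {
    "learning_rate": [0.01, 0.1],
    "batch_size": [16, 32],
}

def validation_score(config):
    """Stand-in for 'train a model with this config, score it on held-out data'."""
    return -abs(config["learning_rate"] - 0.1) - config["batch_size"] / 1000

best = max(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=validation_score,
)
print(best)
```

Every candidate config is trained and scored; the model's own learned parameters never appear in the grid.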

Loss Function (Objective Function) · intermediate

The function a model tries to minimise during training — defines what "good" means mathematically.
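Mean squared error, the canonical regression loss, makes this concrete — lower means the predictions sit closer to the targets:

```python
def mse(y_true, y_pred):
    """Mean squared error: average of squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 — a perfect fit
print(mse([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # worse predictions, higher loss
```

Training searches for the parameters that drive this number down.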

Machine Learning (ML) · Acronym · foundational

A branch of AI where systems learn patterns from data rather than being explicitly programmed — the technical foundation of modern AI.

Overfitting · foundational

When a model memorises the training set instead of learning generalisable patterns — low training error, high test error.

Recommendation System · foundational

A ranking model that picks the next item (product, article, video, lesson) most likely to match a user from a large candidate set.
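A tiny sketch of the score-and-rank core — the embeddings here are invented by hand, whereas real systems learn them from interaction data:

```python
def recommend(user_vec, candidates, k=2):
    """Score each candidate by dot product with the user embedding; return the top k."""
    def score(item_vec):
        return sum(u * i for u, i in zip(user_vec, item_vec))
    ranked = sorted(candidates, key=lambda c: score(c[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

catalog = {
    "sci-fi novel": [0.9, 0.1, 0.0],
    "cookbook":     [0.0, 0.2, 0.9],
    "space doc":    [0.8, 0.0, 0.3],
}
user = [1.0, 0.0, 0.2]  # a user whose history skews toward sci-fi themes
picks = recommend(user, list(catalog.items()))
print(picks)  # ['sci-fi novel', 'space doc']
```

Production systems add a cheap candidate-retrieval stage before this ranking step, since the full catalog is too large to score exhaustively.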

Regularization · intermediate

Techniques that constrain a model to prevent overfitting — penalty terms, dropout, early stopping, weight decay.
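The penalty-term variant is easy to show directly — a minimal L2 (weight decay) sketch on a one-parameter linear fit, with made-up data:

```python
def train_weight(xs, ys, l2=0.0, lr=0.01, steps=2000):
    """Fit y = w*x by gradient descent on MSE + l2 * w^2."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        grad += 2 * l2 * w  # the L2 penalty's gradient pulls w toward zero
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relationship: y = 2x
print(round(train_weight(xs, ys), 3))           # unregularised fit, ~2.0
print(round(train_weight(xs, ys, l2=1.0), 3))   # penalised fit, shrunk below 2.0
```

The regularised weight is deliberately biased toward zero — a small sacrifice in training fit bought in exchange for less variance on new data.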

Reinforcement Learning (RL) · Acronym · intermediate

An ML paradigm where an agent learns by interacting with an environment and receiving rewards — the framework behind game-playing AIs and RLHF.
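The agent/environment/reward loop can be sketched with tabular Q-learning on a toy corridor — this is an illustrative environment, not any standard benchmark:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a corridor: start at state 0, reward 1 at the far end."""
    random.seed(0)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}  # actions: left, right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy: mostly exploit the current Q-values, sometimes explore
            if random.random() < eps:
                a = random.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            target = r + gamma * max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (target - q[(s, a)])  # temporal-difference update
            s = s2
    return q

q = q_learning()
policy = [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)  # greedy action (+1 = right) for each non-terminal state
```

No state is ever labelled with a correct action; the policy emerges purely from the reward signal.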

Self-Supervised Learning · intermediate

A learning paradigm where the model generates its own training signal from the raw data structure — the engine behind today's foundation models.

Semi-Supervised Learning · intermediate

A hybrid approach that uses a small amount of labelled data alongside a large pool of unlabelled data.
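One common recipe is self-training: pseudo-label the unlabelled pool from the labelled set, keeping only confident guesses — a 1-D nearest-neighbour sketch with invented data:

```python
def pseudo_label(labelled, unlabelled, threshold=2.0):
    """Self-training sketch: copy the nearest labelled neighbour's class onto each
    unlabelled point that lies close enough, then add it to the training set."""
    augmented = list(labelled)
    for x in unlabelled:
        nearest = min(labelled, key=lambda ex: abs(ex[0] - x))
        if abs(nearest[0] - x) <= threshold:  # only trust confident pseudo-labels
            augmented.append((x, nearest[1]))
    return augmented

labelled = [(1.0, "a"), (9.0, "b")]
unlabelled = [1.5, 8.5, 5.0]  # 5.0 is too far from either label to trust
print(pseudo_label(labelled, unlabelled))
```

The augmented set then trains an ordinary supervised model; the ambiguous point at 5.0 is left out rather than mislabelled.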

Stochastic Gradient Descent (SGD) · Acronym · intermediate

A gradient descent variant that updates parameters using a randomly sampled subset of data per step, trading exact gradients for speed.
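A mini-batch sketch on a one-parameter linear fit — each step sees only a random sample of the data, not all of it (data and settings are illustrative):

```python
import random

def sgd_fit(data, lr=0.05, batch_size=2, epochs=200):
    """Fit y = w*x with mini-batch SGD: each step uses a random subset of the data."""
    random.seed(1)
    w = 0.0
    for _ in range(epochs):
        batch = random.sample(data, batch_size)  # noisy but cheap gradient estimate
        grad = sum(2 * (w * x - y) * x for x, y in batch) / batch_size
        w -= lr * grad
    return w

data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]  # exact relationship y = 2x
w = sgd_fit(data)
print(round(w, 3))  # ~2.0 despite never seeing a full-batch gradient
```

The per-step gradient is noisy, but it points the right way on average — and costs a fraction of a full pass over the data.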

Supervised Learning · foundational

Learning from labelled examples — each training input has a known correct output. The most widely deployed ML paradigm.
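The simplest possible supervised learner makes the idea concrete — 1-nearest-neighbour on hand-made labelled pairs:

```python
def nearest_neighbour(train, query):
    """1-nearest-neighbour: predict the label of the closest labelled example."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# Labelled examples: (input, known correct output).
train = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]
print(nearest_neighbour(train, 1.1))  # cat
print(nearest_neighbour(train, 9.0))  # dog
```

Everything from this lookup table to a trillion-parameter network follows the same contract: inputs in, known outputs supervise the fit.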

Transfer Learning · foundational

Taking a model trained on one task and adapting it for another, leveraging learned representations to reduce training cost and data needs.

Underfitting · foundational

When a model is too simple to capture the underlying pattern — high error on both training and test data.

Unsupervised Learning · foundational

Learning patterns and structure from data without explicit labels — clustering, dimensionality reduction, anomaly detection.
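Clustering is the classic example — a minimal 1-D k-means, which finds the two groups below without ever being told they exist (data and naive initialisation are for illustration):

```python
def kmeans_1d(points, k=2, iters=20):
    """Minimal 1-D k-means: alternate assigning points to the nearest centre
    and recomputing each centre as the mean of its cluster."""
    centres = points[:k]  # naive initialisation: the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centres[c]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i] for i, c in enumerate(clusters)]
    return sorted(centres)

# Two obvious groups, around 1 and around 10 — no labels given.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans_1d(data))  # centres near 1.0 and 10.0
```

Real k-means implementations add smarter initialisation (e.g. k-means++) since the naive start can land in poor local optima.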

Zero-Shot Learning · foundational

Performing a task with no task-specific training examples, relying entirely on the model's pre-existing knowledge and the task description.