• Model structure: deep learning is built on deep neural network architectures, which tend to be more complex;
• Human intervention: traditional machine learning relies on hand-crafted features, whereas deep learning extracts features automatically through its neural networks;
• Data volume: deep learning requires large datasets to train multi-layer neural networks;

## Learning Paradigms

• Mainstream learning paradigms
  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning
• Hybrid learning paradigms
  • Semi-Supervised Learning
  • Self-Supervised Learning
  • Contrastive Learning
  • Generative Learning
  • Multi-Instance Learning
• Statistical inference
  • Inductive Learning
  • Deductive Inference
  • Transductive Learning
• Learning techniques
  • Active Learning
  • Online Learning
  • Transfer Learning
  • Multi-Task Learning
  • Ensemble Learning

## Mainstream Learning Paradigms

### Supervised Learning

Applications in which the training data comprises examples of the input vectors along with their corresponding target vectors are known as supervised learning problems. — Page 3, Pattern Recognition and Machine Learning, 2006.

Our goal is to find a useful approximation $\hat{f}(x)$ to the function $f(x)$ that underlies the predictive relationship between the inputs and outputs. — Page 28, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edition, 2016.

• Classification: the trained model predicts a category label, e.g. handwritten digit recognition;
• Regression: the trained model predicts a numeric value, e.g. house price prediction;
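As a concrete sketch of the regression setting, the toy example below fits a 1-D linear model $y \approx wx + b$ by ordinary least squares on (input, target) pairs; the data and function names are illustrative assumptions, not from any of the quoted texts.

```python
# Supervised regression sketch: learn w, b from labeled (x, y) pairs.

def fit_linear(xs, ys):
    """Closed-form least squares for a 1-D linear model y = w*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1, with noise
w, b = fit_linear(xs, ys)
```

The "supervision" here is exactly the target vector `ys` paired with each input, as in the Bishop quote above.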

### Unsupervised Learning

The goal in such unsupervised learning problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization. — Page 3, Pattern Recognition and Machine Learning, 2006.

• Clustering: group the input data into sets of similar examples;
• Density Estimation: learn the distribution of the input data;
• Visualization: compute simple statistics on the data, or project high-dimensional data into a two- or three-dimensional space for visualization;
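The clustering case from the quote can be sketched with a toy k-means loop; the 1-D points and initial centers below are illustrative assumptions.

```python
# Unsupervised clustering sketch: k-means alternates assignment and update,
# using no labels at all.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # update step: move each center to its cluster's mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]   # two obvious groups
centers = kmeans_1d(points, centers=[0.0, 10.0])
```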

### Reinforcement Learning

Reinforcement learning is learning what to do — how to map situations to actions—so as to maximize a numerical reward signal. The learner is not told which actions to take, but instead must discover which actions yield the most reward by trying them. — Page 1, Reinforcement Learning: An Introduction, 2nd edition, 2018.

• Q-learning
• temporal-difference learning
• deep reinforcement learning
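Q-learning, the first technique listed, can be sketched on a hypothetical 4-state chain environment (a toy assumption, not from the quoted text): the agent starts at state 0 and receives reward 1 only upon reaching state 3.

```python
import random

# Tabular Q-learning sketch: the agent is never told which action is
# correct; it discovers the rewarding actions by trying them.

def step(state, action):          # action: 0 = left, 1 = right
    nxt = max(0, min(3, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == 3 else 0.0
    return nxt, reward, nxt == 3

random.seed(0)
Q = [[0.0, 0.0] for _ in range(4)]    # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                   # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy at every non-terminal state should prefer moving right, matching the discounted values $0.81, 0.9, 1.0$ implied by $\gamma = 0.9$.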

## Hybrid Learning Paradigms

### Semi-Supervised Learning

In semi-supervised learning we are given a few labeled examples and must make what we can of a large collection of unlabeled examples. Even the labels themselves may not be the oracular truths that we hope for. — Page 695, Artificial Intelligence: A Modern Approach, 3rd edition, 2015.
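One common semi-supervised strategy (self-training, used here as an illustration rather than the method of the quote) pseudo-labels the unlabeled pool with a model fit on the few labeled examples; all data below is a toy assumption.

```python
# Semi-supervised self-training sketch: 2 labeled points, 6 unlabeled.

def nearest_label(x, labeled):
    """1-nearest-neighbour label among currently labeled points."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

labeled = [(0.0, "a"), (10.0, "b")]          # the few labeled examples
unlabeled = [1.0, 2.0, 8.0, 9.0, 4.0, 6.0]   # the larger unlabeled pool

while unlabeled:
    # pseudo-label the unlabeled point closest to any labeled point first
    x = min(unlabeled, key=lambda u: min(abs(u - p[0]) for p in labeled))
    labeled.append((x, nearest_label(x, labeled)))
    unlabeled.remove(x)

labels = {x: y for x, y in labeled}
```

The caveat in the quote applies here too: pseudo-labels are not "oracular truths", and early mistakes can propagate.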

### Self-Supervised Learning

The self-supervised learning framework requires only unlabeled data in order to formulate a pretext learning task such as predicting context or image rotation, for which a target objective can be computed without supervision. — Revisiting Self-Supervised Visual Representation Learning, 2019.
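A toy illustration of the pretext-task idea: the label is computed from the data itself, with no human annotation. The task below ("was this sequence shuffled?") is an illustrative stand-in for the context/rotation prediction tasks named in the quote, and the hand-written rule stands in for a trained model.

```python
import random

# Self-supervised sketch: pretext labels are generated automatically.

random.seed(1)

def make_example():
    seq = sorted(random.random() for _ in range(5))   # "raw" unlabeled data
    if random.random() < 0.5:
        random.shuffle(seq)
        return seq, 1          # pretext label 1: sequence was shuffled
    return seq, 0              # pretext label 0: left in order

def n_inversions(seq):
    """Count out-of-order pairs; 0 for a sorted sequence."""
    return sum(a > b for i, a in enumerate(seq) for b in seq[i + 1:])

# evaluate a trivial predictor for the pretext task
correct = 0
for _ in range(200):
    seq, label = make_example()
    pred = 1 if n_inversions(seq) > 0 else 0
    correct += (pred == label)
accuracy = correct / 200
```

The point is that `label` never required supervision: it was produced mechanically while constructing the example.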

### Multi-Instance Learning

In multi-instance learning, an entire collection of examples is labeled as containing or not containing an example of a class, but the individual members of the collection are not labeled. — Page 106, Deep Learning, 2016.
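The bag-level labeling described in the quote can be sketched as follows; the bags, threshold, and max-scoring rule are toy assumptions.

```python
# Multi-instance sketch: labels exist only at the bag level, and a bag
# is scored by its maximum instance, so no individual instance is labeled.

bags = {
    "bag1": [0.1, 0.2, 0.9],   # contains a high-scoring instance
    "bag2": [0.3, 0.4],        # does not
}
labels = {"bag1": 1, "bag2": 0}   # bag-level labels only

threshold = 0.5
preds = {name: int(max(instances) > threshold)
         for name, instances in bags.items()}
```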

## Statistical Inference

### Inductive Learning

the problem of induction, which is the problem of how to draw general conclusions about the future from specific observations from the past. — Page 77, Machine Learning: A Probabilistic Perspective, 2012.

### Deductive Inference

… the simple observation that induction is just the inverse of deduction! — Page 291, Machine Learning, 1997.

### Transductive Learning

Induction, deriving the function from the given data. Deduction, deriving the values of the given function for points of interest. Transduction, deriving the values of the unknown function for points of interest from the given data. — Page 169, The Nature of Statistical Learning Theory, 1995.

## Learning Techniques

### Transfer Learning

In transfer learning, the learner must perform two or more different tasks, but we assume that many of the factors that explain the variations in P1 are relevant to the variations that need to be captured for learning P2. — Page 536, Deep Learning, 2016.
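A minimal sketch of one transfer strategy (initializing the target-task model from the source-task solution; the tasks, threshold model, and update rule are illustrative assumptions): a decision threshold learned on a data-rich source task is reused as the starting point for a related, data-poor target task.

```python
# Transfer-learning sketch: pretrain on the source task, fine-tune on the
# target task from the pretrained parameter.

def train_threshold(data, init=0.0, lr=0.1, epochs=50):
    """Learn a threshold t so that x > t predicts label 1."""
    t = init
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if x > t else 0
            t += lr * (pred - y)   # shift t to correct mistakes
    return t

# source task: plenty of examples, boundary at 0.5
source = [(x / 10, int(x / 10 > 0.5)) for x in range(11)]
# target task: only two examples, boundary slightly shifted
target = [(0.55, 0), (0.65, 1)]

t_source = train_threshold(source)                  # pretrain
t_target = train_threshold(target, init=t_source)   # fine-tune
```

The shared "factor" here is the rough location of the boundary: the target task only needs to nudge it, rather than learn it from scratch.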

### Multi-Task Learning

Multi-task learning is a way to improve generalization by pooling the examples (which can be seen as soft constraints imposed on the parameters) arising out of several tasks. — Page 244, Deep Learning, 2016.

### Active Learning

Active learning: The learner adaptively or interactively collects training examples, typically by querying an oracle to request labels for new points. — Page 7, Foundations of Machine Learning, 2nd edition, 2018.
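Pool-based uncertainty sampling (one common query strategy, used here as an illustration) matches the quote's description: the learner queries an oracle for labels of new points. The oracle function, pool, and threshold model below are toy assumptions.

```python
# Active-learning sketch: repeatedly query the pool point nearest the
# current decision threshold, i.e. the point the model is least sure about.

def oracle(x):                 # stands in for a human annotator
    return int(x > 0.37)

pool = [i / 20 for i in range(21)]     # unlabeled pool 0.0, 0.05, ..., 1.0
labeled = [(0.0, 0), (1.0, 1)]         # two seed labels
threshold = 0.5

for _ in range(6):                     # fixed query budget
    x = min(pool, key=lambda p: abs(p - threshold))   # most uncertain point
    pool.remove(x)
    labeled.append((x, oracle(x)))
    # refit: midpoint between the largest 0-labeled and smallest 1-labeled x
    zeros = [p for p, y in labeled if y == 0]
    ones = [p for p, y in labeled if y == 1]
    threshold = (max(zeros) + min(ones)) / 2
```

With only 6 queries, the labels cluster around the true boundary instead of being spent on easy points far from it.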

### Online Learning

Traditionally machine learning is performed offline, which means we have a batch of data, and we optimize an equation […] However, if we have streaming data, we need to perform online learning, so we can update our estimates as each new data point arrives rather than waiting until “the end” (which may never occur). — Page 261, Machine Learning: A Probabilistic Perspective, 2012.
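The streaming update from the quote can be sketched with a running mean (an illustrative estimate; the same update-per-arrival pattern applies to SGD on model weights).

```python
# Online-learning sketch: the estimate is revised as each point arrives,
# without storing or revisiting a batch.

def online_mean():
    n, mean = 0, 0.0
    while True:
        x = yield mean
        n += 1
        mean += (x - mean) / n     # incremental update, no batch kept

stream = [2.0, 4.0, 6.0, 8.0]
est = online_mean()
next(est)                          # prime the generator
for x in stream:
    mean = est.send(x)             # estimate available after every point
```

Note that an estimate is available after every arrival, so there is no need to wait for "the end" of the stream.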

### Ensemble Learning

The field of ensemble learning provides many ways of combining the ensemble members’ predictions, including uniform weighting and weights chosen on a validation set. — Page 472, Deep Learning, 2016.
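The uniform-weighting combination mentioned in the quote can be sketched as a majority vote; the three weak threshold rules and the evaluation points are toy assumptions.

```python
# Ensemble sketch: combine members' predictions by uniform-weight voting.

members = [
    lambda x: int(x > 0.4),
    lambda x: int(x > 0.5),
    lambda x: int(x > 0.6),
]

def ensemble_predict(x):
    votes = sum(m(x) for m in members)
    return int(votes > len(members) / 2)   # uniform-weight majority vote

preds = [ensemble_predict(x) for x in (0.3, 0.45, 0.55, 0.7)]
```

Validation-set weighting, the other option in the quote, would simply replace the uniform vote count with a weighted sum.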