Improving Deep Neural Networks
In Improving Deep Neural Networks you will
learn the "magic" inside deep learning
rather than treating it as a black box
understand how to drive performance and systematically obtain good results
learn the best practices the industry uses to build DL applications
be able to use common neural network tricks
initialization, L2 and dropout regularization, batch normalization, gradient checking
be able to apply a variety of optimization algorithms
mini-batch gradient descent, momentum, RMSprop, Adam, checking for convergence
understand new best practices for setting up
train/dev/test sets
analyze bias/variance
be able to use TensorFlow
This is the second course of the Deep Learning specialization!
Practical aspects of Deep Learning
Setting up your Machine Learning Application
train / dev / test sets
bias / variance
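The train/dev/test setup above can be sketched in a few lines; the dataset here is synthetic, and the 98/1/1 ratio is one example of the small-dev/test splits the course recommends for large datasets:

```python
import numpy as np

# Hypothetical dataset: 10,000 examples, 5 features each (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = rng.integers(0, 2, size=10_000)

# For datasets of this size, ratios like 98/1/1 are preferred over the
# classic 60/20/20: the dev and test sets only need to be big enough
# to give a reliable estimate of performance.
n = X.shape[0]
idx = rng.permutation(n)          # shuffle before splitting
n_train = int(0.98 * n)
n_dev = int(0.01 * n)

X_train, y_train = X[idx[:n_train]], y[idx[:n_train]]
X_dev, y_dev = X[idx[n_train:n_train + n_dev]], y[idx[n_train:n_train + n_dev]]
X_test, y_test = X[idx[n_train + n_dev:]], y[idx[n_train + n_dev:]]
```

Comparing train-set error against dev-set error on splits like these is what drives the bias/variance analysis: high train error suggests bias, a large train-to-dev gap suggests variance.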
Regularizing your neural network
regularization
dropout
L2
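The two regularizers above can be sketched as follows; `dropout_forward` and `l2_penalty` are illustrative names, and inverted dropout (rescaling by `keep_prob`) is used so activations keep the same expected value at train and test time:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout_forward(a, keep_prob):
    """Inverted dropout: zero units with prob 1 - keep_prob, rescale the rest."""
    mask = (rng.random(a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob, mask

def l2_penalty(weights, lambd, m):
    """L2 term added to the cost: (lambda / 2m) * sum of squared weights."""
    return (lambd / (2 * m)) * sum(np.sum(W ** 2) for W in weights)

# Surviving activations are scaled up from 1.0 to 1.0 / keep_prob.
a = np.ones((4, 3))
a_drop, mask = dropout_forward(a, keep_prob=0.8)
```

At test time dropout is simply turned off; no rescaling is needed because the inverted form already compensated during training.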
Setting up your optimization problem
normalizing inputs
vanishing / exploding gradients
weight initialization
gradient checking
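Weight initialization and gradient checking can be sketched together; this assumes He initialization (variance 2/n_in, suited to ReLU layers) and the relative-difference formula from the course, with a toy function standing in for a real network:

```python
import numpy as np

rng = np.random.default_rng(2)

def he_init(n_out, n_in):
    """He initialization: scale by sqrt(2 / n_in) to keep activation variance stable."""
    return rng.normal(size=(n_out, n_in)) * np.sqrt(2.0 / n_in)

def grad_check(f, grad_f, theta, eps=1e-7):
    """Relative difference between analytic gradient and centred finite differences."""
    num = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        num[i] = (f(plus) - f(minus)) / (2 * eps)
    ana = grad_f(theta)
    return np.linalg.norm(num - ana) / (np.linalg.norm(num) + np.linalg.norm(ana))

# Sanity check on f(theta) = sum(theta^2), whose gradient is 2 * theta;
# a correct gradient should give a relative difference around 1e-7 or less.
theta = rng.normal(size=5)
diff = grad_check(lambda t: np.sum(t ** 2), lambda t: 2 * t, theta)
```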
Optimization algorithms
mini-batch gradient descent
exponentially weighted averages
bias correction
momentum
RMSprop
Adam
learning rate decay
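Momentum, RMSprop, and bias correction all combine in Adam; a minimal sketch with `adam_step` as an illustrative name, driving a toy quadratic to show the full update loop:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update combining momentum and RMSprop with bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (RMSprop)
    m_hat = m / (1 - beta1 ** t)              # bias correction: early averages
    v_hat = v / (1 - beta2 ** t)              # start near zero, so scale them up
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 starting from theta = 1.0.
theta = np.array([1.0])
m, v = np.zeros(1), np.zeros(1)
for t in range(1, 2001):           # t starts at 1 for the bias-correction terms
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
```

Setting `beta2` to 0 recovers plain momentum with an adaptive sign-like step; setting `beta1` to 0 recovers RMSprop, which is why the course presents those two first.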
Hyperparameter tuning, Batch Normalization and Programming Frameworks
Hyperparameter tuning
appropriate scale to pick hyperparameter
panda vs. caviar approach (babysitting one model vs. training many models in parallel)
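Sampling on an appropriate scale can be sketched for the learning rate; this assumes we want alpha between 1e-4 and 1, drawn uniformly in log space so each decade is equally likely, rather than uniformly on the raw scale:

```python
import numpy as np

rng = np.random.default_rng(3)

# r ~ Uniform[-4, 0), alpha = 10^r: roughly a quarter of the samples land in
# each decade [1e-4, 1e-3), [1e-3, 1e-2), [1e-2, 1e-1), [1e-1, 1).
# Uniform sampling of alpha itself would waste ~90% of trials above 0.1.
r = rng.uniform(-4, 0, size=1000)
alphas = 10.0 ** r
```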
Batch normalization
normalize activations
fitting batch norm into a neural network
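Normalizing activations can be sketched as a batch-norm forward pass; `batchnorm_forward`, the layer shape, and the `gamma`/`beta` values are illustrative (in a real layer gamma and beta are learned parameters):

```python
import numpy as np

def batchnorm_forward(z, gamma, beta, eps=1e-8):
    """Normalize each unit's pre-activations across the mini-batch, then scale and shift."""
    mu = z.mean(axis=1, keepdims=True)
    var = z.var(axis=1, keepdims=True)
    z_norm = (z - mu) / np.sqrt(var + eps)     # zero mean, unit variance per unit
    return gamma * z_norm + beta               # learnable scale and shift

# 4 hidden units, mini-batch of 256; inputs deliberately off-centre.
rng = np.random.default_rng(4)
z = rng.normal(loc=5.0, scale=3.0, size=(4, 256))
out = batchnorm_forward(z, gamma=np.ones((4, 1)), beta=np.zeros((4, 1)))
```

With gamma = 1 and beta = 0 the output is simply the normalized activations; the network can learn other values if a different mean/variance works better for the next layer.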
Multi-class classification
softmax regression
softmax classifier
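The softmax output layer can be sketched as follows; subtracting the max before exponentiating is a standard numerical-stability trick, and the logits here are made up:

```python
import numpy as np

def softmax(z):
    """Stable softmax over classes (axis 0): exponentiate, then normalize to sum to 1."""
    z = z - z.max(axis=0, keepdims=True)   # shift-invariant; avoids exp overflow
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

# 3 classes, one example: the largest logit gets the largest probability.
logits = np.array([[2.0], [1.0], [0.1]])
probs = softmax(logits)
```

Unlike a sigmoid per class, the probabilities sum to 1 across classes, which is what makes this the multi-class generalization of logistic regression.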
Introduction to programming frameworks
deep learning frameworks
TensorFlow