
Lecture 1: DL Fundamentals


Notes

Lecture by Sergey Karayev.

In this video, we discuss the fundamentals of deep learning. We will cover artificial neural networks, the universal approximation theorem, the three major types of learning problems, the empirical risk minimization problem, the idea behind gradient descent, the practice of backpropagation, the core neural architectures, and the rise of GPU computing.
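As a quick refresher on the first two topics, here is a minimal sketch (in plain NumPy; ours, not from the lecture) of a one-hidden-layer network, the kind of function the universal approximation theorem is about: with enough hidden units, it can approximate any continuous function on a compact set arbitrarily well. All names and shapes below are illustrative.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: affine -> ReLU -> affine."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden activations (ReLU nonlinearity)
    return h @ W2 + b2                # linear readout

# Random weights for an illustrative 2 -> 16 -> 1 network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

y = mlp_forward(np.array([[0.5, -1.2]]), W1, b1, W2, b2)
```

Training amounts to adjusting `W1, b1, W2, b2` to fit data, which is what the rest of the lecture builds toward.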

This should be a review for most of you; if not, briefly work through the free online book at neuralnetworksanddeeplearning.com.

  • 1:25 - Neural Networks
  • 6:48 - Universality
  • 8:48 - Learning Problems
  • 16:17 - Empirical Risk Minimization / Loss Functions
  • 19:55 - Gradient Descent
  • 23:57 - Backpropagation / Automatic Differentiation
  • 26:09 - Architectural Considerations
  • 29:01 - CUDA / Cores of Compute
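To make the middle items in this list concrete, here is a minimal sketch (ours, not from the lecture) of empirical risk minimization by gradient descent on a toy regression problem, with gradients computed by backpropagation via PyTorch's automatic differentiation:

```python
import torch

# Toy regression data: learn y = 3x - 1 from noisy samples.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 3 * x - 1 + 0.1 * torch.randn_like(x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    loss = torch.mean((model(x) - y) ** 2)  # empirical risk: mean squared error
    opt.zero_grad()
    loss.backward()  # backpropagation via automatic differentiation
    opt.step()       # gradient descent update: w <- w - lr * grad
```

Every deep learning framework follows this same loop; the topics covered in the lecture (loss choices, optimizer variants, architectures, hardware) are refinements of it.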
