Getting Up to Speed on Deep Learning

Here at the Coaching Blog, one of the world’s leading blogs on the subject of leadership and coaching, we often post articles by leading authors and authorities. Today we are delighted to share an article by Isaac Madan and David Dindi.

For good reason, deep learning is increasingly capturing mainstream attention. Just recently, on March 15th, Google DeepMind’s AlphaGo AI — technology based on deep neural networks — beat Lee Sedol, one of the world’s best Go players, in a professional Go match.

Behind the scenes, deep learning is an active, fast-paced research area that’s proliferating quickly among some of the world’s most innovative companies. We are asked frequently about our favorite resources to get up to speed on deep learning and follow its rapid developments. As such, we’ve outlined below some of our favorite resources. While certainly not comprehensive, there’s a lot here, and we’ll continue to update this list — if there’s something we should add, let us know.

Structured Resources

Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016). A comprehensive and in-depth book on machine learning and deep learning core concepts.

Course notes from Stanford CS 231N: Convolutional Neural Networks for Visual Recognition. This course is a deep dive into the details of neural network architectures, with a focus on learning end-to-end models for visual recognition tasks, particularly image classification. During the 10-week course, students learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.

Course notes from Stanford CS 224D: Deep Learning for Natural Language Processing. In this class, students will learn to understand, implement, train, debug, visualize and potentially invent their own neural network models for a variety of language understanding tasks.

Blogs, Papers, and Articles

Deep Learning in a Nutshell by Tim Dettmers, via NVIDIA (2015). These articles are digestible and do not rely heavily on math.

  • Part 1: A gentle introduction to deep learning that covers core concepts and vocabulary.
  • Part 2: History of deep learning and methods of training deep learning architectures quickly and efficiently.
  • Part 3: Sequence learning with a focus on natural language processing.

Podcast with Yoshua Bengio: The Rise of Neural Networks and Deep Learning in Our Everyday Lives. An exciting overview of the power of neural networks as well as their current influence and future potential.

Deep learning reading list. A thorough list of academic survey papers on reinforcement learning, computer vision, NLP and speech, disentangling factors, transfer learning, practical tricks, sparse coding, foundation theory, feedforward networks, large-scale deep learning, recurrent networks, hyperparameters, optimization, and unsupervised feature learning.

Christopher Olah’s blog. Christopher has in-depth, well-explained articles with great visuals on neural networks, visualization, and convolutional neural networks.

Adrian Colyer’s blog. Adrian selects and reviews an interesting, influential, or important paper from the world of CS every weekday morning.

Academic papers & presentations:

  • Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, and Pascal Vincent (2012). This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks.
  • Deep Learning of Representations: Looking Forward by Yoshua Bengio (2013). This paper examines some of the challenges ahead in the field of deep learning as well as some forward-looking insight on research direction.
  • Deep Learning Tutorial by Yann LeCun and Marc’Aurelio Ranzato (2013). Slides from a talk at the 2013 International Conference on Machine Learning in Atlanta. Filled with many great slides, diagrams, and charts that explain concepts visually.
  • Deep Learning in Neural Networks: An Overview by Jürgen Schmidhuber (2014). A summary of roughly 900 influential deep learning papers. Whew. Adrian summarizes it here.
  • Natural Language Processing (almost) from Scratch by Collobert et al. (2011). This paper presents a multilayer neural network architecture that can handle a number of NLP tasks with both speed and accuracy.
  • Practical recommendations for gradient-based training of deep architectures by Yoshua Bengio (2012). This paper provides a practical guide on how to select some of the most commonly used hyperparameters for deep learning models.
  • TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems by Google Research (2015). How TensorFlow works.
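Bengio's hyperparameter guide pairs naturally with a small worked example. Below is a minimal sketch of random search (our own illustration, not code from any of the papers above), sampling the learning rate on a log scale as the guide recommends for scale-sensitive hyperparameters; `validation_loss` is a made-up stand-in for training a model and scoring it on held-out data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperparameters():
    """Draw one configuration; the learning rate is sampled
    log-uniformly, a common recommendation for scale parameters."""
    return {
        "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform in [1e-4, 1e-1]
        "batch_size": int(rng.choice([32, 64, 128])),
        "hidden_units": int(rng.choice([64, 128, 256])),
    }

def validation_loss(config):
    # Hypothetical stand-in for "train a model, return held-out loss".
    return (np.log10(config["learning_rate"]) + 2.5) ** 2

# Random search: try n configurations, keep the best.
trials = [sample_hyperparameters() for _ in range(20)]
best = min(trials, key=validation_loss)
```

Sampling on a log scale matters because a learning rate of 0.1 and one of 0.0001 differ by orders of magnitude; a uniform draw would almost never explore the small end of the range.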

Community

Deep learning Google Group. Where deep learning enthusiasts and researchers hang out and share the latest news.

Deep learning research groups. A list of many of the academic and industry labs focused on deep learning.

San Francisco AI meetup. A local meetup for AI enthusiasts and researchers that we’re involved in. Pieter Abbeel will be speaking on April 28, and Vinod Khosla on May 5.

Conferences

  • International Conference on Learning Representations (ICLR). May 2–4, 2016 at the Caribe Hilton, San Juan, Puerto Rico. Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio, and NLP, there had been no venue for researchers who share a common interest in the topic; the goal of ICLR is to help fill this void. Yoshua Bengio and Yann LeCun are General Chairs.
  • International Conference on Machine Learning (ICML). June 19–24, 2016 in New York City, NY. ICML is the leading international machine learning conference and is supported by the International Machine Learning Society (IMLS).
  • Conference on Neural Information Processing Systems (NIPS). December 5–10, 2016 in Barcelona, Spain. A single-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, and oral and poster presentations of refereed papers.
  • GPU Technology Conference (GTC). April 4–7, 2016 in San Jose, CA, with others later in the year in other countries. Presented by NVIDIA, GTC comprises the annual conference, a year-long webinar series, and workshops that connect the global community of developers, researchers, and scientists through unique educational and networking opportunities.

Tools

Deep Learning Frameworks in VentureBeat (2015). An overview of major deep learning libraries, as of December 2015.

TensorFlow neural network playground. Play with neural networks visually in your browser to get a feel for what they are and what they do.

TensorFlow tutorial. Google’s tutorial that explains TensorFlow and MNIST, as well as the basics of machine learning and deep learning networks. This is in Python.
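To give a flavor of what that tutorial builds, here is a minimal NumPy sketch of the same softmax-regression idea (our own illustration on a tiny synthetic dataset, not the tutorial's TensorFlow/MNIST code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic stand-in for MNIST: 200 samples, 4 features, 3 classes.
X = rng.normal(size=(200, 4))
true_W = rng.normal(size=(4, 3))
y = np.argmax(X @ true_W, axis=1)         # integer class labels
Y = np.eye(3)[y]                          # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Softmax regression: y = softmax(Wx + b), trained by batch
# gradient descent on the cross-entropy loss.
W = np.zeros((4, 3))
b = np.zeros(3)
for _ in range(500):
    probs = softmax(X @ W + b)
    grad = probs - Y                      # gradient of loss w.r.t. logits
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

accuracy = (np.argmax(X @ W + b, axis=1) == y).mean()
```

The tutorial's TensorFlow version expresses the same model as a computation graph and lets the framework compute the `probs - Y` gradient automatically.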

OpenAI Gym. A toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Go.
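The Gym interface reduces to a reset/step loop. Here is a sketch of that loop against a toy stand-in environment of our own (the real toolkit provides environments via `gym.make('CartPole-v0')` and the like):

```python
import random

class ToyEnv:
    """Toy environment exposing the Gym-style reset/step interface."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t                         # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0  # reward only action 1
        done = self.t >= self.horizon
        return self.t, reward, done, {}       # obs, reward, done, info

# The standard agent-environment loop, here with a random policy.
random.seed(0)
env = ToyEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because every Gym environment speaks this same interface, the agent code above would be unchanged whether the task is balancing a pole or playing Pong.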

Neon. Nervana Systems’ fast Python-based deep learning library. Tutorials here.

Debugging neural networks by Russell Stewart. Neural networks are hard to debug, and this steepens the learning curve of implementing deep learning. Russell offers some great insight.
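One classic sanity check in this spirit (a general technique, not code from Russell's post) is numerical gradient checking: compare your analytic gradient against a central-difference estimate, which should agree to several decimal places.

```python
def loss(w, x, y):
    """Squared error of a one-weight linear model: 0.5 * (w*x - y)**2."""
    return 0.5 * (w * x - y) ** 2

def analytic_grad(w, x, y):
    # Hand-derived d(loss)/dw; this is the code under test.
    return (w * x - y) * x

def numeric_grad(f, w, eps=1e-6):
    # Central difference: (f(w + eps) - f(w - eps)) / (2 * eps).
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w, x, y = 0.7, 2.0, 1.0
g_analytic = analytic_grad(w, x, y)
g_numeric = numeric_grad(lambda w_: loss(w_, x, y), w)
```

A large gap between the two values usually means the hand-derived backward pass has a bug, which is one of the most common failure modes when implementing networks from scratch.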

Theano. Numerical computation library for Python (faster and more mature than TensorFlow).

Lasagne. Lightweight Python library for deep learning (built on Theano).

Caffe. Deep learning framework.

Model Zoo. Pretrained Caffe models for a variety of tasks.

