    Training Neural Networks: Q&A with Ian Goodfellow, Google

    Neural networks require considerable time and computational firepower to train. Previously, researchers believed that neural networks were costly to train because gradient descent slows down near local minima or saddle points. At the RE.WORK Deep Learning Summit in San Francisco, Ian Goodfellow, Research Scientist at Google, will challenge that view and look deeper to find the true bottlenecks in neural network training.

    Before joining the Google team, Ian earned a PhD in machine learning from Université de Montréal, under his advisors Yoshua Bengio and Aaron Courville. During his studies, which were funded by the Google PhD Fellowship in Deep Learning, he wrote Pylearn2, the open source deep learning research library, and introduced a variety of new deep learning algorithms. Previously, he obtained a BSc and MSc in Computer Science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group. 

    We caught up with Ian ahead of the summit in January 2016 to hear more about his current work and thoughts on the future of deep learning.

    What are you currently working on in deep networks?
    I am interested in developing generic methods that make any neural network train faster and generalize better. To improve generalization, I study the way neural networks respond to “adversarial examples” that are intentionally constructed to confuse the network. To improve optimization, I study the structure of neural network optimization problems and determine which factors cause learning to be slow.
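
    Goodfellow's own work introduced one widely cited way to construct such examples, the fast gradient sign method: perturb an input a small step in the direction that increases the model's loss fastest. The NumPy sketch below applies the idea to a toy logistic-regression model so the input gradient can be written out by hand; the weights, input, and step size eps are made-up values for illustration, not anything from his papers.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)          # toy model weights (illustrative)
b = 0.0
x = rng.normal(size=4)          # a "clean" input
y = 1.0                         # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Step in the direction that increases the loss fastest under an
# L-infinity budget: x_adv = x + eps * sign(grad_x).
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

    With y = 1, this perturbation provably lowers the model's confidence in the correct class, even though no input coordinate moves by more than eps.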

    What are the key factors that have enabled recent advancements in deep learning? 
    The basic machine learning algorithms have been in place since the 1980s, but until very recently, we were applying these algorithms to neural networks with fewer neurons than a leech. Unsurprisingly, such small networks performed poorly. Fast computers with larger memory capacity and better software infrastructure have allowed us to train neural networks that are large enough to perform well. Larger datasets are also very important. Some changes in machine learning algorithms, like designing neural network layers to be very linear, have also led to noticeable improvements.
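
    A standard example of the "very linear" design choice is the rectified linear unit, which is piecewise linear: wherever it is active, its derivative is exactly 1, so gradients pass through undiminished, whereas a saturating unit like the sigmoid scales gradients by at most 0.25 and by nearly 0 in its tails. A minimal NumPy comparison (the sample points are arbitrary):

```python
import numpy as np

z = np.linspace(-6, 6, 7)                   # a few pre-activation values

relu_grad = (z > 0).astype(float)           # derivative of max(0, z): 0 or 1
sigmoid = 1.0 / (1.0 + np.exp(-z))
sigmoid_grad = sigmoid * (1.0 - sigmoid)    # derivative of the sigmoid

print("z:           ", z)
print("ReLU grad:   ", relu_grad)
print("sigmoid grad:", np.round(sigmoid_grad, 4))  # <= 0.25, ~0 in the tails
```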

    What are the main types of problems now being addressed in the deep learning space?
    There is a gold rush to be the first to use existing deep learning algorithms on new application areas. Every day, there are new articles about deep learning for counting calories from photos, deep learning for separating two voices in a recording, etc.

    What are the practical applications of your work and what sectors are most likely to be affected?
    My work is generic enough that it impacts everything we use neural networks for. Anything you want to do with a neural net, I aim to make faster and more accurate.

    What developments can we expect to see in deep learning in the next 5 years?
    I expect that within five years we will have neural networks that can summarize what happens in a video clip, and that can generate short videos of their own. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well. I also predict that neural networks will become an important tool in other scientific disciplines. For example, neural networks could be trained to model the behavior of genes, drugs, and proteins and then used to design new medicines.

    What advancements excite you most in the field?
    Recent extensions of variational auto-encoders and generative adversarial networks have greatly improved the ability of neural networks to generate realistic images. Generating data is a problem that has been studied for decades, and we still do not seem to have the right algorithm for it. The last year or so has shown that we are getting much closer, though.
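
    For context, generative adversarial networks, which Goodfellow introduced in 2014, frame generation as a two-player minimax game between a generator G and a discriminator D; the objective below is the standard one from the original paper, reproduced here for reference:

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
\]

    The discriminator is trained to tell real samples from generated ones, while the generator is trained to fool it; at the game's equilibrium, the generator's distribution matches the data distribution.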

    Ian Goodfellow will be speaking at the Deep Learning Summit in San Francisco on 28-29 January 2016, alongside speakers from Baidu, Twitter, Clarifai, MIT and more.
