    http://neuralnetworksanddeeplearning.com/chap1.html

    Up to now, we've been discussing neural networks where the output from one layer is used as input to the next layer. Such networks are called feedforward neural networks. This means there are no loops in the network - information is always fed forward, never fed back. If we did have loops, we'd end up with situations where the input to the σ function depended on the output. That'd be hard to make sense of, and so we don't allow such loops.
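
    To see the "no loops" rule concretely, here is a minimal sketch of a feedforward pass in NumPy. The layer sizes, random weights, and the sigma helper are illustrative, not taken from the book:

        import numpy as np

        def sigma(z):
            # Sigmoid activation, as used throughout the chapter.
            return 1.0 / (1.0 + np.exp(-z))

        # Illustrative network: 3 inputs -> 4 hidden -> 2 outputs.
        rng = np.random.default_rng(0)
        weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
        biases = [rng.standard_normal((4, 1)), rng.standard_normal((2, 1))]

        def feedforward(a):
            # Each layer's output becomes the next layer's input;
            # information only ever moves forward, so sigma's input
            # never depends on its own output.
            for W, b in zip(weights, biases):
                a = sigma(W @ a + b)
            return a

        print(feedforward(np.array([[0.5], [0.1], [0.9]])))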

    However, there are other models of artificial neural networks in which feedback loops are possible. These models are called recurrent neural networks. The idea in these models is to have neurons which fire for some limited duration of time, before becoming quiescent. That firing can stimulate other neurons, which may fire a little while later, also for a limited duration. That causes still more neurons to fire, and so over time we get a cascade of neurons firing. Loops don't cause problems in such a model, since a neuron's output only affects its input at some later time, not instantaneously.
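
    A minimal sketch of that time-delayed feedback, assuming a single recurrent layer with arbitrary sizes (an illustration of the idea, not a construction from the book):

        import numpy as np

        def sigma(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Illustrative parameters: W_in maps the current input, W_rec
        # feeds the previous step's output back in.
        rng = np.random.default_rng(1)
        W_in = rng.standard_normal((4, 3))
        W_rec = rng.standard_normal((4, 4))
        b = rng.standard_normal((4, 1))

        def run(inputs):
            # Unroll the loop over time: the output h from step t is
            # only consumed at step t+1, never instantaneously, so the
            # feedback causes no circular dependency.
            h = np.zeros((4, 1))  # neurons start quiescent
            for x in inputs:
                h = sigma(W_in @ x + W_rec @ h + b)
            return h

        print(run([rng.standard_normal((3, 1)) for _ in range(5)]))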

    Recurrent neural nets have been less influential than feedforward networks, in part because the learning algorithms for recurrent nets are (at least to date) less powerful. But recurrent networks are still extremely interesting. They're much closer in spirit to how our brains work than feedforward networks. And it's possible that recurrent networks can solve important problems which can only be solved with great difficulty by feedforward networks. However, to limit our scope, in this book we're going to concentrate on the more widely-used feedforward networks.

  • Original post: https://www.cnblogs.com/rsapaper/p/7562250.html