Activation Functions
- tf.nn.relu(features, name=None) #max(features, 0)
- tf.nn.relu6(features, name=None) #min(max(features, 0), 6)
- tf.nn.softplus(features, name=None) #log(exp(features) + 1)
- tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) #computes dropout: with probability keep_prob outputs x scaled by 1/keep_prob, otherwise 0
- tf.nn.bias_add(value, bias, name=None) #adds bias to value
- tf.sigmoid(x, name=None) # 1/(1+exp(-x))
- tf.tanh(x, name=None) #hyperbolic tangent: (exp(x)-exp(-x))/(exp(x)+exp(-x))
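The element-wise formulas above can be sketched in plain NumPy (illustrative only, not the TensorFlow implementation):

```python
import numpy as np

# Pure-NumPy sketches of the activation formulas listed above.
def relu(x):      return np.maximum(x, 0.0)                 # max(x, 0)
def relu6(x):     return np.minimum(np.maximum(x, 0.0), 6.0)  # min(max(x, 0), 6)
def softplus(x):  return np.log(np.exp(x) + 1.0)            # log(exp(x) + 1)
def sigmoid(x):   return 1.0 / (1.0 + np.exp(-x))
def tanh(x):      return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0, 8.0])
print(relu(x))   # [0. 0. 3. 8.]
print(relu6(x))  # [0. 0. 3. 6.]
```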
Convolution
- tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None) #4D input
- tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None) #4D input; applies a separate filter to each input channel, producing in_channels * channel_multiplier output channels
- tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None) #performs a depthwise convolution that acts on channels separately, followed by a pointwise convolution that mixes channels
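As a sketch of the conv2d semantics (single example, 'VALID' padding, stride 1; names and shapes are illustrative, in plain NumPy rather than TensorFlow):

```python
import numpy as np

# Naive sketch of tf.nn.conv2d for one example with 'VALID' padding, stride 1.
# inp: [H, W, Cin], filt: [kH, kW, Cin, Cout] -> out: [H-kH+1, W-kW+1, Cout]
def conv2d_valid(inp, filt):
    H, W, Cin = inp.shape
    kH, kW, _, Cout = filt.shape
    out = np.zeros((H - kH + 1, W - kW + 1, Cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = inp[i:i+kH, j:j+kW, :]  # [kH, kW, Cin] window
            # contract the patch against every output filter at once
            out[i, j, :] = np.tensordot(patch, filt, axes=([0, 1, 2], [0, 1, 2]))
    return out

out = conv2d_valid(np.ones((3, 3, 1)), np.ones((2, 2, 1, 1)))
print(out[:, :, 0])  # [[4. 4.] [4. 4.]]
```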
Pooling
- tf.nn.avg_pool(value, ksize, strides, padding, name=None) #average pooling
- tf.nn.max_pool(value, ksize, strides, padding, name=None) #max pooling
- tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None) #returns both the max values and their flattened indices
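A minimal NumPy sketch of the two pooling reductions on a single 2-D map (2x2 window, stride 2, 'VALID' padding; illustrative, not the TF kernel):

```python
import numpy as np

# 2x2 pooling with stride 2 on a [H, W] map; op is the window reduction
# (np.max for max pooling, np.mean for average pooling).
def pool2x2(x, op):
    H, W = x.shape
    out = np.zeros((H // 2, W // 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = op(x[2*i:2*i+2, 2*j:2*j+2])
    return out

x = np.arange(16.0).reshape(4, 4)
print(pool2x2(x, np.max))   # [[ 5.  7.] [13. 15.]]
print(pool2x2(x, np.mean))  # [[ 2.5  4.5] [10.5 12.5]]
```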
Normalization
- tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) #L2 normalization along dimension dim: output = x / sqrt(max(sum(x**2), epsilon))
- tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None) #local response normalization; each element is normalized over a window of depth_radius neighboring channels
- tf.nn.moments(x, axes, name=None) #returns the mean and variance of x along axes
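The l2_normalize and moments semantics can be sketched directly from the formulas (pure NumPy, illustrative only):

```python
import numpy as np

# Sketch of tf.nn.l2_normalize: x / sqrt(max(sum(x**2), epsilon)) along dim.
def l2_normalize(x, dim, epsilon=1e-12):
    sq = np.sum(np.square(x), axis=dim, keepdims=True)
    return x / np.sqrt(np.maximum(sq, epsilon))

# Sketch of tf.nn.moments: mean and variance along the given axes.
def moments(x, axes):
    return np.mean(x, axis=tuple(axes)), np.var(x, axis=tuple(axes))

print(l2_normalize(np.array([3.0, 4.0]), 0))  # [0.6 0.8]
```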
Losses
- tf.nn.l2_loss(t,name=None) #sum(t^2)/2
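The l2_loss formula is small enough to verify by hand (NumPy sketch):

```python
import numpy as np

# Sketch of tf.nn.l2_loss: half the sum of squared entries, sum(t**2) / 2.
def l2_loss(t):
    return np.sum(np.square(t)) / 2.0

print(l2_loss(np.array([3.0, 4.0])))  # 12.5  (= (9 + 16) / 2)
```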
Classification
- tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) #sigmoid cross entropy, for independent (non-mutually-exclusive) classes
- tf.nn.softmax(logits, name=None) #softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))
- tf.nn.log_softmax(logits, name=None) #logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
- tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) #softmax cross entropy between logits and labels, for mutually exclusive classes
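The softmax formulas above compose into the cross-entropy loss; a NumPy sketch (with the standard max-subtraction for numerical stability, which leaves the formulas unchanged):

```python
import numpy as np

# Sketches of the classification formulas listed above.
def softmax(logits):
    z = logits - np.max(logits, axis=-1, keepdims=True)  # stability shift
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def log_softmax(logits):
    z = logits - np.max(logits, axis=-1, keepdims=True)
    return z - np.log(np.sum(np.exp(z), axis=-1, keepdims=True))

# Cross entropy over the last axis: -sum(labels * log_softmax(logits))
def softmax_cross_entropy(logits, labels):
    return -np.sum(labels * log_softmax(logits), axis=-1)

print(softmax(np.array([0.0, 0.0])))  # [0.5 0.5]
```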
RNN
- tf.nn.rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None) #builds a recurrent neural network from the RNNCell instance cell
- tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None) #builds a dynamic RNN from the RNNCell instance cell; unlike tf.nn.rnn, it unrolls dynamically according to the input and returns (outputs, state)
- tf.nn.state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None) #RNN whose state can be saved and restored through state_saver
- tf.nn.bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None) #bidirectional RNN; returns a 3-tuple (outputs, output_state_fw, output_state_bw)
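What tf.nn.rnn's unrolling amounts to can be sketched with a basic tanh cell in NumPy; the weight names (W_x, W_h, b) and shapes here are illustrative assumptions, not the TF implementation:

```python
import numpy as np

# Sketch of static RNN unrolling: apply one tanh cell per timestep,
# threading the state through; for a basic cell the output equals the state.
# inputs: list of [batch, input_dim] arrays, one per timestep.
def rnn_unroll(inputs, state, W_x, W_h, b):
    outputs = []
    for x_t in inputs:
        state = np.tanh(x_t @ W_x + state @ W_h + b)  # new state from input + old state
        outputs.append(state)
    return outputs, state

# Usage with illustrative shapes: batch 2, input_dim 3, state_dim 4.
ins = [np.zeros((2, 3)), np.zeros((2, 3))]
outs, final = rnn_unroll(ins, np.zeros((2, 4)),
                         np.zeros((3, 4)), np.zeros((4, 4)), np.zeros(4))
```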
## To be continued