1. Conv1D Layer
1.1 Syntax
keras.layers.convolutional.Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
1D convolution layer (i.e., temporal convolution), used to perform neighborhood filtering over a one-dimensional input signal. When this layer is used as the first layer of a model, the keyword argument input_shape must be provided. For example, (10, 128) denotes a sequence of length 10 in which each step is a 128-dimensional vector, while (None, 128) denotes a variable-length sequence of 128-dimensional vectors.
This layer convolves the input signal with the convolution kernel along a single spatial (or temporal) dimension. If use_bias=True, a bias term is added to the result; if activation is not None, the output is additionally passed through the activation function.
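For instance, a minimal sketch of using Conv1D as the first layer with input_shape=(10, 128); the filter count and kernel size below are illustrative, not taken from the original text:

from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
# sequences of length 10, each step a 128-dimensional vector
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(10, 128)))
model.summary()  # with 'valid' padding the output shape should be (None, 8, 64)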
1.2 Parameters
- filters: the number of convolution kernels (i.e., the dimensionality of the output).
- kernel_size: an integer or a list/tuple of a single integer, the length of the spatial (or temporal) window of the convolution kernel.
- strides: an integer or a list/tuple of a single integer, the stride of the convolution. Any strides value other than 1 is incompatible with any dilation_rate other than 1.
- padding: the zero-padding strategy, one of "valid", "same" or "causal". "causal" produces a causal (dilated) convolution, i.e., output[t] does not depend on input[t+1:]; this is useful when modeling temporal signals whose time order must not be violated (see WaveNet: A Generative Model for Raw Audio, section 2.1). "valid" performs only valid convolutions, i.e., the boundaries are not padded; "same" pads so that the convolution is also computed at the boundaries, which usually keeps the output length equal to the input length. (A padding sketch follows after this list.)
- activation: the activation function, given as the name of a predefined activation function (see activations) or as an element-wise Theano function. If not specified, no activation is applied (i.e., the linear activation a(x) = x is used).
- dilation_rate: an integer or a list/tuple of a single integer, the dilation rate of a dilated convolution. Any dilation_rate other than 1 is incompatible with any strides other than 1.
- use_bias: boolean, whether to use a bias term.
- kernel_initializer: the initializer for the kernel weights, given as the string name of a predefined initializer or as an initializer object. See initializers.
- bias_initializer: the initializer for the bias vector, given as the string name of a predefined initializer or as an initializer object. See initializers.
- kernel_regularizer: the regularizer applied to the kernel weights, a Regularizer object.
- bias_regularizer: the regularizer applied to the bias vector, a Regularizer object.
- activity_regularizer: the regularizer applied to the layer output, a Regularizer object.
- kernel_constraint: the constraint applied to the kernel weights, a Constraints object.
- bias_constraint: the constraint applied to the bias vector, a Constraints object.
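A small sketch comparing how the three padding modes affect the output length; the layer sizes and input shape below are illustrative:

from keras.models import Sequential
from keras.layers import Conv1D

# input: sequences of length 10 with 16 features per step
for pad in ['valid', 'same', 'causal']:
    model = Sequential()
    model.add(Conv1D(8, 3, padding=pad, input_shape=(10, 16)))
    print(pad, model.output_shape)
# expected: 'valid' -> (None, 8, 8); 'same' and 'causal' -> (None, 10, 8)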
1.3 Input shape
A 3D tensor of shape (samples, steps, input_dim).
1.4 Output shape
A 3D tensor of shape (samples, new_steps, filters). Because of padding and strides, new_steps may differ from the input steps.
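For 'valid' padding, new_steps follows the usual convolution arithmetic; the formula below is stated as an assumption for sanity checking, not quoted from the original text:

# new_steps for 'valid' padding:
#   new_steps = (steps - dilation_rate * (kernel_size - 1) - 1) // strides + 1
steps, kernel_size, strides, dilation_rate = 500, 7, 1, 1
print((steps - dilation_rate * (kernel_size - 1) - 1) // strides + 1)  # 494

This matches the (None, 494, 32) shape of conv1d_1 in the training example below.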
[Tips] Convolution1D can be viewed as a shortcut for Convolution2D: performing a 1D convolution on a (10, 32) signal is equivalent to performing a 2D convolution with a (filter_length, 32) kernel on it. [@3rduncle]
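A minimal sketch of this equivalence at the shape level (the layer sizes are illustrative, and only output shapes are compared, not learned weights):

from keras.models import Sequential
from keras.layers import Conv1D, Conv2D

# 1D convolution over a (10, 32) signal
m1 = Sequential()
m1.add(Conv1D(16, 5, input_shape=(10, 32)))
print(m1.output_shape)  # (None, 6, 16)

# equivalent 2D convolution: treat the signal as a 10 x 32 single-channel "image"
# and use a kernel spanning the full feature axis, i.e. (5, 32)
m2 = Sequential()
m2.add(Conv2D(16, (5, 32), input_shape=(10, 32, 1)))
print(m2.output_shape)  # (None, 6, 1, 16)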
2. MaxPooling1D
2.1 Syntax
keras.layers.pooling.MaxPooling1D(pool_size=2, strides=None, padding='valid')
2.2 Parameters
- pool_size: integer, the size of the pooling window.
- strides: integer or None, the downsampling factor; for example, 2 halves the output length. If None, it defaults to pool_size.
- padding: 'valid' or 'same'.
2.3 Input shape
- A 3D tensor of shape (samples, steps, features)
2.4 Output shape
- A 3D tensor of shape (samples, downsampled_steps, features)
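A small sketch of the downsampling behaviour (pool size and input shape are illustrative):

from keras.models import Sequential
from keras.layers import MaxPooling1D

model = Sequential()
model.add(MaxPooling1D(pool_size=2, input_shape=(10, 16)))
print(model.output_shape)  # (None, 5, 16): steps halved, features unchanged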
3. Example
3.1 Data preprocessing
from keras.datasets import imdb
from keras.preprocessing import sequence

max_features = 10000  # number of words to consider as features
max_len = 500  # cut texts after this number of words (among top max_features most common words)

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
outputs:
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 500)
x_test shape: (25000, 500)
3.2 Model training
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop

model = Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()

model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
outputs:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 500, 128)          1280000
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 494, 32)           28704
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 98, 32)            0
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 92, 32)            7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32)                0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 33
=================================================================
Total params: 1,315,937
Trainable params: 1,315,937
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 71s 4ms/step - loss: 0.8337 - acc: 0.5092 - val_loss: 0.6874 - val_acc: 0.5654
Epoch 2/10
20000/20000 [==============================] - 73s 4ms/step - loss: 0.6699 - acc: 0.6383 - val_loss: 0.6641 - val_acc: 0.6590
Epoch 3/10
20000/20000 [==============================] - 86s 4ms/step - loss: 0.6234 - acc: 0.7532 - val_loss: 0.6079 - val_acc: 0.7436
Epoch 4/10
20000/20000 [==============================] - 88s 4ms/step - loss: 0.5256 - acc: 0.8075 - val_loss: 0.4843 - val_acc: 0.8060
Epoch 5/10
20000/20000 [==============================] - 80s 4ms/step - loss: 0.4125 - acc: 0.8478 - val_loss: 0.4287 - val_acc: 0.8318
Epoch 6/10
20000/20000 [==============================] - 88s 4ms/step - loss: 0.3495 - acc: 0.8659 - val_loss: 0.4164 - val_acc: 0.8344
Epoch 7/10
20000/20000 [==============================] - 86s 4ms/step - loss: 0.3128 - acc: 0.8601 - val_loss: 0.4437 - val_acc: 0.8184
Epoch 8/10
20000/20000 [==============================] - 82s 4ms/step - loss: 0.2819 - acc: 0.8454 - val_loss: 0.4282 - val_acc: 0.8024
Epoch 9/10
20000/20000 [==============================] - 73s 4ms/step - loss: 0.2574 - acc: 0.8302 - val_loss: 0.4700 - val_acc: 0.7780
Epoch 10/10
20000/20000 [==============================] - 71s 4ms/step - loss: 0.2345 - acc: 0.8064 - val_loss: 0.5111 - val_acc: 0.7470
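As a small follow-up, the padded test set loaded in the preprocessing step can be used to check the trained model; a minimal sketch using the variable names from the code above:

test_loss, test_acc = model.evaluate(x_test, y_test)
print('test acc:', test_acc)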