Preface
Having gone through the introduction to neural networks and the backpropagation (BP) algorithm, this post works through a small experiment from the Stanford UFLDL tutorial: learning a compressed representation of images, using a neural network model trained with backpropagation.
Theory
In supervised learning the training samples carry labels, and neural networks are usually trained as supervised models. Here we discuss the autoencoder neural network, an unsupervised method: it is trained by making the output reproduce the input itself.
As the figure shows, the network has a single hidden layer, and the output layer has the same number of units as the input layer. If the hidden layer has fewer units than the input layer, the model learns a compressed representation of the input, i.e. it performs dimensionality reduction on the input (a nonlinear form of dimensionality reduction). In fact, even if the hidden layer has more units than the input layer, we can force most hidden activations to be close to 0, i.e. make them sparse, and the network again learns a compressed representation. We require the output layer to match the input layer; in other words, the hidden layer must be able to reconstruct the input, which is what makes the learned compressed representation meaningful.
Recall the loss function introduced earlier:
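In the UFLDL notes this is the squared-error cost plus a weight-decay term (notation assumed here: m training examples, weight-decay coefficient λ, weights W^{(l)} of layer l):

J(W,b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|h_{W,b}\left(x^{(i)}\right) - y^{(i)}\right\|^{2} + \frac{\lambda}{2}\sum_{l}\sum_{i}\sum_{j}\left(W_{ji}^{(l)}\right)^{2}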
Here y is the target output, which is set equal to the input x itself.
The sparse autoencoder adds one more term: a sparsity penalty. It constrains the hidden layer so that most hidden units stay inactive (activations close to 0). The sparsity (average activation) of hidden unit j is defined as
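Following the UFLDL notation, with a_j^{(2)}(x^{(i)}) the activation of hidden unit j on sample x^{(i)} and m the number of samples:

\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m}\left[a_j^{(2)}\left(x^{(i)}\right)\right]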
The expression above averages hidden unit j's activation over the whole training set; doing this for every hidden unit gives a vector (rho_hat in the code below).
We choose a desired sparsity level for the hidden layer, say ρ (the sparsityParam in the code), and we want ρ̂_j ≈ ρ for every hidden node j.
How do we measure the gap between the actual and the desired sparsity?
The penalty is in fact the KL divergence between two Bernoulli random variables, one with mean ρ and one with mean ρ̂_j (see my earlier blog post on information entropy).
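Summed over the s_2 hidden units, the penalty takes the standard form:

\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho \,\middle\|\, \hat{\rho}_j\right) = \sum_{j=1}^{s_2}\left[\rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\right]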
The loss function then becomes
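With β (the beta parameter in the code) weighting the sparsity penalty:

J_{\mathrm{sparse}}(W,b) = J(W,b) + \beta\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho \,\middle\|\, \hat{\rho}_j\right)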
Because of the added sparsity term, the residual (error) of the hidden-layer (layer 2) nodes during backpropagation changes as well.
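In the UFLDL notation the hidden-layer residual becomes:

\delta_i^{(2)} = \left(\left(\sum_{j=1}^{s_3} W_{ji}^{(2)}\delta_j^{(3)}\right) + \beta\left(-\frac{\rho}{\hat{\rho}_i} + \frac{1-\rho}{1-\hat{\rho}_i}\right)\right) f'\left(z_i^{(2)}\right)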
Experiment
The exercise is Exercise:Sparse Autoencoder; the files to implement are sampleIMAGES.m, sparseAutoencoderCost.m, and computeNumericalGradient.m.
Experiment steps:
- Generate the training set
- Sparse autoencoder objective function
- Gradient checking
- Train the sparse autoencoder
- Visualization
In the final visualization step, the input x that maximally activates each hidden unit is displayed as an image (see the formula below).
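For hidden unit i, the norm-bounded maximizing input given in the UFLDL notes is, pixel by pixel,

x_j = \frac{W_{ij}^{(1)}}{\sqrt{\sum_{j}\left(W_{ij}^{(1)}\right)^{2}}}

so the visualization amounts to displaying the (rescaled) rows of W1 as 8x8 images.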
The code is as follows:
sampleIMAGES.m
function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

load IMAGES;    % load images from disk

patchsize = 8;  % we'll use 8x8 patches
numpatches = 10000;

% Initialize patches with zeros. Your code will fill in this matrix--one
% column per patch, 10000 columns.
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data
%  from IMAGES.
%
%  IMAGES is a 3D array containing 10 images
%  For instance, IMAGES(:,:,6) is a 512x512 array containing the 6th image,
%  and you can type "imagesc(IMAGES(:,:,6)), colormap gray;" to visualize
%  it. (The contrast on these images look a bit off because they have
%  been preprocessed using "whitening." See the lecture notes for
%  more details.) As a second example, IMAGES(21:30,21:30,1) is an image
%  patch corresponding to the pixels in the block (21,21) to (30,30) of
%  Image 1

[m,n,num] = size(IMAGES);
for i = 1:numpatches
    j  = randi(num);
    bx = randi(m-patchsize+1);
    by = randi(n-patchsize+1);
    block = IMAGES(bx:bx+patchsize-1, by:by+patchsize-1, j);
    patches(:,i) = block(:);
end

%% ---------------------------------------------------------------
% For the autoencoder to work well we need to normalize the data
% Specifically, since the output of the network is bounded between [0,1]
% (due to the sigmoid activation function), we have to make sure
% the range of pixel values is also bounded between [0,1]
patches = normalizeData(patches);

end

%% ---------------------------------------------------------------
function patches = normalizeData(patches)

% Squash data to [0.1, 0.9] since we use sigmoid as the activation
% function in the output layer

% Remove DC (mean of images).
patches = bsxfun(@minus, patches, mean(patches));

% Truncate to +/-3 standard deviations and scale to -1 to 1
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;

% Rescale from [-1,1] to [0.1,0.9]
patches = (patches + 1) * 0.4 + 0.1;

end
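To sanity-check the sampling step, a handful of patches can be displayed; a minimal sketch, assuming the display_network.m helper shipped with the starter code is on the MATLAB path:

% Sketch: show 200 randomly chosen 8x8 patches (assumes display_network.m
% from the UFLDL starter code is available).
patches = sampleIMAGES();
display_network(patches(:, randi(size(patches,2), 200, 1)), 8);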
sparseAutoencoderCost.m
function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
% and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term
% [(1/m) Delta W^{(1)} + lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
% Vectorized implementation -- much faster than the per-sample loop below.
Jcost = 0;    % squared-error term
Jweight = 0;  % weight-decay (regularization) term
Jsparse = 0;  % sparsity penalty term
[n, m] = size(data);  % m = number of samples (10000 here), n = sample dimension (64 here)

% Feedforward pass: compute z (linear combination) and a (activation) for the
% hidden layer and the output layer. Each column of data is one sample.
z2 = W1*data + repmat(b1,1,m);  % each column of W1*data is one sample's linear combination
                                % into the hidden layer; repmat tiles b1 into an m-column matrix
a2 = sigmoid(z2);
z3 = W2*a2 + repmat(b2,1,m);
a3 = sigmoid(z3);

% Average squared error between predictions and targets
Jcost = (0.5/m)*sum(sum((a3-data).^2));

% Weight-decay penalty
Jweight = (1/2)*(sum(sum(W1.^2))+sum(sum(W2.^2)));

% Sparsity penalty
rho_hat = (1/m)*sum(a2,2);
Jsparse = sum(sparsityParam.*log(sparsityParam./rho_hat) + ...
              (1-sparsityParam).*log((1-sparsityParam)./(1-rho_hat)));

% Total cost
cost = Jcost + lambda*Jweight + beta*Jsparse;

% Backpropagate the error terms
delta3 = -(data-a3).*fprime(a3);  % each column is the error for one sample
sterm = beta*(-sparsityParam./rho_hat + (1-sparsityParam)./(1-rho_hat));
delta2 = (W2'*delta3 + repmat(sterm,1,m)).*fprime(a2);

% Gradients
W2grad = delta3*a2';
W1grad = delta2*data';
W2grad = W2grad/m + lambda*W2;
W1grad = W1grad/m + lambda*W1;
b2grad = sum(delta3,2)/m;  % the derivative w.r.t. b is a vector, so sum delta3 over its columns
b1grad = sum(delta2,2)/m;

%%----------------------------------
% % Per-sample (non-vectorized) implementation
% [n m] = size(data);
% a2 = zeros(hiddenSize,m);
% a3 = zeros(visibleSize,m);
% Jcost = 0;                     % squared-error term
% rho_hat = zeros(hiddenSize,1); % average activation of each hidden unit
% Jweight = 0;                   % weight-decay term
% Jsparse = 0;                   % sparsity penalty term
%
% for i=1:m
%     % feedforward pass
%     z2(:,i) = W1*data(:,i)+b1;
%     a2(:,i) = sigmoid(z2(:,i));
%     z3(:,i) = W2*a2(:,i)+b2;
%     a3(:,i) = sigmoid(z3(:,i));
%     Jcost = Jcost+sum((a3(:,i)-data(:,i)).*(a3(:,i)-data(:,i)));
%     rho_hat = rho_hat+a2(:,i);  % accumulate hidden-layer activations over samples
% end
%
% rho_hat = rho_hat/m;  % average activation
% Jsparse = sum(sparsityParam*log(sparsityParam./rho_hat) + (1-sparsityParam)*log((1-sparsityParam)./(1-rho_hat)));  % sparsity cost
% Jweight = sum(W1(:).*W1(:))+sum(W2(:).*W2(:));  % weight-decay term
% cost = Jcost/2/m + Jweight/2*lambda + beta*Jsparse;  % total cost
%
% for i=1:m
%     % backpropagation
%     delta3 = -(data(:,i)-a3(:,i)).*fprime(a3(:,i));
%     delta2 = (W2'*delta3 + beta*(-sparsityParam./rho_hat+(1-sparsityParam)./(1-rho_hat))).*fprime(a2(:,i));
%
%     W2grad = W2grad + delta3*a2(:,i)';
%     W1grad = W1grad + delta2*data(:,i)';
%     b2grad = b2grad + delta3;
%     b1grad = b1grad + delta2;
% end
% % gradients
% W1grad = W1grad/m + lambda*W1;
% W2grad = W2grad/m + lambda*W2;
% b1grad = b1grad/m;
% b2grad = b2grad/m;

% -------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc).  Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%% Derivative of f(z)
% f(z) = sigmoid(z) = 1./(1+exp(-z))
% a = 1./(1+exp(-z))
% f'(z) = a.*(1-a)
function dz = fprime(a)
    dz = a.*(1-a);
end

%%
%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
computeNumericalGradient.m
function numgrad = computeNumericalGradient(J, theta)
% numgrad = computeNumericalGradient(J, theta)
% theta: a vector of parameters
% J: a function that outputs a real-number. Calling y = J(theta) will return the
% function value at theta.

% Initialize numgrad with zeros
numgrad = zeros(size(theta));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions:
% Implement numerical gradient checking, and return the result in numgrad.
% (See Section 2.3 of the lecture notes.)
% You should write code so that numgrad(i) is (the numerical approximation to) the
% partial derivative of J with respect to the i-th input argument, evaluated at theta.
% I.e., numgrad(i) should be the (approximately) the partial derivative of J with
% respect to theta(i).
%
% Hint: You will probably want to compute the elements of numgrad one at a time.

EPSILON = 1e-4;
for i = 1:length(numgrad)
    theta1 = theta;
    theta1(i) = theta1(i) + EPSILON;
    theta2 = theta;
    theta2(i) = theta2(i) - EPSILON;
    % central difference approximation
    numgrad(i) = (J(theta1) - J(theta2)) / (2*EPSILON);
end

%% ---------------------------------------------------------------
end
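Once sparseAutoencoderCost.m and computeNumericalGradient.m are in place, the analytic gradient can be checked against the numerical one. A minimal sketch of such a check, assuming the starter code's initializeParameters.m and the parameter values suggested in the exercise:

% Sketch: gradient check on a small subset of patches (hypothetical driver,
% parameter values as suggested in the exercise).
visibleSize = 8*8;          % number of input units
hiddenSize = 25;            % number of hidden units
lambda = 0.0001;            % weight decay
sparsityParam = 0.01;       % desired average activation rho
beta = 3;                   % weight of the sparsity penalty
patches = sampleIMAGES();
theta = initializeParameters(hiddenSize, visibleSize);   % random init from the starter code

costFun = @(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                     lambda, sparsityParam, beta, patches(:,1:10));
[cost, grad] = costFun(theta);
numgrad = computeNumericalGradient(costFun, theta);

% Relative difference between the two gradients; should be tiny (around 1e-9).
diff = norm(numgrad - grad) / norm(numgrad + grad);
disp(diff);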
With the vectorized implementation, the computation finishes in only a few tens of seconds. The final result is shown below:
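For completeness, training and visualization follow the pattern of the exercise's driver script, which optimizes the cost with minFunc (L-BFGS) and then displays the learned W1 filters. A minimal sketch, assuming minFunc, initializeParameters.m, and display_network.m from the starter code are on the path:

% Sketch: train the sparse autoencoder and visualize the learned features
% (hypothetical driver, modeled on the exercise's train.m).
visibleSize = 8*8; hiddenSize = 25;
lambda = 0.0001; sparsityParam = 0.01; beta = 3;
patches = sampleIMAGES();
theta = initializeParameters(hiddenSize, visibleSize);

options.Method = 'lbfgs';   % minFunc with L-BFGS
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                lambda, sparsityParam, beta, patches), theta, options);

% Each row of W1 is one hidden unit's filter; display them as 8x8 images.
W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1', 12);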