Mahout Source Code: KMeansDriver Analysis, Part 5: CIMapper


    Continuing from the previous post, this part focuses on the map operation in CIMapper:

     Vector probabilities = classifier.classify(value.get()); // line 1
     Vector selections = policy.select(probabilities);        // line 2
     for (Iterator<Element> it = selections.iterateNonZero(); it.hasNext();) {
       Element el = it.next();
       classifier.train(el.index(), value.get(), el.get());   // line 3
     }

    How should these lines be understood?

    Suppose my randomly chosen center vectors are:

    2.9,2.9
    3.0,3.0

    and all of my input vectors are:

    [{1:8.1,0:8.1}, {1:8.0,0:8.0}, {1:7.0,0:7.0}, {1:7.1,0:7.1}, {1:6.1,0:6.1}, {1:6.2,0:6.2}, {1:9.0,0:9.0}, {1:2.0,0:2.0}, {1:7.1,0:7.1}, {1:1.0,0:1.0}, {}, {1:2.1,0:2.1}, {1:2.9,0:2.9}, {1:1.1,0:1.1}, {1:0.1,0:0.1}, {1:3.0,0:3.0}]

    Line 1 takes one input vector and computes a score against each center vector; if I had three centers, probabilities would have size 3. Line 2 finds the index whose probability is largest (why the largest rather than the smallest? Because the distance is inverted when the score is computed, so what was small becomes large; I will walk through the exact computation another time). Line 3 then uses that index to assign the input vector to the matching cluster. How does the assignment work? Take the first input vector [8.1, 8.1]: it should be assigned to [3.0, 3.0], so after this first record that cluster has s0 = 2, s1 = 8.1 + 3.0 and s2 = 8.1*8.1 + 3.0*3.0 per dimension (the center itself already counts as one observation, which is why s0 starts at 1), and so on. After all the input has been consumed, the two clusters hold:
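
    To make both mechanisms concrete, here is a small standalone sketch. It reflects my reading of the Mahout source (DistanceMeasureCluster.pdf for the inverted distance, AbstractCluster.observe for the running sums), so verify it against your Mahout version; the class itself is not part of Mahout:

    import org.apache.mahout.common.distance.DistanceMeasure;
    import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.Vector;

    // Standalone sketch of the two mechanisms behind the three lines above.
    public class MapStepSketch {

      // "Line 1/2": the score inverts the distance, so the NEAREST center
      // gets the LARGEST value; this mirrors DistanceMeasureCluster.pdf.
      static double pdf(Vector x, Vector center, DistanceMeasure measure) {
        return 1.0 / (1.0 + measure.distance(center, x));
      }

      // "Line 3": training a cluster maintains three running sums, as in
      // AbstractCluster.observe: s0 counts points, s1 sums them, s2 sums squares.
      private double s0;
      private Vector s1;
      private Vector s2;

      void observe(Vector x, double weight) {
        s0 += weight;
        Vector wx = x.times(weight);
        s1 = (s1 == null) ? wx : s1.plus(wx);
        Vector wxx = x.times(x).times(weight);
        s2 = (s2 == null) ? wxx : s2.plus(wxx);
      }

      public static void main(String[] args) {
        MapStepSketch cluster = new MapStepSketch();
        Vector center = new DenseVector(new double[] {3.0, 3.0});
        cluster.observe(center, 1.0); // the center itself counts once: s0 starts at 1
        Vector x = new DenseVector(new double[] {8.1, 8.1});
        cluster.observe(x, 1.0);      // after the first record: s0=2, s1=[11.1,11.1]
        System.out.println("s0=" + cluster.s0 + " s1=" + cluster.s1 + " s2=" + cluster.s2);
        System.out.println("pdf to [3.0,3.0]: " + pdf(x, center, new EuclideanDistanceMeasure()));
      }
    }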

    [2.9,2.9]:  s0=8,  s1={1:12.1,0:12.1},  s2={1:27.450000000000003,0:27.450000000000003}

    [3.0,3.0]:  s0=10,  s1={1:64.60000000000001,0:64.60000000000001},  s2={1:454.08000000000004,0:454.08000000000004}

    These two clusters are then emitted to the reduce phase; a rough sketch of how CIMapper emits them follows.
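
    In the real CIMapper the emission happens in cleanup(). As far as I remember the 0.7-era source (worth verifying against your Mahout version), each trained cluster is written out keyed by its index, so the shuffle later groups the partial clusters for the same index from all mappers. A paraphrased sketch:

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.mahout.clustering.Cluster;
    import org.apache.mahout.clustering.classify.ClusterClassifier;
    import org.apache.mahout.clustering.iterator.ClusterWritable;
    import org.apache.mahout.math.VectorWritable;

    // Paraphrase (from memory, not the verbatim Mahout class) of how CIMapper
    // hands its trained clusters to the reducer in cleanup().
    public class CleanupSketch
        extends Mapper<WritableComparable<?>, VectorWritable, IntWritable, ClusterWritable> {

      private ClusterClassifier classifier; // trained in map(), as shown earlier

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        List<Cluster> clusters = classifier.getModels();
        ClusterWritable cw = new ClusterWritable();
        for (int index = 0; index < clusters.size(); index++) {
          cw.setValue(clusters.get(index));
          context.write(new IntWritable(index), cw); // one (clusterIndex, cluster) pair each
        }
        super.cleanup(context);
      }
    }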


    I then ran the whole job end to end; the first output, clusters-1, contains two clusters: CL-12{n=8 c=[1.513, 1.513] r=[1.069, 1.069]} and CL-15{n=10 c=[6.460, 6.460] r=[1.917, 1.917]}.
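
    These values agree with the running sums above: the center is s1/s0 and the radius is, per dimension, sqrt(s2/s0 - (s1/s0)^2), i.e. the standard deviation (my reading of AbstractCluster.computeParameters). The arithmetic below reproduces the printed values:

    // Re-derives the printed centers and radii from the running sums above:
    // c = s1/s0 and r = sqrt(s2/s0 - c*c) per dimension.
    public class CenterRadiusCheck {
      public static void main(String[] args) {
        check(8, 12.1, 27.450000000000003);                // -> n=8  c=1.513 r=1.069
        check(10, 64.60000000000001, 454.08000000000004);  // -> n=10 c=6.460 r=1.917
      }

      static void check(double s0, double s1, double s2) {
        double c = s1 / s0;
        double r = Math.sqrt(s2 / s0 - c * c);
        System.out.printf("n=%.0f c=%.3f r=%.3f%n", s0, c, r);
      }
    }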

    I then also imitated the Reducer:

    package mahout.fansy.kmeans;

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.mahout.clustering.Cluster;
    import org.apache.mahout.clustering.classify.ClusterClassifier;
    import org.apache.mahout.clustering.iterator.ClusterWritable;
    import org.apache.mahout.clustering.iterator.ClusteringPolicy;
    import org.apache.mahout.common.iterator.sequencefile.PathFilters;
    import org.apache.mahout.common.iterator.sequencefile.PathType;
    import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirValueIterable;
    import org.apache.mahout.math.Vector;
    import org.apache.mahout.math.Vector.Element;
    import org.apache.mahout.math.VectorWritable;

    import com.google.common.collect.Lists;

    public class TestCIReducer {

      private static ClusterClassifier classifier;

      private static ClusteringPolicy policy;

      public static void main(String[] args) throws IOException {
        setup();
        reduce();
      }

      /**
       * Mimics CIMapper#setup: loads the prior clusters and the clustering policy.
       * @throws IOException
       */
      public static void setup() throws IOException {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "hadoop:9001"); // can this line be dropped?

        String priorClustersPath = "hdfs://hadoop:9000/user/hadoop/out/kmeans-output/clusters-0";
        classifier = new ClusterClassifier();
        classifier.readFromSeqFiles(conf, new Path(priorClustersPath));
        policy = classifier.getPolicy();
        policy.update(classifier);
      }

      /**
       * Mimics CIMapper#map: classifies every input vector and trains the
       * selected cluster with it.
       */
      public static void map() {
        List<VectorWritable> vList = getInputData();
        for (VectorWritable value : vList) {
          Vector probabilities = classifier.classify(value.get());
          Vector selections = policy.select(probabilities);
          for (Iterator<Element> it = selections.iterateNonZero(); it.hasNext();) {
            Element el = it.next();
            classifier.train(el.index(), value.get(), el.get());
          }
        }
      }

      /**
       * Mimics CIMapper#cleanup: wraps each trained cluster in a ClusterWritable.
       */
      public static List<ClusterWritable> cleanup() {
        List<Cluster> clusters = classifier.getModels();
        List<ClusterWritable> cList = Lists.newArrayList();
        ClusterWritable cw = null;
        for (int index = 0; index < clusters.size(); index++) {
          cw = new ClusterWritable();
          cw.setValue(clusters.get(index));
          cList.add(cw);
          //System.out.println("index:" + index + ",cw :" + cw.getValue().getCenter());
        }
        return cList;
      }

      /**
       * Mimics CIReducer#reduce: folds all clusters into the first one via observe().
       */
      public static void reduce() {
        map(); // populates classifier
        List<ClusterWritable> cList = cleanup();
        ClusterWritable first = null;
        for (ClusterWritable cw : cList) {
          if (first == null) {
            first = cw;
          } else {
            first.getValue().observe(cw.getValue());
          }
        }
        List<Cluster> models = new ArrayList<Cluster>();
        models.add(first.getValue());
        classifier = new ClusterClassifier(models, policy);
        classifier.close();
        System.out.println("value:" + first);
      }

      /**
       * Reads the input vectors from the sequence file(s).
       * @return the input vectors
       */
      public static List<VectorWritable> getInputData() {
        String input = "hdfs://hadoop:9000/user/hadoop/out/kmeans-in-transform/part-r-00000";
        Path path = new Path(input);
        Configuration conf = new Configuration();
        List<VectorWritable> vList = Lists.newArrayList();
        for (VectorWritable vw : new SequenceFileDirValueIterable<VectorWritable>(path, PathType.LIST,
            PathFilters.logsCRCFilter(), conf)) {
          vList.add(vw);
        }
        return vList;
      }
    }
    

    But in the end only a single cluster was printed, so the result looks wrong. The bug is probably in my imitation code; my current guess is sketched below, and I will continue tomorrow...
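
    My guess: in a real job, Hadoop invokes CIReducer's reduce(key, values) once per key, and the key is the cluster index, so only the partial clusters that share an index get merged. My reduce() above folds every cluster into first, which collapses both centers into one cluster. A minimal sketch of the per-key merge I think is missing; the grouped map is a hypothetical stand-in for what Hadoop's shuffle does between map and reduce:

    import java.util.List;
    import java.util.Map;

    import org.apache.mahout.clustering.iterator.ClusterWritable;

    import com.google.common.collect.Lists;

    // Sketch: merge only the partial clusters that share a cluster index,
    // instead of merging everything into a single cluster.
    public class PerKeyMergeSketch {

      public static List<ClusterWritable> reducePerKey(Map<Integer, List<ClusterWritable>> grouped) {
        List<ClusterWritable> merged = Lists.newArrayList();
        for (List<ClusterWritable> values : grouped.values()) {
          ClusterWritable first = null;
          for (ClusterWritable cw : values) {
            if (first == null) {
              first = cw;
            } else {
              first.getValue().observe(cw.getValue()); // merge partial statistics
            }
          }
          merged.add(first); // one output cluster per index, as CIReducer emits
        }
        return merged;
      }
    }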



    Share, be happy, grow


    Please credit the source when reposting: http://blog.csdn.net/fansy1990



