Splitting genomic data by chromosome and writing it out: a speed showdown between Python, awk, and R data.table


The genomic data is very large, and I was worried the system would run out of memory if I processed it further in R, so I decided to split the files by chromosome. Python, awk, and R can all do this quickly and simply, but is there a speed difference between them? Before running a few 50 GB files, I first tested each script on a 244 MB file and compared the timings.

First, awk. awk processes the file line by line, has its own little language, and is very flexible; the actual splitting is a single line of code. Time: 24 s.

    #!/bin/sh
    # Split a delimited genome file into one output file per chromosome
    # (column 1) and report the elapsed time.
    main()
    {
        start_s=`date +%s`
        # Append the first three fields of each line to "<inputfile>_<chromosome>".
        awk -F "$sep" -v out="$inputfile" '{print $1","$2","$3 >> (out"_"$1)}' "$inputfile"
        end_s=`date +%s`
        echo "Finished in $((end_s - start_s)) s"
    }

    if [ $# -eq 2 ]; then
        sep=$1
        inputfile=$2
        main
    else
        echo "usage: SplitChr.sh sep inputfile"
        echo "eg: SplitChr.sh , test.csv"
    fi

Next, Python. Python is simple and quick to write, so the script came together fast. It also works line by line, with one extra detail compared with the awk version: only the chromosomes we actually want are written out. Time: 19.9 s.

    #!/usr/bin/python
    # Split a delimited genome file into one output file per chromosome
    # (column 1), keeping only chr1-chr22, chrX and chrY. (Python 2)
    import sys
    import time

    def main():
        if len(sys.argv) != 3:
            print "usage : SplitChr sep inputfile eg: SplitChr ',' test.txt"
            exit()
        sep = sys.argv[1]
        filename = sys.argv[2]
        f = open(filename, 'r')
        header = f.readline()
        if len(header.split(sep)) < 2:
            print "The sep can't be recognized !"
            exit()
        # Chromosomes to keep: chr1-chr22 plus chrX and chrY.
        chrLst = range(1, 23)
        chrLst.extend(["X", "Y"])
        chrLst = ["chr" + str(i) for i in chrLst]
        # One output file per chromosome, each starting with the header line.
        outputdic = {}
        for chrI in chrLst:
            output = filename + "_" + chrI
            outputdic[chrI] = open(output, 'w')
            outputdic[chrI].write(header)
        # Stream the remaining lines and dispatch each one by its chromosome.
        for eachline in f:
            tmpLst = eachline.strip().split(sep)
            tmpChr = tmpLst[0]
            if tmpChr in chrLst:
                outputdic[tmpChr].write(eachline)
        # Close all handles so buffered output is flushed before the clock stops.
        f.close()
        for handle in outputdic.values():
            handle.close()
        end = time.clock()
        print "read: %f s" % (end - start)

    if __name__ == '__main__':
        start = time.clock()
        main()
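
The script above is Python 2 (print statements, time.clock). Purely as a reference for readers on Python 3, a minimal sketch of the same streaming approach might look like the following; it is not the script that was benchmarked, and the file name SplitChr3.py is just a placeholder:

    #!/usr/bin/env python3
    # Minimal Python 3 sketch of the same splitter (not the benchmarked script).
    import sys
    import time

    def main(sep, filename):
        chroms = ["chr%s" % i for i in list(range(1, 23)) + ["X", "Y"]]
        with open(filename) as fin:
            header = fin.readline()
            if len(header.split(sep)) < 2:
                sys.exit("The sep can't be recognized!")
            # One output file per chromosome, each starting with the header line.
            outputs = {c: open(filename + "_" + c, "w") for c in chroms}
            for handle in outputs.values():
                handle.write(header)
            for line in fin:
                # The dict lookup doubles as the "is this a wanted chromosome" test.
                handle = outputs.get(line.split(sep, 1)[0])
                if handle is not None:
                    handle.write(line)
            for handle in outputs.values():
                handle.close()

    if __name__ == '__main__':
        if len(sys.argv) != 3:
            sys.exit("usage: SplitChr3.py sep inputfile eg: SplitChr3.py ',' test.txt")
        start = time.time()
        main(sys.argv[1], sys.argv[2])
        print("Finished in %.1f s" % (time.time() - start))

The only deliberate changes are time.time() for wall-clock timing and a single dict lookup per line instead of a separate list membership test.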

Finally, R with the data.table package. data.table is an upgraded data.frame with big speed improvements, but does it actually have an edge over awk and Python?

    #!/usr/bin/Rscript
    # Split a delimited genome file into one output file per chromosome using
    # data.table. The chromosome column of the input is expected to be named "chr".
    library(data.table)

    main <- function(filename, sep){
        started.at <- proc.time()
        # Read the whole file into memory in one go.
        dt <- fread(filename, sep=sep, header=T)
        chrLst <- lapply(c(1:22, "X", "Y"), function(x) paste("chr", x, sep=""))
        for (chrI in chrLst){
            outputfile <- paste(filename, "_", chrI, sep="")
            # Keyed subset on the chr column, then write with fwrite.
            fwrite(dt[.(chrI), , on=.(chr)], file=outputfile, sep=sep)
        }
        cat("Finished in", timetaken(started.at), "\n")
    }

    arg <- commandArgs(T)
    if (length(arg) == 2){
        sep <- arg[1]
        filename <- arg[2]
        main(filename, sep)
    }else{
        cat("usage: SplitChr.R sep inputfile eg: SplitChr.R '\t' test.csv", "\n")
    }

It finished in 10.6 seconds: as soon as the data was read in, the subsetting and writing were essentially instantaneous, so the total time stayed low.

Summary

Although awk and Python both process the file line by line, the results above suggest that awk's internals are not as fast as Python's; awk does win on brevity, since the whole job fits in one line. As for why Python is slower than data.table, my guess is that data.table is written in C and speeds things up with multithreaded writing, hashed lookups, and pass-by-reference. Of course, these results are only a rough reference.
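
The hashed-lookup point applies on the Python side too: `if tmpChr in chrLst` in the script above scans a 24-element list for every line, whereas a set does the same test through a hash table. A tiny standard-library sketch of the difference (toy timing, not part of the benchmark above):

    # Toy comparison of list vs set membership tests; absolute numbers will
    # vary by machine, and this was not part of the original benchmark.
    import timeit

    chrom_list = ["chr%s" % i for i in list(range(1, 23)) + ["X", "Y"]]
    chrom_set = set(chrom_list)

    # "chrY" sits near the end of the list, so the list scan is close to worst case.
    print(timeit.timeit('"chrY" in chrom_list', globals=globals(), number=1000000))
    print(timeit.timeit('"chrY" in chrom_set', globals=globals(), number=1000000))

With only 24 chromosomes the per-line saving is tiny, but it illustrates the kind of constant-factor work that data.table pushes down into optimized C code.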

Original article: https://www.cnblogs.com/ywliao/p/6621804.html