• Comparing multiprocessing and multithreading for file processing


    Process the files with multiprocessing and with multithreading respectively, collecting the results into a single list:

    1. Multiprocessing:

    import multiprocessing,cjson,os,collections,datetime
    from multiprocessing import Process,freeze_support,Manager,Pool,Queue
    
    def handlefile(lock,rst,fp):
        # Read one JSON-lines file and append every record's s-ip field to the shared list.
        lst_tmp=[]
        #print type(rst)
        with open(fp,'rb') as fo:
            for line in fo:
                line = cjson.decode(line)
                lst_tmp.append(line['s-ip'])
        #print collections.Counter(lst_tmp)
        lock.acquire()                    # serialize writes to the shared Manager list
        rst.extend(lst_tmp)
        lock.release()
    
    
    if __name__ == '__main__':
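        # Manager().Lock() is used here instead of multiprocessing.Lock() because a plain
        # Lock cannot be passed as an apply_async argument (it is not picklable); the
        # Manager returns a proxy object that Pool workers can share.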
        lock = Manager().Lock()
        rst = Manager().list()
    
        starttime = datetime.datetime.now()
        f1 = 'e:\logtest\iis__20160519105745.json'
        f2 = 'e:\logtest\iis__20160519105816.json'
        f3 = 'e:\logtest\iis_IDC-ExFE01_20160524134616.json'
        f4 = 'e:\logtest\iis_IDC-ExFE01_20160524134955.json'
        f5 = 'e:\logtest\iis_IDC-ExFE01_20160524134616.json'
        f6 = 'e:\logtest\iis_IDC-ExFE01_20160524134955.json'
        files = [f1,f2,f3,f4,f5,f6]
        p=Pool()                          # worker count defaults to the number of CPU cores
        for file in files:
            p.apply_async(handlefile,args=(lock,rst,file))
        p.close()
        p.join()
    
        print collections.Counter(rst)
    
        print (datetime.datetime.now() - starttime).total_seconds() # elapsed: 16.631s

    2. Multithreading:

    import threading,cjson,collections,datetime
    rst = []                              # shared result list, protected by mutex
    def query(mutex,fp):
        lst_tmp=[]
        #print type(rst)
        with open(fp,'rb') as fo:
            for line in fo:
                line = cjson.decode(line)
                lst_tmp.append(line['s-ip'])
        #print collections.Counter(lst_tmp)
        mutex.acquire()                   # could be rewritten as "with mutex:" instead of acquire()/release()
        rst.extend(lst_tmp)
        mutex.release()
    
    
    if __name__ == '__main__':
        threads=[]
        mutex=threading.Lock()
        starttime = datetime.datetime.now()
        f1 = 'e:\logtest\iis__20160519105745.json'
        f2 = 'e:\logtest\iis__20160519105816.json'
        f3 = 'e:\logtest\iis_IDC-ExFE01_20160524134616.json'
        f4 = 'e:\logtest\iis_IDC-ExFE01_20160524134955.json'
        f5 = 'e:\logtest\iis_IDC-ExFE01_20160524134616.json'
        f6 = 'e:\logtest\iis_IDC-ExFE01_20160524134955.json'
        files = [f1,f2,f3,f4,f5,f6]
    
        for filepath in files:
            t = threading.Thread(target=query,args=(mutex,filepath))
            t.setDaemon(True)
            t.start()
            threads.append(t)
        for t in threads:
            t.join()
    
        print collections.Counter(rst)
    
        print (datetime.datetime.now() - starttime).total_seconds() # elapsed: 4.425s
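    As noted in the comment above, the acquire()/release() pair can be replaced by a with statement, which also guarantees the lock is released if extending the list raises an exception. A minimal sketch of the rewritten worker, assuming the same rst, mutex and cjson setup as above:

    def query(mutex,fp):
        lst_tmp=[]
        with open(fp,'rb') as fo:
            for line in fo:
                lst_tmp.append(cjson.decode(line)['s-ip'])
        with mutex:                       # lock acquired here, released automatically on exit
            rst.extend(lst_tmp)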

    Conclusion: when multiprocessing and multithreading each parse their files and write the results only into their own temporary lists, multiprocessing takes 2.468s versus 4.24s for multithreading, so multiprocessing wins (the process count was not tuned; Pool defaults to the number of CPU cores).

    However, once every worker's results are also written into the shared list, multiprocessing becomes dramatically slower: multithreading finishes in 4.425s overall while multiprocessing takes 16.631s. Shared variables (Manager objects) in multiprocessing are expensive.
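    The slowdown is plausibly the Manager proxy itself: every rst.extend() call is routed through a separate manager process and the data is pickled on the way. One way to avoid the shared list entirely (a minimal sketch, not part of the original timing test, reusing the files list and JSON format from above) is to have each worker return its own Counter and let the parent merge the AsyncResult values:

    def countfile(fp):
        cnt = collections.Counter()
        with open(fp,'rb') as fo:
            for line in fo:
                cnt[cjson.decode(line)['s-ip']] += 1
        return cnt                        # collected by the parent via AsyncResult.get()

    if __name__ == '__main__':
        p = Pool()
        results = [p.apply_async(countfile,args=(f,)) for f in files]
        p.close()
        p.join()
        total = sum((r.get() for r in results), collections.Counter())
        print total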
