• Asynchronous Calls and the Callback Mechanism


    There are two ways to submit tasks:

    Synchronous call: after submitting a task, wait in place until it finishes, take the result, and only then execute the next line of code; the program therefore runs serially.

    Asynchronous call: after submitting a task, do not wait for it to finish; instead, a callback can be registered to handle the result when it becomes ready.
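The difference shows up directly in wall-clock time. Here is a minimal sketch (the toy `task` function and the 0.2 s sleep are assumptions for illustration): calling `.result()` immediately after each submit serializes the work, while submitting everything first lets the pool overlap the waits.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(n):
    time.sleep(0.2)  # stand-in for real work
    return n * n

with ThreadPoolExecutor(3) as pool:
    # Synchronous style: .result() right after submit blocks, so tasks run serially.
    start = time.time()
    sync_results = [pool.submit(task, n).result() for n in range(3)]
    sync_elapsed = time.time() - start

    # Asynchronous style: submit everything first, collect results afterwards.
    start = time.time()
    futures = [pool.submit(task, n) for n in range(3)]
    async_results = [f.result() for f in futures]
    async_elapsed = time.time() - start

print('serial: %.2fs, overlapped: %.2fs' % (sync_elapsed, async_elapsed))
# serial ≈ 0.6 s, overlapped ≈ 0.2 s
```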

    from concurrent.futures import ThreadPoolExecutor
    import time
    import random

    def la(name):
        print('%s is laing' % name)
        time.sleep(random.randint(3, 5))
        res = random.randint(7, 13) * '#'
        return {'name': name, 'res': res}

    def weigh(shit):
        shit = shit.result()  # with an async callback we receive a Future; unwrap it first
        name = shit['name']
        size = len(shit['res'])
        print('%s produced %s kg' % (name, size))

    if __name__ == '__main__':
        pool = ThreadPoolExecutor(13)

        # Synchronous calls: .result() blocks, so the tasks run one after another
        # (note: weigh would receive the dict itself here, not a Future)
        # shit1 = pool.submit(la, 'alex').result()
        # weigh(shit1)
        # shit2 = pool.submit(la, 'huhao').result()
        # weigh(shit2)
        # shit3 = pool.submit(la, 'zhanbin').result()
        # weigh(shit3)

        # Asynchronous calls: register a callback instead of blocking on each result
        pool.submit(la, 'alex').add_done_callback(weigh)
        pool.submit(la, 'huhao').add_done_callback(weigh)
        pool.submit(la, 'zhanbin').add_done_callback(weigh)
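`add_done_callback` is not the only way to consume results as they finish: `concurrent.futures.as_completed` yields each Future the moment it completes, which keeps the result handling in the main thread. A minimal sketch (the `work` function and names are assumptions standing in for `la`):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(name):
    """Simulate a slow task and return its result."""
    time.sleep(random.uniform(0.05, 0.15))
    return {'name': name, 'size': random.randint(7, 13)}

def run_all(names):
    """Submit all tasks, then process each Future as soon as it finishes."""
    results = []
    with ThreadPoolExecutor(3) as pool:
        futures = [pool.submit(work, n) for n in names]
        for fut in as_completed(futures):  # yields in completion order, not submit order
            results.append(fut.result())
    return results

print(run_all(['alex', 'huhao', 'zhanbin']))
```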

    A simple web-crawler example:

    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    def get(url):
        print('get url', url)
        response = requests.get(url)
        time.sleep(3)  # simulate extra processing time
        return {'url': url, 'content': response.text}

    def parse(res):
        res = res.result()  # the callback receives a Future; unwrap it first
        print('%s parse res is %s' % (res['url'], len(res['content'])))

    if __name__ == '__main__':
        urls = [
            'http://www.cnblogs.com/stin',
            'https://www.python.org',
            'https://www.openstack.org',
        ]
        pool = ThreadPoolExecutor(2)
        for url in urls:
            pool.submit(get, url).add_done_callback(parse)
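One caveat with callbacks: `add_done_callback` fires whether the task succeeded or raised, and calling `.result()` inside the callback re-raises any exception in the worker thread, where it is easy to lose. A defensive sketch (the network call is replaced by a stub `get` so the example is self-contained; the failure condition is an assumption):

```python
from concurrent.futures import ThreadPoolExecutor

results = []  # collected by the callback

def get(url):
    # stub standing in for requests.get(); raises for a non-http scheme
    if not url.startswith('http'):
        raise ValueError('cannot fetch %s' % url)
    return {'url': url, 'content': 'x' * 100}

def parse(fut):
    """Callback: inspect the Future for an exception before using the result."""
    err = fut.exception()  # None if the task succeeded
    if err is not None:
        results.append(('error', str(err)))
        return
    res = fut.result()  # safe: no exception was raised
    results.append(('ok', res['url'], len(res['content'])))

with ThreadPoolExecutor(2) as pool:
    pool.submit(get, 'http://ok.example').add_done_callback(parse)
    pool.submit(get, 'ftp://bad.example').add_done_callback(parse)

print(results)
```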
  • Original article: https://www.cnblogs.com/stin/p/8548454.html