Iterators
- Iterable: can be looped over directly with a for loop
- Iterator: can be looped over directly with a for loop, and can also be advanced with next()
- To tell the two apart, use isinstance()
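A minimal sketch of the isinstance() check, using the Iterable and Iterator ABCs from collections.abc:

from collections.abc import Iterable, Iterator

lst = [1, 2, 3]
it = iter(lst)

print(isinstance(lst, Iterable))   # True: a list can be iterated over
print(isinstance(lst, Iterator))   # False: a list has no __next__
print(isinstance(it, Iterator))    # True: iter() returns an iterator
print(next(it))                    # 1: an iterator also works with next()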
Generators
Instead of building the whole sequence up front, a generator computes values one at a time as you loop, so it uses little memory: trading time for space.
Calling next() fetches the next value; after the last one, a StopIteration exception is raised.
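A small illustration (not from the original notes) of a generator being exhausted:

g = (x * x for x in range(3))
print(next(g))   # 0
print(next(g))   # 1
print(next(g))   # 4
try:
    next(g)      # no values left, so StopIteration is raised
except StopIteration:
    print('no more values')
print(next(g, 'done'))   # next() can also take a default instead of raising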
How to create one:
- Write a generator expression directly:
g = (x * x for x in range(10))  # square brackets would build a list comprehension; parentheses build a generator
- A function that contains yield is a generator function; next() runs it until the next yield, where it returns a value and suspends.
def odd():
    print('step 1')
    yield 1
    print('step 2')
    yield 3
    print('step 3')
    yield 5

g = odd()
one = next(g)
print(one)
two = next(g)
print(two)
three = next(g)
print(three)
step 1
1
step 2
3
step 3
5
Note: here g is a generator; each next() call resumes execution right after the previous yield.
- Driving a generator with a for loop:
def fib(max):
    n, a, b = 0, 0, 1
    while n < max:
        yield b
        a, b = b, a + b
        n += 1
    return 'Done'

g = fib(5)
for i in g:
    print(i)
1
1
2
3
5
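The for loop never shows the 'Done' return value; it travels inside the StopIteration exception. A short sketch of how to retrieve it, reusing the fib() defined above:

g = fib(5)
while True:
    try:
        print('value:', next(g))
    except StopIteration as e:
        print('return value:', e.value)   # 'Done'
        break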
Coroutines
Definition: a coroutine is a subroutine for non-preemptive (cooperative) multitasking; like a generator, it can suspend its own execution.
Keywords: yield and send()
def simple_coroutine(a):
    print('-> start')
    b = yield a
    print('-> received', a, b)
    c = yield a + b
    print('-> received', a, b, c)

# run
sc = simple_coroutine(5)
aa = next(sc)      # prime the coroutine: runs to the first yield, which returns a
print(aa)
bb = sc.send(6)    # binds b = 6; the second yield returns a + b
print(bb)
cc = sc.send(7)    # binds c = 7; prints the last line, then raises StopIteration
print(cc)
-> start
5
-> received 5 6
11
-> received 5 6 7
Analysis: next(sc) runs to the first yield and returns a = 5; sc.send(6) binds b = 6 and the second yield returns a + b = 11; sc.send(7) binds c = 7 and prints the last line, after which the coroutine body ends and StopIteration is raised, so the final print(cc) is never reached.
Terminating a coroutine: an unhandled exception bubbles up to the caller; alternatively, send a sentinel value that tells the coroutine to exit.
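A minimal sketch of both termination styles, using a hypothetical accumulator coroutine (not from the original notes):

def accumulator():
    total = 0
    while True:
        value = yield total
        if value is None:        # sentinel value: exit cleanly
            return total
        total += value

acc = accumulator()
next(acc)           # prime the coroutine
acc.send(10)
acc.send(20)
try:
    acc.send(None)  # the sentinel makes the coroutine return, raising StopIteration
except StopIteration as e:
    print('final total:', e.value)   # 30

acc2 = accumulator()
next(acc2)
acc2.send(5)
acc2.close()        # alternatively, close() throws GeneratorExit into the coroutine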
yield from: effectively opens a direct channel between the client code and the inner coroutine.
def gen():
    for c in 'AB':
        yield c

print(list(gen()))

def gen_new():
    yield from 'AB'

print(list(gen_new()))
Delegating generator: a generator function that contains yield from.
from collections import namedtuple

ResClass = namedtuple('Res', 'count average')

# sub-generator
def averager():
    total = 0.0
    count = 0
    average = None
    while True:
        term = yield
        if term is None:
            break
        total += term
        count += 1
        average = total / count
    return ResClass(count, average)

# delegating generator
def grouper(storages, key):
    while True:
        # capture the value returned by averager()
        storages[key] = yield from averager()

# client code
def client():
    process_data = {
        'boys_2': [39.0, 40.8, 43.2, 40.8, 43.1, 38.6, 41.4, 40.6, 36.3],
        'boys_1': [1.38, 1.5, 1.32, 1.25, 1.37, 1.48, 1.25, 1.49, 1.46]
    }
    storages = {}
    for k, v in process_data.items():
        # get the coroutine
        coroutine = grouper(storages, k)
        # prime the coroutine
        next(coroutine)
        # send data into the coroutine
        for dt in v:
            coroutine.send(dt)
        # terminate the coroutine
        coroutine.send(None)
    print(storages)

# run
client()
{'boys_2': Res(count=9, average=40.422222222222224), 'boys_1': Res(count=9, average=1.3888888888888888)}
Explanation:
- client() starts; inside the for k, v loop, each iteration creates a fresh grouper instance bound to coroutine.
- next(coroutine) primes the coroutine: it enters the while True loop, calls averager(), and suspends at yield from.
- When the inner for dt in v loop finishes, the grouper instance is still suspended at yield from, so the assignment to storages[key] has not happened yet.
- After coroutine.send(None), term becomes None, so the averager sub-generator breaks out of its loop and returns; the return raises StopIteration with the result stored in the exception's value attribute, and yield from catches that StopIteration and assigns its value to storages[key].
asyncio
Steps: create an event loop (it handles asynchronous IO and acts as an intermediary, like a mailbox or message queue) -> hand the coroutines to the loop -> close the loop.
import threading
import asyncio

@asyncio.coroutine
def hello():
    print('Hello world! (%s)' % threading.currentThread())
    print('Starting......(%s)' % threading.currentThread())
    yield from asyncio.sleep(3)
    print('Done......(%s)' % threading.currentThread())
    print('Hello again! (%s)' % threading.currentThread())

loop = asyncio.get_event_loop()
tasks = [hello(), hello()]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
async & await
More concise: no decorator needed (async def / await replace the @asyncio.coroutine / yield from style above, which was later deprecated and removed).
import threading
import asyncio

async def hello():
    print('Hello world! (%s)' % threading.currentThread())
    print('Starting......(%s)' % threading.currentThread())
    await asyncio.sleep(3)
    print('Done......(%s)' % threading.currentThread())
    print('Hello again! (%s)' % threading.currentThread())

loop = asyncio.get_event_loop()
tasks = [hello(), hello()]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
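Note that get_event_loop() plus run_until_complete() is the older entry point; since Python 3.7 the usual way to start is asyncio.run(). A rough, simplified equivalent of the example above (the task names are just labels for illustration):

import asyncio

async def hello(name):
    print('Hello world! (%s)' % name)
    await asyncio.sleep(3)
    print('Hello again! (%s)' % name)

async def main():
    # run both coroutines concurrently on the same event loop
    await asyncio.gather(hello('task1'), hello('task2'))

asyncio.run(main())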
aiohttp
Introduction:
- Built on asyncio and coroutines, since HTTP is an IO-bound operation.
Example:
import asyncio
from aiohttp import web

async def index(request):
    await asyncio.sleep(0.5)
    return web.Response(body=b'<h1>Index</h1>')

async def hello(request):
    await asyncio.sleep(0.5)
    text = '<h1>hello, %s!</h1>' % request.match_info['name']
    return web.Response(body=text.encode('utf-8'))

async def init(loop):
    app = web.Application(loop=loop)
    app.router.add_route('GET', '/', index)
    app.router.add_route('GET', '/hello/{name}', hello)
    srv = await loop.create_server(app.make_handler(), '127.0.0.1', 8000)
    print('Server started at http://127.0.0.1:8000...')
    return srv

loop = asyncio.get_event_loop()
loop.run_until_complete(init(loop))
loop.run_forever()
Note: consult the aiohttp docs and work through each step of this example.
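aiohttp also provides a client side; a minimal sketch (assuming the server above is running on 127.0.0.1:8000) of fetching the /hello/{name} route:

import asyncio
import aiohttp

async def fetch(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()

html = asyncio.run(fetch('http://127.0.0.1:8000/hello/world'))
print(html)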
concurrent.futures
Works like a thread pool.
Use the multiprocessing-based executor for true parallel computation: it runs multiple interpreter processes (see the ProcessPoolExecutor sketch after the thread pool example below).
concurrent.futures.Executor
- ThreadPoolExecutor
- ProcessPoolExecutor
Example:
from concurrent.futures import ThreadPoolExecutor
import time

def return_future(msg):
    time.sleep(3)
    return msg

# create a thread pool
pool = ThreadPoolExecutor(max_workers=2)

# submit 2 tasks to the pool
f1 = pool.submit(return_future, 'hello')
f2 = pool.submit(return_future, 'world')

print(f1.done())
time.sleep(3)
print(f2.done())
print(f1.result())
print(f2.result())
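For the CPU-bound, truly parallel case mentioned earlier, a minimal ProcessPoolExecutor sketch (same submit/result API, but the work runs in separate interpreter processes; cpu_heavy is just an illustrative function):

from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    # deliberately CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == '__main__':          # needed on platforms that spawn worker processes
    with ProcessPoolExecutor(max_workers=2) as pool:
        futs = [pool.submit(cpu_heavy, n) for n in (10**6, 2 * 10**6)]
        for f in futs:
            print(f.result())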
map(fn, *iterables, timeout=None):
map() and submit() overlap; using one of them is enough.
import time
from concurrent import futures

data = ['1', '2']

def wait_on(argument):
    print(argument)
    time.sleep(2)
    return "ok"

ex = futures.ThreadPoolExecutor(max_workers=2)
for i in ex.map(wait_on, data):
    print(i)
Future
- Future instances are created by Executor.submit().
from concurrent.futures import ThreadPoolExecutor as Pool
from concurrent.futures import as_completed
import requests

URLS = ['http://qq.com', 'http://sina.com', 'http://www.baidu.com']

def task(url, timeout=10):
    return requests.get(url, timeout=timeout)

with Pool(max_workers=3) as executor:
    future_tasks = [executor.submit(task, url) for url in URLS]
    for f in future_tasks:
        if f.running():
            print('%s is running' % str(f))
    for f in as_completed(future_tasks):
        try:
            ret = f.done()
            if ret:
                f_ret = f.result()
                print('%s, done, result: %s, %s' % (str(f), f_ret.url, len(f_ret.content)))
        except Exception as e:
            f.cancel()
            print(str(e))