
Combining itertools and multiprocessing?

                  Problem description



                  I have a 256x256x256 Numpy array, in which each element is a matrix. I need to do some calculations on each of these matrices, and I want to use the multiprocessing module to speed things up.


                  The results of these calculations must be stored in a 256x256x256 array like the original one, so that the result of the matrix at element [i,j,k] in the original array must be put in the [i,j,k] element of the new array.


                  To do this, I want to make a list which could be written in a pseudo-ish way as [array[i,j,k], (i, j, k)] and pass it to a function to be "multiprocessed". Assuming that matrices is a list of all the matrices extracted from the original array and myfunc is the function doing the calculations, the code would look somewhat like this:

                  import multiprocessing
                  import numpy as np
                  
                  def myfunc(finput):
                      # Do some calculations...
                      ...
                  
                      # ... and return the result and the index:
                      return (result, finput[1])
                  
                  # Make indices:
                  inds = np.rollaxis(np.indices((256, 256, 256)), 0, 4).reshape(-1, 3)
                  
                  # Make function input from the matrices and the indices:
                  finput = zip(matrices, inds)  # itertools.izip in Python 2
                  
                  pool = multiprocessing.Pool()
                  async_results = np.asarray(pool.map_async(myfunc, finput).get(999999))


                  However, it seems like map_async is actually creating this huge finput-list first: My CPU's aren't doing much, but the memory and swap get completely consumed in a matter of seconds, which is obviously not what I want.


                  Is there a way to pass this huge list to a multiprocessing function without the need to explicitly create it first? Or do you know another way of solving this problem?

                  Thanks a lot! :-)

                  Answer


                  All multiprocessing.Pool.map* methods consume the iterator fully (demo code) as soon as the function is called. To feed the map function one chunk of the iterator at a time, use grouper_nofill:

                  def grouper_nofill(n, iterable):
                      '''list(grouper_nofill(3, 'ABCDEFG')) --> [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]'''
                      it = iter(iterable)
                      def take():
                          while True:
                              yield list(itertools.islice(it, n))
                      return iter(take().__next__, [])  # take().next in Python 2
                  
                  chunksize = 256
                  async_results = []
                  for finput in grouper_nofill(chunksize, zip(matrices, inds)):
                      async_results.extend(pool.map_async(myfunc, finput).get())
                  async_results = np.array(async_results)
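Putting the pieces together, here is a minimal runnable sketch of the whole pattern on a small 4x4x4 array. The `myfunc` body (a matrix sum) is a placeholder, and `multiprocessing.dummy.Pool` (the thread-based drop-in with the same API) is used only so the example is self-contained; a real workload would use `multiprocessing.Pool` and a genuine calculation:

```python
import itertools
import numpy as np
from multiprocessing.dummy import Pool  # thread-based drop-in for multiprocessing.Pool

def grouper_nofill(n, iterable):
    """Yield successive chunks of size n from iterable (last chunk may be shorter)."""
    it = iter(iterable)
    def take():
        while True:
            yield list(itertools.islice(it, n))
    return iter(take().__next__, [])

def myfunc(finput):
    matrix, index = finput
    return matrix.sum(), index          # placeholder calculation

N = 4
# Each element [i, j, k] of the outer N x N x N array is itself an N x N matrix:
data = np.arange(N**5, dtype=float).reshape(N, N, N, N, N)
inds = np.rollaxis(np.indices((N, N, N)), 0, 4).reshape(-1, 3)
matrices = (data[i, j, k] for i, j, k in inds)   # a generator: never materialized

# Feed the pool one chunk at a time, writing each result back at its index:
results = np.empty((N, N, N))
with Pool() as pool:
    for chunk in grouper_nofill(16, zip(matrices, inds)):
        for value, (i, j, k) in pool.map(myfunc, chunk):
            results[i, j, k] = value

print(results[0, 0, 0])  # equals data[0, 0, 0].sum()
```

Because `matrices` is a generator and `grouper_nofill` only materializes one chunk at a time, at most `chunksize` (matrix, index) pairs exist in memory at once.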
                  


                  PS. pool.map_async's chunksize parameter does something different: It breaks the iterable into chunks, then gives each chunk to a worker process which calls map(func,chunk). This can give the worker process more data to chew on if func(item) finishes too quickly, but it does not help in your situation since the iterator still gets consumed fully immediately after the map_async call is issued.
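The "(demo code)" link from the original answer is not preserved here, but the eager consumption is easy to observe with a generator that records each item it yields. A minimal sketch (again using the thread-based `multiprocessing.dummy.Pool` so the lambda needs no pickling):

```python
from multiprocessing.dummy import Pool  # same API as multiprocessing.Pool

pulled = []

def slow_gen():
    for i in range(5):
        pulled.append(i)   # record the moment each item is drawn from the generator
        yield i

with Pool(2) as pool:
    async_result = pool.map_async(lambda x: x * x, slow_gen())
    # The generator was exhausted synchronously, inside the map_async call itself,
    # before any result has been fetched:
    print(pulled)               # [0, 1, 2, 3, 4]
    print(async_result.get())   # [0, 1, 4, 9, 16]
```

This is why chunking must happen on the caller's side, as `grouper_nofill` does: `map_async` converts any iterable without a length into a list up front.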


