
python multiprocessing: some functions do not return when they are complete (queue material too big)

This article presents a solution to "python multiprocessing: some functions do not return when they are complete (queue material too big)". It may be a helpful reference if you are facing the same problem; follow along below.

Problem description


I am using multiprocessing's Process and Queue. I start several functions in parallel and most behave nicely: they finish, their output goes to their Queue, and they show up as .is_alive() == False. But for some reason a couple of functions are not behaving. They always show .is_alive() == True, even after the last line in the function (a print statement saying "Finished") is complete. This happens regardless of the set of functions I launch, even if there's only one. If not run in parallel, the functions behave fine and return normally. What kind of thing might be the problem?

                  Here's the generic function I'm using to manage the jobs. All I'm not showing is the functions I'm passing to it. They're long, often use matplotlib, sometimes launch some shell commands, but I cannot figure out what the failing ones have in common.

def runFunctionsInParallel(listOf_FuncAndArgLists):
    """
    Take a list of lists like [function, arg1, arg2, ...]. Run those functions
    in parallel, wait for them all to finish, and return the list of their
    return values, in order.
    """
    import time
    from math import sqrt
    from multiprocessing import Process, Queue

    def storeOutputFFF(fff, theArgs, que):  # add an argument to the function for assigning a queue
        print('MULTIPROCESSING: Launching %s in parallel ' % fff.__name__)
        que.put(fff(*theArgs))  # we're putting the return value into the queue
        print('MULTIPROCESSING: Finished %s in parallel! ' % fff.__name__)
        # We get this far even for "bad" functions
        return

    queues = [Queue() for fff in listOf_FuncAndArgLists]  # create a queue object for each function
    jobs = [Process(target=storeOutputFFF, args=[funcArgs[0], funcArgs[1:], queues[iii]])
            for iii, funcArgs in enumerate(listOf_FuncAndArgLists)]
    for job in jobs:
        job.start()  # Launch them all
    n = 1
    while any(jj.is_alive() for jj in jobs):  # debugging section shows progress updates
        n += 1
        time.sleep(5 + sqrt(n))  # Wait a while before the next update. Slow down updates for really long runs.
        print('\n---------------------------------------------------\n'
              + '\t'.join(['alive?', 'Job', 'exitcode', 'Func'])
              + '\n---------------------------------------------------')
        print('\n'.join(['%s:\t%s:\t%s:\t%s' % (job.is_alive() * 'Yes', job.name, job.exitcode,
                                                listOf_FuncAndArgLists[ii][0].__name__)
                         for ii, job in enumerate(jobs)]))
        print('---------------------------------------------------\n')
    # I never get to the following line when one of the "bad" functions is running.
    for job in jobs:
        job.join()  # Wait for them all to finish... Hm, is this needed to get at the Queues?
    # And now, collect all the outputs:
    return [queue.get() for queue in queues]
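
For concreteness, here is a minimal usage sketch of that interface. The worker functions are hypothetical (not from the original post), and, since storeOutputFFF is a nested function, this requires a fork-based start method (the author only tested on POSIX):

import math

def slow_square(x):
    return x * x

if __name__ == '__main__':
    results = runFunctionsInParallel([[slow_square, 3],
                                      [slow_square, 4],
                                      [math.factorial, 5]])
    print(results)  # [9, 16, 120] -- return values, in input order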
                  

Recommended answer

Alright, it seems that the pipe used to fill the Queue gets plugged when the output of a function is too big (my crude understanding? This is an unresolved/closed bug? http://bugs.python.org/issue8237). I have modified the code in my question so that there is some buffering (queues are regularly emptied while processes are running), which solves all my problems. So now this takes a collection of tasks (functions and their arguments), launches them, and collects the outputs. I wish it were simpler/cleaner looking.
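
This is the documented Queue pitfall: a child process that has put a large object on a multiprocessing.Queue cannot terminate until its feeder thread has flushed the buffered data through the underlying pipe, and the pipe fills up if nobody reads from it. A minimal sketch of the failure mode (the payload size here is arbitrary; the actual threshold depends on the OS pipe buffer):

from multiprocessing import Process, Queue

def worker(que):
    que.put('x' * 10_000_000)  # large payload: the feeder thread blocks until it is consumed
    print('worker: finished')  # this line runs...

if __name__ == '__main__':
    que = Queue()
    p = Process(target=worker, args=(que,))
    p.start()
    p.join(timeout=5)                         # ...but join() times out:
    print('alive after join:', p.is_alive())  # True -- the pipe is still full
    data = que.get()                          # draining the queue unblocks the feeder thread
    p.join()
    print('alive after get:', p.is_alive())   # False -- the process can now exit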

                  Edit (2014 Sep; update 2017 Nov: rewritten for readability): I'm updating the code with the enhancements I've made since. The new code (same function, but better features) is here: https://gitlab.com/cpbl/cpblUtilities/blob/master/parallel.py

The calling documentation is below as well.

def runFunctionsInParallel(*args, **kwargs):
    """This is the main/only interface to class cRunFunctionsInParallel.
    See its documentation for arguments.
    """
    return cRunFunctionsInParallel(*args, **kwargs).launch_jobs()

###########################################################################################
###
class cRunFunctionsInParallel():
    ###
    #######################################################################################
    """Run any list of functions, each with any arguments and keyword-arguments, in parallel.

    The functions/jobs should return (if anything) pickleable results. In order
    to avoid processes getting stuck due to the output queues overflowing, the
    queues are regularly collected and emptied.

    You can now pass os.system or etc to this as the function, in order to
    parallelize at the OS level, with no need for a wrapper: I made use of
    hasattr(builtinfunction, 'func_name') to check for a name.

    Parameters
    ----------
    listOf_FuncAndArgLists : a list of lists
        List of up-to-three-element lists, like [function, args, kwargs],
        specifying the set of functions to be launched in parallel. If an
        element is just a function, rather than a list, then it is assumed
        to have no arguments or keyword arguments. Thus, possible formats
        for elements of the outer list are:
          function
          [function, list]
          [function, list, dict]
    kwargs : dict
        One can also supply the kwargs once, for all jobs (or for those
        without their own non-empty kwargs specified in the list).
    names : an optional list of names to identify the processes.
        If omitted, the function name is used, so if all the functions are
        the same (i.e. merely with different arguments), then they would be
        named indistinguishably.
    offsetsSeconds : int or list of ints
        Delay some functions' start times.
    expectNonzeroExit : True/False
        Normal behaviour is to not proceed if any function exits with a
        failed exit code. This can be used to override this behaviour.
    parallel : True/False
        Whenever the list of functions is longer than one, functions will
        be run in parallel unless this parameter is passed as False.
    maxAtOnce : int
        If nonzero, this limits how many jobs will be allowed to run at
        once. By default, this is set according to how many processors
        the hardware has available.
    showFinished : int
        Specifies the maximum number of successfully finished jobs to show
        in the text interface (before the last report, which should always
        show them all).

    Returns
    -------
    Returns a tuple of (return codes, return values), each a list in order
    of the jobs provided.

    Issues
    ------
    Only tested on POSIX OSes.

    Examples
    --------
    See the testParallel() method in this module.
    """
                  

That concludes this article on python multiprocessing: some functions do not return when they are complete (queue material too big). We hope the recommended answer helps, and we hope you will keep supporting html5模板網!

