

      Multiprocessing Queue.get() hangs
                This article explains how to deal with multiprocessing Queue.get() hanging; hopefully the walkthrough below is a useful reference for anyone hitting the same problem.

                Problem Description


                I'm trying to implement basic multiprocessing and I've run into an issue. The Python script is attached below.

                import time, sys, random, threading
                from multiprocessing import Process
                from Queue import Queue   # the typo: this is the thread-safe Queue, not multiprocessing.Queue (see UPDATE below)
                from FrequencyAnalysis import FrequencyStore, AnalyzeFrequency
                
                append_queue = Queue(10)
                database = FrequencyStore()
                
                def add_to_append_queue(_list):
                    append_queue.put(_list)
                
                def process_append_queue():
                    while True:
                        item = append_queue.get()
                        database.append(item)
                        print("Appended to database in %.4f seconds" % database.append_time)
                        append_queue.task_done()
                    return
                
                def main():
                    database.load_db()
                    print("Database loaded in %.4f seconds" % database.load_time)
                    append_queue_process = Process(target=process_append_queue)
                    append_queue_process.daemon = True
                    append_queue_process.start()
                    #t = threading.Thread(target=process_append_queue)
                    #t.daemon = True
                    #t.start()
                
                    while True:
                        path = raw_input("file: ")
                        if path == "exit":
                            break
                        a = AnalyzeFrequency(path)
                        a.analyze()
                        print("Analyzed file in %.4f seconds" % a._time)
                        add_to_append_queue(a.get_results())
                
                    append_queue.join()
                    #append_queue_process.join()
                    database.save_db()
                    print("Database saved in %.4f seconds" % database.save_time)
                    sys.exit(0)
                
                if __name__=="__main__":
                    main()
                

                The AnalyzeFrequency analyzes the frequencies of words in a file and get_results() returns a sorted list of said words and frequencies. The list is very large, perhaps 10000 items.

                This list is then passed to the add_to_append_queue method which adds it to a queue. process_append_queue takes the items one by one and adds the frequencies to a "database". This operation takes a bit longer than the actual analysis in main(), so I am trying to use a separate process for this method. When I try to do this with the threading module, everything works perfectly fine, no errors. When I try to use Process, the script hangs at item = append_queue.get().

                Could someone please explain what is happening here, and perhaps direct me toward a fix?

                Thanks for all answers!

                UPDATE

                The pickle error was my fault, just a typo. Now I am using the Queue class from multiprocessing, but the append_queue.get() method still hangs. New code:

                import time, sys, random
                from multiprocessing import Process, Queue
                from FrequencyAnalysis import FrequencyStore, AnalyzeFrequency
                
                append_queue = Queue()
                database = FrequencyStore()
                
                def add_to_append_queue(_list):
                    append_queue.put(_list)
                
                def process_append_queue():
                    while True:
                        database.append(append_queue.get())
                        print("Appended to database in %.4f seconds" % database.append_time)
                    return
                
                def main():
                    database.load_db()
                    print("Database loaded in %.4f seconds" % database.load_time)
                    append_queue_process = Process(target=process_append_queue)
                    append_queue_process.daemon = True
                    append_queue_process.start()
                    #t = threading.Thread(target=process_append_queue)
                    #t.daemon = True
                    #t.start()
                
                    while True:
                        path = raw_input("file: ")
                        if path == "exit":
                            break
                        a = AnalyzeFrequency(path)
                        a.analyze()
                        print("Analyzed file in %.4f seconds" % a._time)
                        add_to_append_queue(a.get_results())
                
                    #append_queue.join()
                    #append_queue_process.join()
                    print(str(append_queue.qsize()))
                    database.save_db()
                    print("Database saved in %.4f seconds" % database.save_time)
                    sys.exit(0)
                
                if __name__=="__main__":
                    main()
                

                UPDATE 2

                Here's the database code:

                import time  # FrequencyStore relies on time.time(); Sorter is defined elsewhere in the asker's module

                class FrequencyStore:
                
                    def __init__(self):
                        self.sorter = Sorter()
                        self.db = {}
                        self.load_time = -1
                        self.save_time = -1
                        self.append_time = -1
                        self.sort_time = -1
                
                    def load_db(self):
                        start_time = time.time()
                
                        try:
                            file = open("results.txt", 'r')
                        except:
                            raise IOError
                
                        self.db = {}
                        for line in file:
                            word, count = line.strip("
                ").split("=")
                            self.db[word] = int(count)
                        file.close()
                
                        self.load_time = time.time() - start_time
                
                    def save_db(self):
                        start_time = time.time()
                
                        _db = []
                        for key in self.db:
                            _db.append([key, self.db[key]])
                        _db = self.sort(_db)
                
                        try:
                            file = open("results.txt", 'w')
                        except:
                            raise IOError
                
                        file.truncate(0)
                        for x in _db:
                            file.write(x[0] + "=" + str(x[1]) + "
                ")
                        file.close()
                
                        self.save_time = time.time() - start_time
                
                    def create_sorted_db(self):
                        _temp_db = []
                        for key in self.db:
                            _temp_db.append([key, self.db[key]])
                        _temp_db = self.sort(_temp_db)
                        _temp_db.reverse()
                        return _temp_db
                
                    def get_db(self):
                        return self.db
                
                    def sort(self, _list):
                        start_time = time.time()
                
                        _list = self.sorter.mergesort(_list)
                        _list.reverse()
                
                        self.sort_time = time.time() - start_time
                        return _list
                
                    def append(self, _list):
                        start_time = time.time()
                
                        for x in _list:
                            if x[0] not in self.db:
                                self.db[x[0]] = x[1]
                            else:
                                self.db[x[0]] += x[1]
                
                        self.append_time = time.time() - start_time
                

                Recommended Answer

                Comments suggest you're trying to run this on Windows. As I said in a comment,

                If you're running this on Windows, it can't work - Windows doesn't have fork(), so each process gets its own Queue and they have nothing to do with each other. The entire module is imported "from scratch" by each process on Windows. You'll need to create the Queue in main(), and pass it as an argument to the worker function.
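
                A minimal sketch of that failure mode (not from the original answer; it just assumes Windows' behaviour of importing the module afresh in each process):

                import os
                from multiprocessing import Process, Queue
                
                q = Queue()  # module-level: rebuilt from scratch in every spawned process
                
                def worker():
                    # On Windows this prints True: the child's q is a brand-new object,
                    # so the "hello" put below never arrives in it.
                    print("pid %d sees q empty: %s" % (os.getpid(), q.empty()))
                
                if __name__ == "__main__":
                    q.put("hello")             # lands in the parent's queue only
                    p = Process(target=worker)
                    p.start()
                    p.join()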

                Here's fleshing out what you need to do to make it portable, although I removed all the database stuff because it's irrelevant to the problems you've described so far. I also removed the daemon fiddling, because that's usually just a lazy way to avoid shutting down things cleanly, and often as not will come back to bite you later:

                def process_append_queue(append_queue):
                    while True:
                        x = append_queue.get()
                        if x is None:
                            break
                        print("processed %d" % x)
                    print("worker done")
                
                def main():
                    import multiprocessing as mp
                
                    append_queue = mp.Queue(10)
                    append_queue_process = mp.Process(target=process_append_queue, args=(append_queue,))
                    append_queue_process.start()
                    for i in range(100):
                        append_queue.put(i)
                    append_queue.put(None)  # tell worker we're done
                    append_queue_process.join()
                
                if __name__=="__main__":
                    main()
                

                輸出是明顯"的東西:

                processed 0
                processed 1
                processed 2
                processed 3
                processed 4
                ...
                processed 96
                processed 97
                processed 98
                processed 99
                worker done
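
                A design note on that sentinel: multiprocessing.Queue has no task_done() or join() methods (those exist on the thread-oriented Queue.Queue and on multiprocessing.JoinableQueue), which is why the original append_queue.join() approach could not carry over. Putting a None sentinel and then join()ing the Process is the usual replacement, and it also guarantees the worker has finished before the program exits.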
                

                Note: because Windows doesn't (can't) fork(), it's impossible for worker processes to inherit any Python object on Windows. Each process runs the entire program from its start. That's why your original program couldn't work: each process created its own Queue, wholly unrelated to the Queue in the other process. In the approach shown above, only the main process creates a Queue, and the main process passes it (as an argument) to the worker process.
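
                Applied back to the question's program, the same pattern might look like the sketch below. This sketch is not from the original answer: it assumes only the FrequencyStore and AnalyzeFrequency interfaces shown above, and it moves all database work into the worker process, because on Windows the worker cannot share a FrequencyStore created at module level in the parent either.

                import sys
                from multiprocessing import Process, Queue
                from FrequencyAnalysis import FrequencyStore, AnalyzeFrequency
                
                def process_append_queue(append_queue):
                    # The worker owns the database outright: on Windows it cannot see or
                    # mutate a FrequencyStore object created in the main process.
                    database = FrequencyStore()
                    database.load_db()
                    while True:
                        item = append_queue.get()
                        if item is None:            # sentinel: no more work is coming
                            break
                        database.append(item)
                        print("Appended to database in %.4f seconds" % database.append_time)
                    database.save_db()              # persist before the worker exits
                
                def main():
                    append_queue = Queue(10)        # created here, then handed to the worker
                    worker = Process(target=process_append_queue, args=(append_queue,))
                    worker.start()
                
                    while True:
                        path = raw_input("file: ")
                        if path == "exit":
                            break
                        a = AnalyzeFrequency(path)
                        a.analyze()
                        print("Analyzed file in %.4f seconds" % a._time)
                        append_queue.put(a.get_results())
                
                    append_queue.put(None)          # tell the worker to finish and save
                    worker.join()                   # wait until the database has been saved
                    sys.exit(0)
                
                if __name__ == "__main__":
                    main()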

                That's all for this article on multiprocessing Queue.get() hanging; hopefully the recommended answer helps you solve the problem.

