Problem description
I'm trying out a code snippet from the standard Python documentation to learn how to use the multiprocessing module. The code is pasted at the end of this message. I'm using Python 2.7.1 on Ubuntu 11.04 on a quad-core machine (which, according to the system monitor, gives me eight cores due to hyper-threading).
Problem: All workload seems to be scheduled to just one core, which gets close to 100% utilization, despite the fact that several processes are started. Occasionally all workload migrates to another core but the workload is never distributed among them.
Any ideas why this is so?
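(As a quick sanity check, not part of the original example: a minimal sketch like the one below, in the same Python 2 syntax as the code at the end, can confirm how many cores multiprocessing reports and which PID each worker runs in, so per-process CPU usage can be watched in a tool such as top or htop.)

# Minimal diagnostic sketch (assumed helper names, not from the original post):
# report the core count multiprocessing sees and the PID of each spawned worker.
import os
from multiprocessing import Process, cpu_count

def report(n):
    print 'worker %d runs in PID %d' % (n, os.getpid())

if __name__ == '__main__':
    print 'multiprocessing sees %d cores' % cpu_count()
    procs = [Process(target=report, args=(n,)) for n in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()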
Best regards,
Paul
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
    for func, args in iter(input.get, 'STOP'):
        result = calculate(func, args)
        output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % \
        (current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
    time.sleep(0.5*random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5*random.random())
    return a + b
def test():
    NUMBER_OF_PROCESSES = 4
    TASKS1 = [(mul, (i, 7)) for i in range(500)]
    TASKS2 = [(plus, (i, 8)) for i in range(250)]

    # Create queues
    task_queue = Queue()
    done_queue = Queue()

    # Submit tasks
    for task in TASKS1:
        task_queue.put(task)

    # Start worker processes
    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(task_queue, done_queue)).start()

    # Get and print results
    print 'Unordered results:'
    for i in range(len(TASKS1)):
        print '\t', done_queue.get()

    # Add more tasks using `put()`
    for task in TASKS2:
        task_queue.put(task)

    # Get and print some more results
    for i in range(len(TASKS2)):
        print '\t', done_queue.get()

    # Tell child processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        task_queue.put('STOP')

if __name__ == '__main__':
    freeze_support()
    test()
Answer
Try replacing the time.sleep with something that actually requires the CPU and you will see that multiprocessing works just fine! Because the workers spend almost all of their time sleeping, they use hardly any CPU, so the system monitor never shows the load spreading across cores. For example:
def mul(a, b):
    for i in xrange(100000):
        j = i**2
    return a * b

def plus(a, b):
    for i in xrange(100000):
        j = i**2
    return a + b
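To see the difference in numbers rather than in the system monitor, a rough timing comparison can help. The sketch below (mul7 is a hypothetical helper, not part of the answer) runs the same CPU-bound work once serially and once through Pool.map with four workers; on a quad-core machine the pooled run should finish in a fraction of the serial time and show all cores busy.

# Hedged timing sketch (mul7 is an assumed helper, not from the answer):
# run the same CPU-bound work serially and through a Pool of four workers.
import time
from multiprocessing import Pool

def mul7(i):
    # Same kind of busy loop as in the answer, so the work is CPU-bound.
    for n in xrange(100000):
        j = n ** 2
    return i * 7

if __name__ == '__main__':
    start = time.time()
    serial = [mul7(i) for i in range(500)]
    print 'serial:   %.2f s' % (time.time() - start)

    pool = Pool(processes=4)
    start = time.time()
    parallel = pool.map(mul7, range(500))
    pool.close()
    pool.join()
    print 'parallel: %.2f s' % (time.time() - start)

    assert serial == parallel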