Chrome crashes after several hours while multiprocessing using Selenium through Python


                  This article describes how to handle Chrome crashing after several hours while multiprocessing using Selenium through Python. It should be a useful reference for anyone hitting the same problem.

                  Problem Description


                  This is the error traceback after several hours of scraping:

                  The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.
                  

                  This is my setup of Selenium in Python:

                  #scrape.py
                  from selenium import webdriver
                  from selenium.common.exceptions import *
                  from selenium.webdriver.common.by import By
                  from selenium.webdriver.support import expected_conditions as EC
                  from selenium.webdriver.support.ui import WebDriverWait
                  from selenium.webdriver.chrome.options import Options
                  
                  def run_scrape(link):
                      chrome_options = Options()
                      chrome_options.add_argument('--no-sandbox')
                      chrome_options.add_argument("--headless")
                      chrome_options.add_argument('--disable-dev-shm-usage')
                      chrome_options.add_argument("--lang=en")
                      chrome_options.add_argument("--start-maximized")
                      chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
                      chrome_options.add_experimental_option('useAutomationExtension', False)
                      chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36")
                      chrome_options.binary_location = "/usr/bin/google-chrome"
                      browser = webdriver.Chrome(executable_path=r'/usr/local/bin/chromedriver', options=chrome_options)
                      browser.get(link)
                      try:
                          pass  # scrape process
                      except Exception:
                          pass  # other stuff
                      browser.quit()
                  

                  #multiprocess.py
                  import time
                  from multiprocessing import Pool
                  from scrape import *
                  
                  if __name__ == '__main__':
                      start_time = time.time()
                      #links = list of links to be scraped
                      pool = Pool(20)
                      results = pool.map(run_scrape, links)
                      pool.close()
                      print("Total Time Processed: "+"--- %s seconds ---" % (time.time() - start_time))
                  

                  Chrome, ChromeDriver Setup, Selenium Version

                  ChromeDriver 79.0.3945.36 (3582db32b33893869b8c1339e8f4d9ed1816f143-refs/branch-heads/3945@{#614})
                  Google Chrome 79.0.3945.79
                  Selenium Version: 4.0.0a3
                  

                  I'm wondering why Chrome is crashing while the other processes keep running?

                  Recommended Answer

                  I took your code, modified it a bit to suit my test environment, and here are the execution results:

                  • Code Block:

                  • multiprocess.py:

                  import time
                  from multiprocessing import Pool
                  from multiprocessingPool.scrape import run_scrape
                  
                  if __name__ == '__main__':
                      start_time = time.time()
                      links = ["https://selenium.dev/downloads/", "https://selenium.dev/documentation/en/"] 
                      pool = Pool(2)
                      results = pool.map(run_scrape, links)
                      pool.close()
                      print("Total Time Processed: "+"--- %s seconds ---" % (time.time() - start_time)) 
                  

                • scrape.py:

                  from selenium import webdriver
                  from selenium.common.exceptions import NoSuchElementException, TimeoutException
                  from selenium.webdriver.common.by import By
                  from selenium.webdriver.chrome.options import Options
                  
                  def run_scrape(link):
                      chrome_options = Options()
                      chrome_options.add_argument('--no-sandbox')
                      chrome_options.add_argument("--headless")
                      chrome_options.add_argument('--disable-dev-shm-usage')
                      chrome_options.add_argument("--lang=en")
                      chrome_options.add_argument("--start-maximized")
                      chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
                      chrome_options.add_experimental_option('useAutomationExtension', False)
                      chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36")
                      chrome_options.binary_location = r'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
                      browser = webdriver.Chrome(executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe', options=chrome_options)
                      browser.get(link)
                      try:
                          print(browser.title)
                      except (NoSuchElementException, TimeoutException):
                          print("Error")
                      browser.quit()
                  

                • Console Output:

                  Downloads
                  The Selenium Browser Automation Project :: Documentation for Selenium
                  Total Time Processed: --- 10.248600006103516 seconds ---
                  

                  It is pretty evident that your program is logically flawless and just perfect.

                  As you mentioned, this error surfaces after several hours of scraping, so I suspect it is due to the fact that WebDriver is not thread-safe. That said, if you can serialize access to the underlying driver instance, you can share a reference across more than one thread. This is not advisable, but you can always instantiate one WebDriver instance per thread.
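                  The "serialize access" idea can be sketched as a small lock-guarded wrapper. Everything here is illustrative, not a Selenium API: SerializedDriver is a hypothetical helper, and the wrapped object could be any driver-like instance exposing get() and quit().

```python
import threading

class SerializedDriver:
    """Guards every command sent to a shared driver with one lock,
    so only one thread talks to the browser at a time."""

    def __init__(self, driver):
        self._driver = driver
        self._lock = threading.Lock()

    def get(self, url):
        # Only one thread may issue a navigation command at a time.
        with self._lock:
            return self._driver.get(url)

    def quit(self):
        with self._lock:
            return self._driver.quit()
```

                  Note this only prevents interleaved commands; it does not solve the focused-tab problem described below, which is why one instance per thread remains the safer pattern.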

                  Ideally, the issue of thread-safety isn't in your code but in the actual browser bindings. They all assume there will only be one command at a time (i.e. as with a real user). On the other hand, you can always instantiate one WebDriver instance per thread, each of which will launch its own browsing tabs/windows. Up to this point your program seems perfect.
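                  The one-instance-per-thread pattern can be sketched with threading.local. The factory parameter is an assumption for illustration; in real use it would be a function that builds a configured webdriver.Chrome:

```python
import threading

_local = threading.local()

def get_thread_driver(factory):
    # Lazily create exactly one driver per thread; repeated calls
    # from the same thread return the same instance, while each
    # thread gets its own independent instance.
    if not hasattr(_local, "driver"):
        _local.driver = factory()
    return _local.driver
```

                  With this pattern each worker thread drives its own browser, so no locking is needed and commands cannot leak into another thread's tab/window.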

                  Now, different threads can run against the same WebDriver, but the test results will not be what you expect. The reason is that when you use multi-threading to run different tests on different tabs/windows, a little thread-safe coding is required; otherwise actions such as click() or send_keys() will go to whichever open tab/window currently has focus, regardless of the thread you expect to be running. Which essentially means all the tests will run simultaneously on the same focused tab/window rather than on their intended tabs/windows.

                  That concludes this article on Chrome crashing after several hours while multiprocessing using Selenium through Python. We hope the recommended answer helps.


