

How to ingest 250 tables into Kafka from MS SQL with Debezium
This article covers how to ingest 250 tables into Kafka from MS SQL with Debezium, which may be a useful reference for readers facing a similar problem.

Problem description


Hi, I have tried to build a Kafka Connect pipeline with PostgreSQL as the source and SQL Server as the destination. I used 3 Kafka brokers and need to consume 252 topics (one topic per PostgreSQL table). After running for more than an hour, it could only pull 218 of the 252 tables. The error I found is that SQL Server's deadlock mechanism holds a transaction and forces it to be retried; the Debezium replication slot is also still in place.


I use distributed connectors with at most 3 workers on the sink, but that may not be enough. I also tried a higher offset.time_out.ms of 60000 and a higher number of offset partitions (100). I'm afraid this is not the production level that I want. Can anyone give a suggestion about this case? Is there any calculation to decide the best number of workers I need?

Update


Here are some of the errors I get. I see that some connectors were killed. One tells me that a deadlock happened in SQL Server:

                  [2020-03-26 15:06:28,494] ERROR WorkerSinkTask{id=sql_server_sink_XXA-0} RetriableException from SinkTask: (org.apache.kafka.connect.runtime.WorkerSinkTask:552)
                  org.apache.kafka.connect.errors.RetriableException: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
                  
                      at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:93)
                      at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
                      at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
                      at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
                      at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
                      at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
                      at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
                      at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                      at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                      at java.base/java.lang.Thread.run(Thread.java:834)
                  Caused by: java.sql.SQLException: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
                  

Update: April 14, 2020


I still have this problem, and I forgot to mention how I deploy the connectors. Now I use 2 workers, one for the source and one for the sink. I list all of my tables and primary keys in a CSV file and loop through the rows to create the connectors, without any sleep or wait between them. I also use a single topic partition and 3 replicas for each topic. But I still get the SQL Server connection deadlock.
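
For reference, here is a minimal sketch of that kind of deployment loop against the Kafka Connect REST API. The Connect host, CSV layout, and connector settings (including the JDBC sink's max.retries and retry.backoff.ms, which govern how deadlock victims are retried) are assumptions for illustration, not taken from the question; the Thread.sleep call shows the staggering that the setup above omits:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch: create one JDBC sink connector per table via the Kafka Connect
// REST API, pausing between requests so all 252 connectors do not start
// writing to SQL Server at the same moment. Host, file name, and config
// values are hypothetical.
public class DeployConnectors {
    private static final String CONNECT_URL = "http://localhost:8083/connectors";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // tables.csv: one "table,pk" pair per line (hypothetical file)
        List<String> rows = Files.readAllLines(Path.of("tables.csv"));
        for (String row : rows) {
            String[] parts = row.split(",");
            String table = parts[0];
            String pk = parts[1];
            String body = """
                {
                  "name": "sql_server_sink_%s",
                  "config": {
                    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
                    "connection.url": "jdbc:sqlserver://mssql-host:1433;databaseName=target",
                    "topics": "%s",
                    "tasks.max": "1",
                    "insert.mode": "upsert",
                    "pk.mode": "record_value",
                    "pk.fields": "%s",
                    "max.retries": "10",
                    "retry.backoff.ms": "3000"
                  }
                }""".formatted(table, table, pk);
            HttpRequest request = HttpRequest.newBuilder(URI.create(CONNECT_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(table + " -> HTTP " + resp.statusCode());
            Thread.sleep(5_000); // stagger connector startup instead of creating all at once
        }
    }
}

Staggering connector creation limits how many insert bursts hit SQL Server at once, which may reduce the lock contention behind the deadlocks.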

Recommended answer


The problem may be that multiple tasks access the same SQL table at the same time, causing synchronization problems such as the deadlocks you mentioned.
Since you already have a large number of topics, and your connector can access them in parallel, I would suggest reducing the number of partitions for every topic to just 1 (reducing the number of partitions is not supported in Kafka, so you should delete and recreate every topic with the new number of partitions).
This way, every topic has only one partition, and every partition can be accessed by only a single thread (task/consumer), so there is no chance of parallel SQL transactions against the same table.
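
A minimal sketch of that delete-and-recreate step using the Kafka Admin API; the broker address and topic name are assumptions, and note that deleting a topic discards any data still in it:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Sketch: recreate a topic with a single partition so only one sink
// task can ever consume it. In practice, wait for the deletion to
// fully complete before recreating the topic.
public class RecreateTopicWithOnePartition {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        try (Admin admin = Admin.create(props)) {
            String topic = "public.my_table"; // hypothetical topic name
            admin.deleteTopics(List.of(topic)).all().get();
            // replication factor 3 matches the 3 brokers in the question
            admin.createTopics(List.of(new NewTopic(topic, 1, (short) 3))).all().get();
        }
    }
}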


Alternatively, a better approach is to create a single topic with 3 partitions (the same as the number of tasks/consumers you have) and make the producer use the SQL table name as the message key.
Kafka guarantees that messages with the same key always go to the same partition, so all messages for the same table will reside on a single partition (and be consumed by a single thread).


If you find it useful, I can attach more information about how to create a Kafka producer and send keyed messages.
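
As an illustration of that suggestion (the answer itself does not include code), here is a minimal keyed-producer sketch; the topic name, broker address, and payload are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: use the SQL table name as the message key so every record
// for the same table lands on the same partition.
public class KeyedTableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String topic = "all-tables";           // the single 3-partition topic
            String tableName = "public.my_table";  // key: rows from one table stay together
            String payload = "{\"id\": 1, \"name\": \"example\"}";
            // Same key -> same partition -> consumed by a single task, so no
            // two tasks write to the same SQL table concurrently.
            producer.send(new ProducerRecord<>(topic, tableName, payload));
        }
    }
}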


