

Camel JDBC StreamList query appears to load whole resultset before splitting

Problem description

I'm running a SQL consumer to read changes from a table, which is all well and good. However, there are occasions where changes happen en masse, and then my query breaks with an out-of-memory error, as you might expect.

Unfortunately, I'm stuck on Camel 2.17.6, so the StreamList option for the SQL component isn't available. (Although, according to Camel-SQL Why using StreamList seems to load all ResultSet?, this doesn't work as a stream list anyway, due to Spring JDBC limitations.)
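The Spring JDBC limitation mentioned above comes down to how template-style extraction works: the row-mapping loop drains the whole cursor and collects every mapped row into a list before anything is returned to the caller, so the caller only ever sees a fully materialized result. A minimal stand-in sketch of that shape (this is illustrative only, not Spring's actual code; the `Iterator` here is a hypothetical substitute for a JDBC `ResultSet` plus row mapper):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch of eager extraction: walk the whole cursor and
// collect every row into a List before returning.  Nothing downstream
// can start until the complete result is held in memory.
public class EagerExtractor {

    // Stand-in for "ResultSet + RowMapper": consumes the row source eagerly.
    public static List<String> extractAll(Iterator<String> cursor) {
        List<String> rows = new ArrayList<>();
        while (cursor.hasNext()) {
            rows.add(cursor.next());   // every row accumulates in memory
        }
        return rows;                   // returned only after the full load
    }
}
```

With a large table, that accumulating `List` is exactly where the heap goes, regardless of what the route does after the extraction step.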

So I've re-written my route using the JDBC component, which supports a stream list, and I'm still getting out-of-memory exceptions as soon as I increase the number of records to extract. It would appear that, for some reason, the JDBC component is trying to extract all the records before passing them to the splitter.

The route I have now takes the form:

                  from("timer:timer...")
                    .to( "language:constant:resource:classpath:pathToSqlStatement/sqlStatement.sql" )
                    .to( "jdbc:msSqlServerDataSource?outputType=StreamList" )
                    .split( body() ).streaming()
                    .setBody().simple("$body[XMLDOC]")
                    .setHeader("HeaderName").xpath("xpath/to/data")
                    .to("jms:topic:name");
                  

I did originally have an aggregation strategy, UseLatestAggregationStrategy, and an extra step after the split(), but I've stripped that out in an attempt to remove everything that could possibly result in the whole query being held in memory, but I can't see what else I can do now.

I note the question camel jdbc out of memory exception raises a similar problem, and didn't appear to have a resolution.

(I should note that the out-of-memory errors I've had do appear in different places, and included "GC overhead limit exceeded" at WinNTFileSystem, which I don't understand, and something else to do with a ZippedInputStream, which again I don't understand.)

Does that mean that StreamList doesn't work on the JDBC component either, or do I have to do something specific to ensure that the JDBC component doesn't try to cache the whole result set?

Answer

The StreamList output type has been supported in camel-sql since version 2.18.x. In earlier versions, the camel-sql component loads the result set into memory as a list. I don't think this can be avoided in camel-sql 2.17.x.

Aggregation/splitting only applies after the camel-sql component has loaded the result into memory.
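For the splitter to genuinely stream, the message body would need to be something like an Iterator that produces one row at a time on demand, rather than a pre-built List. A hypothetical sketch of that shape (plain Java, no Camel or JDBC involved; `RowIterator` and its fabricated rows are illustrative stand-ins for a wrapper that would call `resultSet.next()` per row):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Illustrative sketch of lazy row production: each row is materialized
// only when next() is called, so at most one row is held at a time.
public class RowIterator implements Iterator<String> {
    private final int totalRows;   // rows available in the pretend result set
    private int cursor = 0;        // index of the next row to fetch

    public RowIterator(int totalRows) {
        this.totalRows = totalRows;
    }

    @Override
    public boolean hasNext() {
        return cursor < totalRows;
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        // A real wrapper would advance the JDBC cursor and map the
        // current row here; this sketch just fabricates a row id.
        return "row-" + (cursor++);
    }

    public static void main(String[] args) {
        Iterator<String> rows = new RowIterator(3);
        StringBuilder seen = new StringBuilder();
        while (rows.hasNext()) {
            seen.append(rows.next()).append(' ');
        }
        System.out.println(seen.toString().trim()); // prints "row-0 row-1 row-2"
    }
}
```

The point of the sketch is the contrast with the eager behaviour described above: a consumer of this iterator (such as a streaming splitter) never needs the full result in memory at once.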

Another related answer.


