Why does Kafka JDBC Connect insert data as BLOB instead of varchar?

This article explains why Kafka JDBC Connect inserts data as BLOB instead of varchar and how the problem was tracked down. It may be a useful reference for readers hitting the same issue.

Problem Description


I am using a Java producer to insert data into my Kafka topic. Then I use the Kafka JDBC sink connector to insert the data into my Oracle table. Below is my producer code.

package producer.serialized.avro;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Sender4 {

    public static void main(String[] args) {

        // Avro record schema with three string fields; the JDBC sink derives
        // the target table's column types from this schema.
        String flightSchema = "{\"type\":\"record\"," + "\"name\":\"Flight\","
                + "\"fields\":[{\"name\":\"flight_id\",\"type\":\"string\"},{\"name\":\"flight_to\",\"type\":\"string\"},{\"name\":\"flight_from\",\"type\":\"string\"}]}";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://192.168.0.1:8081");

        Schema.Parser parser = new Schema.Parser();
        Schema schema = parser.parse(flightSchema);

        GenericRecord avroRecord = new GenericData.Record(schema);
        avroRecord.put("flight_id", "myflight");
        avroRecord.put("flight_to", "QWE");
        avroRecord.put("flight_from", "RTY");

        ProducerRecord<String, GenericRecord> record = new ProducerRecord<>("topic9", avroRecord);

        // Parameterize the producer instead of using the raw type, and close it
        // so the buffered record is actually flushed to the broker.
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(record);
        }
    }
}
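If the producer works, the value schema should end up registered in Schema Registry under the subject topic9-value (assuming the default topic-name subject strategy). As a quick sanity check, it can be fetched over Schema Registry's standard REST API; host and port are the ones from the producer config above:

curl http://192.168.0.1:8081/subjects/topic9-value/versions/latest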
                  


Below are my Kafka Connect sink properties.

                  name=test-sink-6
                  connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
                  tasks.max=1
                  topics=topic9
                  connection.url=jdbc:oracle:thin:@192.168.0.1:1521:usera
                  connection.user=usera
                  connection.password=usera
                  auto.create=true
                  table.name.format=FLIGHTS4
                  key.converter=io.confluent.connect.avro.AvroConverter
                  key.converter.schema.registry.url=http://192.168.0.1:8081
                  value.converter=io.confluent.connect.avro.AvroConverter
                  value.converter.schema.registry.url=http://192.168.0.1:8081
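For reference, with a standalone Connect worker this connector file is passed alongside the worker configuration. The script name and file paths below are illustrative and vary by distribution (Apache Kafka ships connect-standalone.sh, Confluent Platform ships connect-standalone):

bin/connect-standalone worker.properties test-sink-6.properties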
                  


From my schema, I am expecting the values inserted into my Oracle table to be varchar2. I created a table with 3 varchar2 columns, but when I started my connector, nothing got inserted. I then dropped the table and ran the connector with table auto-create mode on. This time the table got created and the values got inserted, but the column data type is CLOB. I want it to be varchar2, since that uses less storage.


Why is this happening, and how can I fix it? Thank you.

Recommended Answer


It looks like Kafka Connect's STRING schema type is mapped to Oracle's NCLOB:

                  <table border="1">
                  <tr>
                  <th>Schema Type</th><th>MySQL</th><th>Oracle</th><th>PostgreSQL</th><th>SQLite</th>
                  </tr>
                  <tr>
                  <td>INT8</td><td>TINYINT</td><td>NUMBER(3,0)</td><td>SMALLINT</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>INT16</td><td>SMALLINT</td><td>NUMBER(5,0)</td><td>SMALLINT</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>INT32</td><td>INT</td><td>NUMBER(10,0)</td><td>INT</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>INT64</td><td>BIGINT</td><td>NUMBER(19,0)</td><td>BIGINT</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>FLOAT32</td><td>FLOAT</td><td>BINARY_FLOAT</td><td>REAL</td><td>REAL</td>
                  </tr>
                  <tr>
                  <td>FLOAT64</td><td>DOUBLE</td><td>BINARY_DOUBLE</td><td>DOUBLE PRECISION</td><td>REAL</td>
                  </tr>
                  <tr>
                  <td>BOOLEAN</td><td>TINYINT</td><td>NUMBER(1,0)</td><td>BOOLEAN</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>STRING</td><td>VARCHAR(256)</td><td>NCLOB</td><td>TEXT</td><td>TEXT</td>
                  </tr>
                  <tr>
                  <td>BYTES</td><td>VARBINARY(1024)</td><td>BLOB</td><td>BYTEA</td><td>BLOB</td>
                  </tr>
                  <tr>
                  <td>'Decimal'</td><td>DECIMAL(65,s)</td><td>NUMBER(*,s)</td><td>DECIMAL</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>'Date'</td><td>DATE</td><td>DATE</td><td>DATE</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>'Time'</td><td>TIME(3)</td><td>DATE</td><td>TIME</td><td>NUMERIC</td>
                  </tr>
                  <tr>
                  <td>'Timestamp'</td><td>TIMESTAMP(3)</td><td>TIMESTAMP</td><td>TIMESTAMP</td><td>NUMERIC</td>
                  </tr>
                  </table>

Source: https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kafka_jdbc_sink.html

                  https://docs.confluent.io/current/connect/connect-jdbc/docs/sink_connector.html
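Given the Flight schema above, every field is a STRING, so with auto.create=true the sink builds FLIGHTS4 entirely out of LOB columns (CLOB in the shipped OracleDialect, as the update below explains). A sketch of the kind of DDL that gets generated, as an illustration based on the mapping table rather than output captured from a real run; note the connector also quotes the Avro field names verbatim:

CREATE TABLE "FLIGHTS4" (
  "flight_id" CLOB NOT NULL,
  "flight_to" CLOB NOT NULL,
  "flight_from" CLOB NOT NULL
)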

Update


The OracleDialect class (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/dialect/OracleDialect.java) has the CLOB value hardcoded, and simply extending it with your own class to change that mapping will not help, because the dialect type is chosen in a static method in JdbcSinkTask (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkTask.java):

                  final DbDialect dbDialect = DbDialect.fromConnectionString(config.connectionUrl);
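For context, the mapping itself is a hardcoded switch in the dialect. The sketch below is a condensed paraphrase of the Oracle branch, simplified for illustration from the class linked above; it is not a verbatim copy of the source:

import org.apache.kafka.connect.data.Schema;

public class OracleTypeMappingSketch {

    // Condensed paraphrase of OracleDialect's hardcoded type mapping
    // (see the GitHub link above); simplified, not a verbatim copy.
    static String getSqlType(Schema.Type type) {
        switch (type) {
            case STRING: return "CLOB";   // every STRING field becomes a CLOB column
            case BYTES:  return "BLOB";   // every BYTES field becomes a BLOB column
            default:     return "NUMBER"; // numeric/temporal branches omitted here
        }
    }

    public static void main(String[] args) {
        System.out.println(getSqlType(Schema.Type.STRING)); // prints CLOB
    }
}

Because the dialect is selected from the JDBC connection string rather than from any configurable setting, changing this mapping in practice means patching the dialect source and rebuilding the connector.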
                  

That concludes this article on why Kafka JDBC Connect inserts data as BLOB instead of varchar. We hope the recommended answer helps, and thank you for supporting html5模板網!
