
Why does Kafka JDBC Connect insert data as BLOB instead of varchar?
This article explains why Kafka JDBC Connect inserts data as BLOB/CLOB instead of varchar, and how to deal with it; it should be a useful reference for anyone hitting the same problem.

Problem Description

I am using a Java producer to insert data into my Kafka topic. Then I use Kafka JDBC Connect to insert the data into my Oracle table. Below is my producer code.

package producer.serialized.avro;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Sender4 {

    public static void main(String[] args) {

        // Avro schema with three string fields.
        String flightSchema = "{\"type\":\"record\",\"name\":\"Flight\","
                + "\"fields\":[{\"name\":\"flight_id\",\"type\":\"string\"},"
                + "{\"name\":\"flight_to\",\"type\":\"string\"},"
                + "{\"name\":\"flight_from\",\"type\":\"string\"}]}";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://192.168.0.1:8081");

        Schema schema = new Schema.Parser().parse(flightSchema);

        GenericRecord avroRecord = new GenericData.Record(schema);
        avroRecord.put("flight_id", "myflight");
        avroRecord.put("flight_to", "QWE");
        avroRecord.put("flight_from", "RTY");

        ProducerRecord<String, GenericRecord> record = new ProducerRecord<>("topic9", avroRecord);

        // try-with-resources flushes and closes the producer, so the
        // asynchronous send is not dropped when the JVM exits.
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(record);
        }
    }
}

Below are my Kafka Connect properties.

name=test-sink-6
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=topic9
connection.url=jdbc:oracle:thin:@192.168.0.1:1521:usera
connection.user=usera
connection.password=usera
auto.create=true
table.name.format=FLIGHTS4
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://192.168.0.1:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://192.168.0.1:8081

From my schema, I am expecting the values inserted into my Oracle table to be varchar2. I created a table with three varchar2 columns. When I started my connector, nothing got inserted. Then I deleted the table and ran the connector with table auto-create mode on. This time the table got created and the values got inserted, but the column data type is CLOB. I want it to be varchar2, since that uses less storage.
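
To double-check what auto.create actually generated, the column metadata can be read back over plain JDBC. This is a minimal sketch, assuming the connection details from the connector config above and the Oracle JDBC driver on the classpath; the class name is made up for illustration:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Hypothetical helper (not part of the original question): prints the column
// types that the connector's auto.create generated for FLIGHTS4.
public class DescribeFlights4 {
    public static void main(String[] args) throws Exception {
        // Connection details reused from the connector properties above.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@192.168.0.1:1521:usera", "usera", "usera")) {
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet cols = meta.getColumns(null, null, "FLIGHTS4", null)) {
                while (cols.next()) {
                    // With the default Oracle dialect this prints CLOB/NCLOB, not VARCHAR2.
                    System.out.printf("%s -> %s%n",
                            cols.getString("COLUMN_NAME"),
                            cols.getString("TYPE_NAME"));
                }
            }
        }
    }
}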

Why is this happening, and how can I fix it? Thank you.

Recommended Answer

It looks like Kafka Connect's STRING schema type is mapped to Oracle's NCLOB:

| Schema Type | MySQL            | Oracle        | PostgreSQL       | SQLite  |
|-------------|------------------|---------------|------------------|---------|
| INT8        | TINYINT          | NUMBER(3,0)   | SMALLINT         | NUMERIC |
| INT16       | SMALLINT         | NUMBER(5,0)   | SMALLINT         | NUMERIC |
| INT32       | INT              | NUMBER(10,0)  | INT              | NUMERIC |
| INT64       | BIGINT           | NUMBER(19,0)  | BIGINT           | NUMERIC |
| FLOAT32     | FLOAT            | BINARY_FLOAT  | REAL             | REAL    |
| FLOAT64     | DOUBLE           | BINARY_DOUBLE | DOUBLE PRECISION | REAL    |
| BOOLEAN     | TINYINT          | NUMBER(1,0)   | BOOLEAN          | NUMERIC |
| STRING      | VARCHAR(256)     | NCLOB         | TEXT             | TEXT    |
| BYTES       | VARBINARY(1024)  | BLOB          | BYTEA            | BLOB    |
| 'Decimal'   | DECIMAL(65,s)    | NUMBER(*,s)   | DECIMAL          | NUMERIC |
| 'Date'      | DATE             | DATE          | DATE             | NUMERIC |
| 'Time'      | TIME(3)          | DATE          | TIME             | NUMERIC |
| 'Timestamp' | TIMESTAMP(3)     | TIMESTAMP     | TIMESTAMP        | NUMERIC |

Sources:
https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kafka_jdbc_sink.html
https://docs.confluent.io/current/connect/connect-jdbc/docs/sink_connector.html

Update

The OracleDialect class (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/dialect/OracleDialect.java) has the CLOB value hardcoded, and simply extending it with your own class to change that mapping will not help, because the dialect type is determined in a static method used by JdbcSinkTask (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkTask.java):

final DbDialect dbDialect = DbDialect.fromConnectionString(config.connectionUrl);
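
For illustration, here is a rough sketch of what that hardcoded mapping amounts to; it paraphrases the linked OracleDialect source rather than copying it, and the VARCHAR2 length shown is a guess. Getting varchar2 columns out of auto.create therefore means patching this mapping in the connector source and rebuilding it:

import org.apache.kafka.connect.data.Schema;

// Illustrative paraphrase of OracleDialect's hardcoded type mapping (see the
// link above) -- not the verbatim source. Subclassing cannot change this,
// because DbDialect.fromConnectionString() picks the dialect statically from
// the JDBC URL; a fork would edit the STRING case and rebuild the connector.
final class OracleTypeMappingSketch {
    static String sqlTypeFor(Schema.Type type) {
        switch (type) {
            case STRING:
                return "CLOB";             // hardcoded: auto.create emits CLOB columns
                // return "VARCHAR2(4000)"; // what a patched build could return (length is a guess)
            case BYTES:
                return "BLOB";
            default:
                // Remaining cases elided; see OracleDialect for the full mapping.
                throw new IllegalArgumentException("unmapped type: " + type);
        }
    }
}

Because the dialect is resolved from the JDBC connection URL by that static factory, there is no configuration hook for swapping in a custom dialect here.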
                  
