Kafka Connect setup to send records from Aurora using AWS MSK
This article covers a Kafka Connect setup for sending records from Aurora through AWS MSK; it may be a useful reference for anyone facing the same problem.

Problem description


I have to send records from Aurora/MySQL to MSK, and from there to the Elasticsearch service:

Aurora --> Kafka Connect --> AWS MSK --> Kafka Connect --> Elasticsearch

A record in the Aurora table looks something like this, and I think it will go to AWS MSK in this format:

                  "o36347-5d17-136a-9749-Oe46464",0,"NEW_CASE","WRLDCHK","o36347-5d17-136a-9749-Oe46464","<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><caseCreatedPayload><batchDetails/>","CASE",08-JUL-17 10.02.32.217000000 PM,"TIME","UTC","ON","0a348753-5d1e-17a2-9749-3345,MN4,","","0a348753-5d1e-17af-9749-FGFDGDFV","EOUHEORHOE","2454-5d17-138e-9749-setwr23424","","","",,"","",""
                  

So, in order for Elasticsearch to consume the records, I need a proper schema, which means I have to use Schema Registry.

My questions

Question 1

How should I use Schema Registry for the above type of message, and is Schema Registry even required? Do I have to create a JSON structure for this, and if so, where do I keep it? I need more help understanding this.

                  I have edited

                  vim /usr/local/confluent/etc/schema-registry/schema-registry.properties
                  

I mentioned ZooKeeper there, but I don't know what kafkastore.topic=_schema is or how to link it to a custom schema.

When I started it anyway, I got this error:

                  Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic _schemas not present in metadata after 60000 ms.
                  

This is what I was expecting, because I did not do anything about the schema.

I do have the JDBC connector installed, and when I start it I get the error below:

                  Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                  

Question 2: Can I create two connectors on one EC2 instance (one JDBC and one Elasticsearch)? If yes, do I have to start both in separate CLIs?

Question 3: When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties, I see only property values like the following:

                  name=test-source-sqlite-jdbc-autoincrement
                  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
                  tasks.max=1
                  connection.url=jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  mode=incrementing
                  incrementing.column.name=id
                  topic.prefix=trf-aurora-fspaudit-
                  

In the above properties file, where can I specify the schema name and table name?

Based on the answer, I am updating my Kafka Connect JDBC configuration.

--------------- start JDBC Connect / Elasticsearch setup ---------------

wget http://packages.confluent.io/archive/5.2/confluent-5.2.0-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-5.2.0-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-5.2.0 /usr/local/confluent

wget https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
tar -xzf mysql-connector-java-5.1.48.tar.gz
sudo mv mysql-connector-java-5.1.48 /usr/local/confluent/share/java/kafka-connect-jdbc
                  

                  And then

                  vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
                  

Then I modified the properties below:

                  connection.url=jdbc:mysql://fdgfgdfgrter.us-east-1.rds.amazonaws.com:3306/trf
                  mode=incrementing
                  connection.user=admin
                  connection.password=Welcome123
                  table.whitelist=PANStatementInstanceLog
                  schema.pattern=dbo
                  

Lastly, I modified

                  vim /usr/local/confluent/etc/kafka/connect-standalone.properties
                  

and here I modified the properties below:

                  bootstrap.servers=b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092,b-6.ertert-riskaudit.ertet.c5.kafka.us-east-1.amazonaws.com:9092,b-1.ertert-riskaudit.ertert.c5.kafka.us-east-1.amazonaws.com:9092
                  key.converter.schemas.enable=true
                  value.converter.schemas.enable=true
                  offset.storage.file.filename=/tmp/connect.offsets
                  offset.flush.interval.ms=10000
                  plugin.path=/usr/local/confluent/share/java
                  

When I list topics, I do not see any topic listed for the table name.
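For reference, this is the kind of listing that should show the new topic; a sketch using the Confluent 5.2 CLI installed above (the broker address is one of the bootstrap servers configured in connect-standalone.properties):

/usr/local/confluent/bin/kafka-topics --list \
    --bootstrap-server b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092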

Stack trace for the error message:

                  [2020-01-03 07:40:57,169] ERROR Failed to create job for /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties (org.apache.kafka.connect.cli.ConnectStandalone:108)
                  [2020-01-03 07:40:57,169] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
                  java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                          at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
                          at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
                          at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:116)
                  Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                          at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
                          at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
                          at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:113)
                  
                          curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" IPaddressOfKCnode:8083/connectors/ -d '{"name": "emp-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:mysql://IPaddressOfLocalMachine:3306/test_db?user=root&password=pwd","table.whitelist": "emp","mode": "timestamp","topic.prefix": "mysql-" } }'
                  

Solution

"Is Schema Registry required?"

No. You can enable schemas in JSON records. The JDBC source can create them for you based on the table information:

                  value.converter=org.apache.kafka...JsonConverter 
                  value.converter.schemas.enable=true
                  
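For context, with value.converter.schemas.enable=true the JsonConverter wraps each record in a schema/payload envelope. A minimal sketch of what lands on the topic (the field name here is illustrative, not taken from the actual table):

{"schema": {"type": "struct", "name": "record", "optional": false,
            "fields": [{"field": "case_id", "type": "string", "optional": false}]},
 "payload": {"case_id": "o36347-5d17-136a-9749-Oe46464"}}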

"I mentioned ZooKeeper there, but I don't know what kafkastore.topic=_schema is"

If you want to use Schema Registry, you should be using kafkastore.bootstrap.servers with the Kafka address, not ZooKeeper. So remove kafkastore.connection.url.
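A minimal schema-registry.properties along those lines might look like this (the broker address is a placeholder; kafkastore.topic can stay at its default):

listeners=http://0.0.0.0:8081
# Point the Registry at the Kafka brokers, not ZooKeeper
kafkastore.bootstrap.servers=PLAINTEXT://b-1.example.us-east-1.amazonaws.com:9092
# Internal topic where schemas are stored (the default)
kafkastore.topic=_schemas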

Please read the docs for explanations of all the properties.

"I did not do anything about the schema."

Doesn't matter. The schemas topic gets created when the Registry first starts.

"Can I create two connectors on one EC2?"

                  Yes (ignoring available JVM heap space). Again, this is detailed in the Kafka Connect documentation.

Using standalone mode, you first pass the Connect worker configuration, then up to N connector properties files in one command; see the sketch after the link below.

Using distributed mode, you use the Kafka Connect REST API.

                  https://docs.confluent.io/current/connect/managing/configuring.html
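For example, one standalone worker could run both connectors from a single command. The file names below are hypothetical, and the Elasticsearch sink properties are a minimal sketch that assumes the Confluent Elasticsearch sink connector is installed:

# One worker process, two connectors, one command
/usr/local/confluent/bin/connect-standalone \
    /usr/local/confluent/etc/kafka/connect-standalone.properties \
    mysql-source.properties elasticsearch-sink.properties

# elasticsearch-sink.properties (sketch; the endpoint is a placeholder)
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=trf-aurora-fspaudit-PANStatementInstanceLog
connection.url=http://your-elasticsearch-endpoint:9200
type.name=_doc
key.ignore=true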

"When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties"

First of all, that's for SQLite, not MySQL/Postgres. You don't need to use the quickstart files; they are only there for reference.

                  Again, all properties are well documented

                  https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html#connect-jdbc

"I do have the JDBC connector installed, and when I start it I get the error below"

Here's more information about how you can debug that:

                  https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/
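One common cause of the No suitable driver error above: the MySQL driver JAR itself usually has to sit directly inside the kafka-connect-jdbc directory on plugin.path, but the install steps earlier moved the whole extracted folder, leaving the JAR nested one level deeper. A sketch of the usual fix (paths match the steps in the question):

cp /usr/local/confluent/share/java/kafka-connect-jdbc/mysql-connector-java-5.1.48/mysql-connector-java-5.1.48.jar \
   /usr/local/confluent/share/java/kafka-connect-jdbc/
# Verify the JAR is in place, then restart the Connect worker
ls /usr/local/confluent/share/java/kafka-connect-jdbc/ | grep mysql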


As stated before, I would personally suggest using Debezium/CDC where possible.

                  Debezium Connector for RDS Aurora
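A minimal Debezium MySQL source config might look like the following. The connector class and property names come from Debezium's MySQL connector documentation; the host, credentials, and server id are placeholders, and Aurora must have binlog_format=ROW enabled for CDC to work:

{
  "name": "aurora-cdc-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "your-aurora-endpoint.us-east-1.rds.amazonaws.com",
    "database.port": "3306",
    "database.user": "admin",
    "database.password": "******",
    "database.server.id": "184054",
    "database.server.name": "trf",
    "table.whitelist": "trf.PANStatementInstanceLog",
    "database.history.kafka.bootstrap.servers": "b-1.example.us-east-1.amazonaws.com:9092",
    "database.history.kafka.topic": "schema-changes.trf"
  }
}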
