pyspark mysql jdbc load: An error occurred while calling o23.load: No suitable driver
This article describes how to deal with the error "An error occurred while calling o23.load: No suitable driver" when loading a MySQL table over JDBC in PySpark. It should be a useful reference for anyone hitting the same problem.

Problem description

I use the docker image sequenceiq/spark on my Mac to study these Spark examples. During the study process I upgraded the Spark inside that image to 1.6.1 according to this answer, and the error occurred when I started the Simple Data Operations example. Here is what happened:

When I run df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load() it raises an error, and the full stack trace from the pyspark console is as follows:

                  Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
                  [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
                  Type "help", "copyright", "credits" or "license" for more information.
                  16/04/12 22:45:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
                  Welcome to
                        ____              __
                       / __/__  ___ _____/ /__
                      _\ \/ _ \/ _ `/ __/  '_/
                     /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
                        /_/
                  
                  Using Python version 2.6.6 (r266:84292, Jul 23 2015 15:22:56)
                  SparkContext available as sc, HiveContext available as sqlContext.
                  >>> url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
                  >>> df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  16/04/12 22:46:05 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:06 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
                  16/04/12 22:46:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
                  16/04/12 22:46:16 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:17 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                    File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
                      return self._df(self._jreader.load())
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
                    File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
                      return f(*a, **kw)
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
                  py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
                  : java.sql.SQLException: No suitable driver
                      at java.sql.DriverManager.getDriver(DriverManager.java:278)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at scala.Option.getOrElse(Option.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
                      at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
                      at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
                      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
                      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                      at java.lang.reflect.Method.invoke(Method.java:606)
                      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
                      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
                      at py4j.Gateway.invoke(Gateway.java:259)
                      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
                      at py4j.commands.CallCommand.execute(CallCommand.java:79)
                      at py4j.GatewayConnection.run(GatewayConnection.java:209)
                      at java.lang.Thread.run(Thread.java:744)
                  
                  >>>
                  

Here is what I have tried so far:

1. Download mysql-connector-java-5.0.8-bin.jar and put it into /usr/local/spark/lib/. It still gives the same error.

2. Create t.py like this:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PythonSQL")
sqlContext = SQLContext(sc)

# url as in the interactive session above
url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"

df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load()

df.printSchema()
countsByAge = df.groupBy("age").count()
countsByAge.show()
countsByAge.write.format("json").save("file:///usr/local/mysql/mysql-connector-java-5.0.8/db.json")


Then I tried spark-submit --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py. The result is still the same.

3. Then I tried pyspark --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4], both with and without the t.py above; still the same.

During all of this, mysql is running. And here is my OS info:

                  # rpm --query centos-release  
                  centos-release-6-5.el6.centos.11.2.x86_64
                  

And the hadoop version is 2.6.

Now I don't know where to go next, so I hope someone can give some advice. Thanks!

Recommended answer

I ran into "java.sql.SQLException: No suitable driver" when I tried to have my script write to MySQL.
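As background (my reading of the stack trace above): the exception is raised by java.sql.DriverManager.getDriver(url) when no registered JDBC driver claims the jdbc:mysql: URL, which usually means the MySQL connector class was never loaded on the Spark driver's classpath.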

Here's what I did to fix that.

In script.py:

                  df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                                    "?user=my_user&password=my_password",
                                table="my_table",
                                mode="append",
                                properties={"driver": 'com.mysql.jdbc.Driver'})
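Specifying "driver" in properties makes Spark load the named driver class itself instead of leaving the lookup to DriverManager, which is what avoids the "No suitable driver" failure. (This assumes the connector jar is actually on the classpath at runtime; the --packages flag below takes care of that.)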
                  

Then I ran spark-submit this way:

                  SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.1/libexec spark-submit --packages mysql:mysql-connector-java:5.1.39 ./script.py
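The --packages mysql:mysql-connector-java:5.1.39 flag tells spark-submit to resolve the connector from Maven Central at launch time and put it on both the driver and executor classpaths, so no manual jar copying is needed.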
                  

Note that SPARK_HOME is specific to where Spark is installed. For your environment, https://github.com/sequenceiq/docker-spark/blob/master/README.md might help.

In case all the above is confusing, try this. In t.py, replace

                  sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  

with

sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").option("driver", 'com.mysql.jdbc.Driver').load()
                  

Then run

                  spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py
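
Putting the pieces together, here is a minimal sketch of what the fixed t.py could look like (the URL, credentials, and table name are the placeholders from the question, not values you should use as-is; note that MySQL Connector/J separates URL query parameters with '&'):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PythonSQL")
sqlContext = SQLContext(sc)

# Placeholder connection details from the question.
url = "jdbc:mysql://localhost:3306/test?user=root&password=myPassWord"

# Naming the driver class explicitly means Spark loads it up front
# instead of relying on DriverManager to find a suitable driver.
df = (sqlContext.read.format("jdbc")
      .option("url", url)
      .option("dbtable", "people")
      .option("driver", "com.mysql.jdbc.Driver")
      .load())

df.printSchema()

Run it with the --packages form shown above so the connector jar is actually present on the driver and executors.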
                  

This concludes the article on the pyspark mysql jdbc load "An error occurred while calling o23.load: No suitable driver" problem. We hope the recommended answer above is helpful.

