"INSERT IGNORE" vs "INSERT ... ON DUPLICATE KEY UPDATE"
This article looks at "INSERT IGNORE" versus "INSERT ... ON DUPLICATE KEY UPDATE" and should be a useful reference for anyone dealing with the same problem; follow along below.

Problem Description

While executing an INSERT statement with many rows, I want to skip duplicate entries that would otherwise cause failure. After some research, my options appear to be the use of either:

• ON DUPLICATE KEY UPDATE, which implies an unnecessary update at some cost, or
• INSERT IGNORE, which implies an invitation for other kinds of failure to slip in unannounced.

Am I right in these assumptions? What's the best way to simply skip the rows that might cause duplicates and just continue on to the other rows?
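
For concreteness, here is a minimal sketch of the two candidates on a hypothetical table (the table and column names are my assumptions, not from the question):

    -- hypothetical table with a primary key that the batch may collide with
    CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(20));

    -- option 1: rows that would violate a key are silently skipped (warning only)
    INSERT IGNORE INTO t (id, name) VALUES (1, 'a'), (2, 'b');

    -- option 2: rows that would violate a key instead update the existing row
    INSERT INTO t (id, name) VALUES (1, 'a'), (2, 'b')
      ON DUPLICATE KEY UPDATE name = VALUES(name);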

Recommended Answer

I suggest using INSERT...ON DUPLICATE KEY UPDATE.

If you use INSERT IGNORE, then the row won't actually be inserted if it would result in a duplicate key. But the statement won't generate an error; it generates a warning instead. These warning cases include:

• Inserting a duplicate key in columns with PRIMARY KEY or UNIQUE constraints.
• Inserting a NULL into a column with a NOT NULL constraint.
• Inserting a row into a partitioned table where the values you insert don't map to a partition.
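
As an illustrative sketch (not part of the quoted answer), the duplicate-key case surfaces only as a warning; this assumes a table t with a UNIQUE key on col that already contains the value 1:

    mysql> insert ignore into t (col) values (1);
    Query OK, 0 rows affected, 1 warning (0.00 sec)

    mysql> show warnings;
    +---------+------+-----------------------------------+
    | Level   | Code | Message                           |
    +---------+------+-----------------------------------+
    | Warning | 1062 | Duplicate entry '1' for key 'col' |
    +---------+------+-----------------------------------+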

If you use REPLACE, MySQL actually does a DELETE followed by an INSERT internally, which has some unexpected side effects:

• A new auto-increment ID is allocated.
• Dependent rows with foreign keys may be deleted (if you use cascading foreign keys), otherwise the REPLACE is prevented.
• Triggers that fire on DELETE are executed unnecessarily.
• The side effects are propagated to replicas as well.
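
A minimal sketch of the trigger side effect (the schema below is an assumption for illustration, not from the answer): a REPLACE that collides with an existing unique value fires the DELETE trigger even though no explicit DELETE was issued.

    CREATE TABLE t2 (id SERIAL PRIMARY KEY, u INT, UNIQUE KEY (u));
    CREATE TABLE audit (msg VARCHAR(100));

    CREATE TRIGGER t2_del AFTER DELETE ON t2
      FOR EACH ROW INSERT INTO audit (msg) VALUES (CONCAT('deleted id ', OLD.id));

    INSERT INTO t2 (u) VALUES (10);
    REPLACE INTO t2 (u) VALUES (10);  -- internally DELETE + INSERT: the trigger fires
    SELECT * FROM audit;              -- now contains 'deleted id 1'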

Correction: both REPLACE and INSERT...ON DUPLICATE KEY UPDATE are non-standard, proprietary inventions specific to MySQL. ANSI SQL 2003 defines a MERGE statement that can solve the same need (and more), but MySQL does not support the MERGE statement.
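
For reference, a sketch of an ANSI-style MERGE; this is standard SQL that will not run on MySQL, and the exact syntax varies slightly between products that do support it (e.g. SQL Server, Oracle, DB2):

    MERGE INTO target AS t
    USING (VALUES (1, 20)) AS s (id, u)
      ON t.id = s.id
    WHEN MATCHED THEN
      UPDATE SET u = s.u
    WHEN NOT MATCHED THEN
      INSERT (id, u) VALUES (s.id, s.u);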

A user tried to edit this post (the edit was rejected by moderators). The edit tried to add the claim that INSERT...ON DUPLICATE KEY UPDATE causes a new auto-increment id to be allocated. It's true that a new id is generated, but it is not used in the changed row.

See the demonstration below, tested with Percona Server 5.5.28, with the configuration variable innodb_autoinc_lock_mode=1 (the default):

                  mysql> create table foo (id serial primary key, u int, unique key (u));
                  mysql> insert into foo (u) values (10);
                  mysql> select * from foo;
                  +----+------+
                  | id | u    |
                  +----+------+
                  |  1 |   10 |
                  +----+------+
                  
                  mysql> show create table foo\G
                  CREATE TABLE `foo` (
                    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
                    `u` int(11) DEFAULT NULL,
                    PRIMARY KEY (`id`),
                    UNIQUE KEY `u` (`u`)
                  ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
                  
                  mysql> insert into foo (u) values (10) on duplicate key update u = 20;
                  mysql> select * from foo;
                  +----+------+
                  | id | u    |
                  +----+------+
                  |  1 |   20 |
                  +----+------+
                  
                  mysql> show create table foo\G
                  CREATE TABLE `foo` (
                    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
                    `u` int(11) DEFAULT NULL,
                    PRIMARY KEY (`id`),
                    UNIQUE KEY `u` (`u`)
                  ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
                  

The above demonstrates that the IODKU statement detects the duplicate and invokes the update to change the value of u. Note that AUTO_INCREMENT=3 indicates an id was generated but not used in the row.

Whereas REPLACE does delete the original row and insert a new row, generating and storing a new auto-increment id:

                  mysql> select * from foo;
                  +----+------+
                  | id | u    |
                  +----+------+
                  |  1 |   20 |
                  +----+------+
                  mysql> replace into foo (u) values (20);
                  mysql> select * from foo;
                  +----+------+
                  | id | u    |
                  +----+------+
                  |  3 |   20 |
                  +----+------+
                  

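Back to the original question of simply skipping duplicates: a common idiom (a sketch, not part of the quoted answer) is to make the ON DUPLICATE KEY UPDATE clause a no-op, so duplicate rows are left untouched while the remaining rows are still inserted:

    -- assumes a fresh table shaped like foo above; 'u = u' changes nothing for duplicates
    INSERT INTO foo (u) VALUES (10), (20), (30)
      ON DUPLICATE KEY UPDATE u = u;
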
That concludes this article on "INSERT IGNORE" vs "INSERT ... ON DUPLICATE KEY UPDATE". We hope the recommended answer is helpful, and thank you for supporting html5模板網!


