MySQL: The total number of locks exceeds the lock table size

Warning: this is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/6901108/


The total number of locks exceeds the lock table size

Tags: mysql, sql

Asked by maxman92

I'm running a report in MySQL. One of the queries involves inserting a large number of rows into a temp table. When I try to run it, I get this error:

Error code 1206: The number of locks exceeds the lock table size.

The queries in question are:

create temporary table SkusBought(
    customerNum int(11),
    sku int(11),
    typedesc char(25),
    key `customerNum` (customerNum)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

insert into SkusBought
select t1.* from
    (select customer, sku, typedesc from transactiondatatransit
     where (cat = 150 or cat = 151)
     AND daysfrom07jan1 > 731
     group by customer, sku
     union
     select customer, sku, typedesc from transactiondatadelaware
     where (cat = 150 or cat = 151)
     AND daysfrom07jan1 > 731
     group by customer, sku
     union
     select customer, sku, typedesc from transactiondataprestige
     where (cat = 150 or cat = 151)
     AND daysfrom07jan1 > 731
     group by customer, sku) t1
join
    (select customernum from topThreetransit group by customernum) t2
on t1.customer = t2.customernum;

I've read that changing the configuration file to increase the buffer pool size will help, but that does nothing. What would be the way to fix this, either as a temporary workaround or a permanent fix?

EDIT: changed part of the query. Shouldn't affect it, but I did a find-replace all and didn't realize it screwed that up. Doesn't affect the question.

EDIT 2: Added typedesc to t1. I changed it in the query but not here.

Answered by PHP Bugs

This issue can be resolved by setting a higher value for the MySQL variable innodb_buffer_pool_size. The default value of innodb_buffer_pool_size is 8,388,608 bytes (8 MB).

To change the value of innodb_buffer_pool_size, follow the steps below.

  1. Locate the file my.cnf on the server. On Linux servers this is usually at /etc/my.cnf
  2. Add the line innodb_buffer_pool_size=64M to this file
  3. Restart the MySQL server

To restart the MySQL server, you can use either of the two options below:

  1. service mysqld restart
  2. /etc/init.d/mysqld restart
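
Side note: on MySQL 5.7.5 and later, innodb_buffer_pool_size is a dynamic variable, so it can also be raised at runtime without a restart. A minimal sketch (the 128 MB value is only an example; size it for your data and RAM):

SET GLOBAL innodb_buffer_pool_size = 134217728;  -- value is in bytes; 134217728 = 128 MB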

Reference: The total number of locks exceeds the lock table size

Answered by Michael Goltsman

I found another way to solve it: use a table lock. Of course, this may be inappropriate for your application if you need to update the table at the same time.

See: Try using LOCK TABLES to lock the entire table, instead of the default action of InnoDB's MVCC row-level locking. If I'm not mistaken, the "lock table" refers to the InnoDB internal structure that stores row and version identifiers for the MVCC implementation, with a bit marking whether a row is being modified in a statement; with a table of 60 million rows, it probably exceeds the memory allocated to it. The LOCK TABLES command should alleviate this problem by setting a table-level lock instead of row-level locks:

SET @@AUTOCOMMIT=0;
LOCK TABLES avgvol WRITE, volume READ;
INSERT INTO avgvol(date,vol)
SELECT date,avg(vol) FROM volume
GROUP BY date;
UNLOCK TABLES;

Jay Pipes, Community Relations Manager, North America, MySQL Inc.
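
Applied to the query from the question, the same idea might look roughly like the sketch below (untested against the OP's schema). Note that the temporary table SkusBought does not need to be listed: a session can always access its own temporary tables, even while LOCK TABLES is in effect.

SET @@AUTOCOMMIT=0;
LOCK TABLES transactiondatatransit READ, transactiondatadelaware READ,
            transactiondataprestige READ, topThreetransit READ;
-- run the INSERT INTO SkusBought ... SELECT from the question here
UNLOCK TABLES;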

Answered by MicSim

From the MySQL documentation (which, as I can see, you have already read):

1206 (ER_LOCK_TABLE_FULL)

The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.

If increasing innodb_buffer_pool_size doesn't help, then just follow the advice in the bolded part and split your INSERT into three. Skip the UNIONs and make three INSERTs, each with a JOIN to the topThreetransit table.
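
For example, the first of the three might look something like this sketch (untested, reusing the tables and filters from the question; the other two would only change the source table):

INSERT INTO SkusBought (customerNum, sku, typedesc)
SELECT t1.customer, t1.sku, t1.typedesc
FROM (select customer, sku, typedesc from transactiondatatransit
      where (cat = 150 or cat = 151)
      AND daysfrom07jan1 > 731
      group by customer, sku) t1
JOIN (select customernum from topThreetransit group by customernum) t2
  ON t1.customer = t2.customernum;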

Answered by dejoma

I am running MySQL on Windows with MySQL Workbench. Go to Server > Server Status. At the top it shows the configuration file path (C:\ProgramData\MySQL\...\my.ini).

Then in the file my.ini, press Ctrl+F and find buffer_pool_size. Set the value higher; I would recommend 64 MB (the default is 8 MB).

Restart the server by going to Instance > Startup/Shutdown > Stop Server (and then start the server again).

In my case, the problem was that I could not delete entries from my table.
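
For what it's worth, if the failing statement is a large DELETE and raising the buffer pool is not an option, deleting in batches sidesteps the error because each statement then takes far fewer row locks. A rough sketch; the table name, condition, and batch size below are placeholders:

-- Repeat until it reports 0 rows affected; each batch commits on its own
DELETE FROM my_table
WHERE some_condition
LIMIT 10000;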

Answered by Brad

First, you can use the SQL command show global variables like 'innodb_buffer%'; to check the buffer size.
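
If you only want the single value, a quick check like this also works (the MB conversion is just for readability):

SELECT @@innodb_buffer_pool_size;                              -- bytes
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS size_in_mb;  -- megabytes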

The solution is to find your my.cnf file and add:

[mysqld]
innodb_buffer_pool_size=1G # depends on your data and machine

DO NOT forget to add the [mysqld] section header; otherwise, it won't work.

In my case (Ubuntu 16.04), my.cnf is located under the folder /etc/mysql/.

Answered by Ch HaXam

On Windows: if you have MySQL Workbench, go to Server Status and find the location of the running server's configuration file. In my case it was:

C:\ProgramData\MySQL\MySQL Server 5.7

Open the my.ini file and find innodb_buffer_pool_size. Set the value higher; the default value is 8M. This is how I fixed the problem.

Answered by Forrest Pugh

If you have properly structured your tables so that each contains relatively unique values, then a less intensive way to do this would be three separate INSERT INTO statements, one for each table, with the join filter in place for each insert:

INSERT INTO SkusBought...

SELECT t1.customer, t1.sku, t1.typedesc
FROM transactiondatatransit AS t1
LEFT OUTER JOIN topThreetransit AS t2
    ON t1.customer = t2.customernum
WHERE t2.customernum IS NOT NULL

Repeat this for the other two tables; copy/paste is a fine method, simply change the FROM table name. If you are trying to prevent duplicate entries in your SkusBought table, you can add the following join in each section before the WHERE clause:

LEFT OUTER JOIN SkusBought AS T3
ON  t1.customer = t3.customer
AND t1.sku = t3.sku

and then add this as the last line of the WHERE clause:

AND t3.customer IS NULL

Your initial code uses a number of subqueries, and the UNION statement can be expensive: it first creates its own temporary table to gather the data from the three separate sources before inserting into the table you want, while also running another subquery to filter the results.

Answered by Antony

It is worth saying that the figure used for this setting is in BYTES - found that out the hard way!
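
Concretely, a bare number in the option file is read as bytes, while the usual single-letter suffixes are also accepted; these two lines are equivalent:

innodb_buffer_pool_size=134217728   # bytes (134217728 = 128M)
innodb_buffer_pool_size=128M        # same size, using the M suffix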

Answered by Joseph Shih

The answer below does not directly answer the OP's question. However, I'm adding it here because this page is the first result when you Google "The total number of locks exceeds the lock table size".

If the query you are running is parsing an entire table that spans millions of rows, you can try a while loop instead of changing limits in the configuration.

The while loop will break it into pieces. Below is an example looping over an indexed DATETIME column.

# Drop
DROP TABLE IF EXISTS
new_table;

# Create (we will add keys later)
CREATE TABLE
new_table
(
    num INT(11),
    row_id VARCHAR(255),
    row_value VARCHAR(255),
    row_date DATETIME
);

# Change the delimiter
DELIMITER //

# Create procedure
CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
BEGIN

    # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter like DAY if you still get the lock errors.
    WHILE current_loop_date <= NOW() DO

        # Copy one week's worth of rows per statement, so each statement takes fewer locks
        INSERT INTO
            new_table
            (
                num,
                row_id,
                row_value,
                row_date
            )
        SELECT
            # Do something interesting here
            num,
            row_id,
            row_value,
            row_date
        FROM
            old_table
        WHERE
            row_date >= current_loop_date AND
            row_date < current_loop_date + INTERVAL 1 WEEK;

        # Increment
        SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;

    END WHILE;

END//

# Change the delimiter back
DELIMITER ;

# Run
CALL do_repeat('2017-01-01');

# Cleanup
DROP PROCEDURE IF EXISTS do_repeat;

# Add keys
ALTER TABLE
    new_table
MODIFY COLUMN
    num int(11) NOT NULL,
ADD PRIMARY KEY
    (num),
ADD KEY
    row_id (row_id) USING BTREE,
ADD KEY
    row_date (row_date) USING BTREE;

You can also adapt it to loop over the "num" column if your table doesn't use a date.
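
A minimal sketch of that variation might look like the following (assuming num is an indexed integer; the start, end, and batch-size values are made up for illustration):

# Change the delimiter
DELIMITER //

# Same idea as above, but batching on an integer key instead of a date
CREATE PROCEDURE do_repeat_num(IN current_num INT, IN max_num INT, IN batch_size INT)
BEGIN

    WHILE current_num <= max_num DO

        INSERT INTO new_table (num, row_id, row_value, row_date)
        SELECT num, row_id, row_value, row_date
        FROM old_table
        WHERE num >= current_num
          AND num < current_num + batch_size;

        # Increment
        SET current_num = current_num + batch_size;

    END WHILE;

END//

# Change the delimiter back
DELIMITER ;

# Run, then clean up
CALL do_repeat_num(1, 60000000, 500000);
DROP PROCEDURE IF EXISTS do_repeat_num;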

Hope this helps someone!