原文地址: http://stackoverflow.com/questions/19003106/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
MySQL, Error 126: Incorrect key file for table
Asked by superhero
I read the following question, which is relevant, but the replies didn't satisfy me: MySQL: #126 - Incorrect key file for table
The problem
When running a query I get this error
ERROR 126 (HY000): Incorrect key file for table
The question
When I try to find the problem I can't find one, so I don't know how to fix it with the repair command. Are there any pointers on how I can find the cause of this issue in any way other than those I have already tried?
The query
mysql> SELECT
-> Process.processId,
-> Domain.id AS domainId,
-> Domain.host,
-> Process.started,
-> COUNT(DISTINCT Joppli.id) AS countedObjects,
-> COUNT(DISTINCT Page.id) AS countedPages,
-> COUNT(DISTINCT Rule.id) AS countedRules
-> FROM Domain
-> JOIN CustomScrapingRule
-> AS Rule
-> ON Rule.Domain_id = Domain.id
-> LEFT JOIN StructuredData_Joppli
-> AS Joppli
-> ON Joppli.CustomScrapingRule_id = Rule.id
-> LEFT JOIN Domain_Page
-> AS Page
-> ON Page.Domain_id = Domain.id
-> LEFT JOIN Domain_Process
-> AS Process
-> ON Process.Domain_id = Domain.id
-> WHERE Rule.CustomScrapingRule_id IS NULL
-> GROUP BY Domain.id
-> ORDER BY Domain.host;
ERROR 126 (HY000): Incorrect key file for table '/tmp/#sql_2b5_4.MYI'; try to repair it
mysqlcheck
root@scraper:~# mysqlcheck -p scraper
Enter password:
scraper.CustomScrapingRule OK
scraper.Domain OK
scraper.Domain_Page OK
scraper.Domain_Page_Rank OK
scraper.Domain_Process OK
scraper.Log OK
scraper.StructuredData_Joppli OK
scraper.StructuredData_Joppli_Product OK
counted rows
mysql> select count(*) from CustomScrapingRule;
+----------+
| count(*) |
+----------+
| 26 |
+----------+
1 row in set (0.04 sec)
mysql> select count(*) from Domain;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.01 sec)
mysql> select count(*) from Domain_Page;
+----------+
| count(*) |
+----------+
| 134288 |
+----------+
1 row in set (0.17 sec)
mysql> select count(*) from Domain_Page_Rank;
+----------+
| count(*) |
+----------+
| 4671111 |
+----------+
1 row in set (11.69 sec)
mysql> select count(*) from Domain_Process;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.02 sec)
mysql> select count(*) from Log;
+----------+
| count(*) |
+----------+
| 41 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from StructuredData_Joppli;
+----------+
| count(*) |
+----------+
| 11433 |
+----------+
1 row in set (0.16 sec)
mysql> select count(*) from StructuredData_Joppli_Product;
+----------+
| count(*) |
+----------+
| 130784 |
+----------+
1 row in set (0.20 sec)
Update
Disk usage
root@scraper:/tmp# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 4.7G 15G 26% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 237M 4.0K 237M 1% /dev
tmpfs 49M 188K 49M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 245M 0 245M 0% /run/shm
none 100M 0 100M 0% /run/user
Answered by Cillier
It appears that your query is returning a large intermediate result set requiring the creation of a temporary table and that the configured location for mysql temporary disk tables (/tmp) is not large enough for the resulting temporary table.
You could try increasing the tmpfs partition size by remounting it:
mount -t tmpfs -o remount,size=1G tmpfs /tmp
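After remounting, you can verify the new size took effect with a quick check (works on any Linux system):

```shell
# Show the size and usage of whatever filesystem currently backs /tmp
df -h /tmp
```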
You can make this change permanent by editing /etc/fstab.
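As a sketch, a persistent 1 GB tmpfs /tmp entry in /etc/fstab might look like the following (the size value is an assumption; adjust it to your workload):

```ini
# /etc/fstab - hypothetical tmpfs entry for /tmp
tmpfs  /tmp  tmpfs  defaults,size=1G  0  0
```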
If you are unable to do this, you could try changing the location of disk temporary tables by editing the "tmpdir" entry in your my.cnf file (or adding it if it is not already there). Remember that the directory you choose must be writable by the mysql user.
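A sketch of what that entry might look like (the path /var/mysqltmp is an assumption; use any directory on a partition with enough free space that the mysql user can write to):

```ini
[mysqld]
# Hypothetical temporary-table directory on a larger partition
tmpdir = /var/mysqltmp
```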
You could also try preventing the creation of an on disk temporary table by increasing the values for the mysql configuration options:
tmp_table_size
max_heap_table_size
to larger values. You will need to increase both of the above parameters.
Example:
set global tmp_table_size = 1024 * 1024 * 1024;
set global max_heap_table_size = 1024 * 1024 * 1024;
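To confirm whether queries are actually spilling temporary tables to disk, you could compare these status counters before and after running the problem query (a diagnostic sketch; requires a live MySQL session):

```sql
-- Internal temporary tables created on disk vs. in total
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';

-- Current thresholds for in-memory temporary tables
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
```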
Answered by Carrie Kendall
If your /tmp mount on a Linux filesystem is mounted as overflow, often sized at 1MB, i.e.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 348K 1.6G 1% /run
/dev/xvda1 493G 6.9G 466G 2% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 0 7.9G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 4.0K 1020K 1% /tmp <------
this is likely because you did not specify /tmp as its own partition, your root filesystem filled up, and /tmp was remounted as a fallback.
I ran into this issue after running out of space on an EC2 volume. Once I had resized the volume, I ran into the /tmp overflow partition filling up while executing a complicated view.
To resolve this after clearing up space/resizing, simply unmount the fallback and it should be remounted at its original point (usually your root partition):
sudo umount -l /tmp
Note: -l will lazily unmount the disk.
Answered by John
Splitting a complex query into multiple smaller ones would be faster, without the need to increase the temp table size.
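As an illustration only (the temporary-table approach and names below are assumptions based on the question's schema, not the answerer's exact method), one of the expensive LEFT JOINs could be pre-aggregated separately so no single huge intermediate result set is built:

```sql
-- Hypothetical sketch: materialize the per-domain page counts first
CREATE TEMPORARY TABLE tmp_pages AS
SELECT Domain_id, COUNT(DISTINCT id) AS countedPages
FROM Domain_Page
GROUP BY Domain_id;

-- Then join the small pre-aggregated table instead of Domain_Page
SELECT d.id AS domainId, d.host, p.countedPages
FROM Domain d
LEFT JOIN tmp_pages p ON p.Domain_id = d.id
ORDER BY d.host;
```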
Answered by SA Soibal
In my case I just cleared the temp files from the temp location:
my.ini
tmpdir = "D:/xampp/tmp"
And it worked for me.
Answered by Kaushal Sachan
You just need to repair the table used in the search query; this problem generally occurs on search queries.
Go to "table_name" -> Operations -> Repair (just one click). The effect may take some time to apply.