MySQL fetch 时间优化
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/15547040/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
MySQL fetch time optimization
提问by andriy
I have a table with 2 million records, but it will soon grow much larger. Basically this table contains points of interest of an image with their respective descriptors. When I try to execute a query that selects points spatially near the query point, the total execution time is too long. More precisely, Duration / Fetch = 0.484 sec / 27.441 sec. And the query is quite simple, returning only ~17000 rows.
我有一个包含 200 万条记录的表,但它很快会增长得更多。基本上,该表包含一幅图像的兴趣点及其相应的描述符。当我尝试执行选择空间上靠近查询点的点的查询时,总执行时间太长。更准确地说,Duration / Fetch = 0.484 sec / 27.441 sec。查询非常简单,只返回约 17000 行。
My query is:
我的查询是:
SELECT fp.fingerprint_id, fp.coord_x, fp.coord_y, fp.angle,
fp.desc1, fp.desc2, fp.desc3, fp.desc4, fp.desc5, fp.desc6, fp.desc7, fp.desc8, fp.desc9, fp.desc10,
fp.desc11, fp.desc12, fp.desc13, fp.desc14, fp.desc15, fp.desc16, fp.desc17, fp.desc18, fp.desc19,
fp.desc20, fp.desc21, fp.desc22, fp.desc23, fp.desc24, fp.desc25, fp.desc26, fp.desc27, fp.desc28,
fp.desc29, fp.desc30, fp.desc31, fp.desc32
FROM fingerprint fp
WHERE
fp.is_strong_point = 1 AND
(coord_x BETWEEN 193-40 AND 193+40) AND (coord_y BETWEEN 49-15 AND 49+15 )
LIMIT 1,1000000;
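One thing worth checking first (an editor's sketch, not from the original post) is whether MySQL can use an index for this WHERE clause at all; EXPLAIN shows the chosen access path:

```sql
-- EXPLAIN reports the access type (e.g. ALL = full table scan,
-- range = index range scan) and which key, if any, serves the filter.
EXPLAIN SELECT fp.fingerprint_id, fp.coord_x, fp.coord_y
FROM fingerprint fp
WHERE fp.is_strong_point = 1
  AND (coord_x BETWEEN 193-40 AND 193+40)
  AND (coord_y BETWEEN 49-15 AND 49+15);
```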
That is what I've done.
这就是我所做的。
- I've tried changing key_buffer_size in my.ini, but didn't see much change.
- In addition, I've tried setting coord_x and coord_y as indexes, but the query time became slower.
- The table is partitioned by range of coord_x field, which gave me better results.
- 我试过修改 my.ini 中的 key_buffer_size,但没有看到太大变化。
- 此外,我尝试把 coord_x 和 coord_y 设为索引,但查询时间反而变慢了。
- 该表按 coord_x 字段的范围进行分区,这给了我更好的结果。
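As a hedged sketch (an editor's addition, not part of the original post): indexing coord_x and coord_y separately often does not help, because MySQL typically uses only one index per table access. A composite index covering all three filter columns might work better; the table and column names below are taken from the question:

```sql
-- Composite index: equality on is_strong_point first, then a range scan on
-- coord_x; coord_y still has to be checked row-by-row, since a B-tree can
-- only range-scan one dimension. For true 2-D lookups, a SPATIAL index on a
-- POINT column would be the fuller solution.
ALTER TABLE fingerprint
  ADD INDEX idx_strong_coords (is_strong_point, coord_x, coord_y);
```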
How can I reduce the Fetch time? Is it possible to reduce it to milliseconds?
如何减少 Fetch 时间?是否可以将其减少到毫秒?
采纳答案by ITroubs
If I am right, the query itself is really fast; what is slow is fetching the data from your database. It takes 27 seconds to load the ~17000 results from your storage.
如果我没说错,查询本身确实很快,慢的是从数据库中取回数据。从你的存储中加载约 17000 条结果需要 27 秒。
It looks like you are using the wrong storage engine. Try switching the table from one storage engine to another.
看起来你使用了不合适的存储引擎。尝试将表从一种存储引擎切换到另一种。
For maximum speed you can use the MEMORY engine. The only drawback is that, if you need to make dynamic changes to the table, you would have to keep a copy of it in another engine, and after any change you would have to reload the differences or the entire table.
为了获得最大速度,你可以使用 MEMORY 引擎。唯一的缺点是:如果需要对表进行动态更改,你必须在另一个引擎中保留该表的副本,并且在每次更改之后重新加载差异或整个表。
Also, you would have to write a script that runs when you restart your server, so that the memory table is reloaded on startup of your MySQL server.
此外,你还需要编写一个在服务器重启时运行的脚本,以便在 MySQL 服务器启动时重新加载内存表。
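A minimal sketch of the approach described above, assuming the table and column names from the question (note that the MEMORY engine does not support BLOB/TEXT columns, so the descriptor columns must be fixed-length types):

```sql
-- Create an in-memory copy of the strong points:
CREATE TABLE fingerprint_mem ENGINE=MEMORY
  SELECT * FROM fingerprint WHERE is_strong_point = 1;

-- After changes to the base table, reload the copy:
TRUNCATE TABLE fingerprint_mem;
INSERT INTO fingerprint_mem
  SELECT * FROM fingerprint WHERE is_strong_point = 1;
```

The reload statements could live in a file referenced by the init_file option in my.cnf, so the copy is rebuilt automatically on server startup (MEMORY tables keep their structure but lose their rows on restart).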
回答by andy
I faced a slow fetch issue too (MySQL, InnoDB). Finally I found that innodb_buffer_pool_size was set to 8MB by default on my system, which is not enough to handle the query. After increasing it to 1GB, performance seems fine:
我也遇到了 fetch 很慢的问题(MySQL、InnoDB)。最后我发现 innodb_buffer_pool_size 在我的系统上默认设置为 8MB,这不足以处理该查询。将其增加到 1GB 后,性能似乎没问题:
Duration / Fetch
353 row(s) returned 34.422 sec / 125.797 sec (8MB innodb buffer)
353 row(s) returned 0.500 sec / 1.297 sec (1GB innodb buffer)
UPDATE:
更新:
To change innodb_buffer_pool_size, add this to your my.cnf:
要更改 innodb_buffer_pool_size,将以下内容添加到你的 my.cnf:
innodb_buffer_pool_size=1G
Restart your MySQL server to make it take effect.
重启你的 MySQL 服务器使其生效。
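As a side note (an editor's addition, not part of the original answer): the current value can be inspected from SQL, and on MySQL 5.7.5 and later the buffer pool can be resized without a restart:

```sql
-- Current size in bytes:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Dynamic resize (MySQL 5.7.5+); the value is rounded up to a multiple of
-- innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances:
SET GLOBAL innodb_buffer_pool_size = 1073741824;  -- 1 GB
```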
Reference: How to change value for innodb_buffer_pool_size in MySQL on Mac OS?
回答by StephenZer0-
I found a fix: just disable AVG or any other antivirus on your system, then restart your Workbench.
我找到了一个解决办法:只需禁用系统中的 AVG 或其他杀毒软件,然后重启你的 Workbench。
回答by MANISH SINGH CHOUHAN
Make sure that this line is not present in your pom.xml:
确保你的 pom.xml 中没有写入这一行:
<property name="hbm2ddl.auto">create</property>
If it is present, remove it.
如果存在,请删除它。