MySQL Huge Table Optimization

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5557838/

Date: 2020-08-31 19:25:39  Source: igfitidea

MySQL optimization of huge table

Tags: mysql, optimization

提问by lordstyx

I've been trying to get some speed improvements on a certain SELECT query. The situation is as follows: there is a (in my eyes) huge crossing table. It currently has about 20 million rows, but I expect this to grow a lot. Based on this crossing table I need to create another table. To do so, I need to execute the following query:

SELECT hugeCrossingTable.field3, otherTable.field1, hugeCrossingTable.field2 * otherTable.field3 AS someName
FROM hugeCrossingTable 
INNER JOIN otherTable ON hugeCrossingTable.field1 = otherTable.field2

Now this currently results in about a million rows. I already have indexes on field1 in both tables, but it still takes 18 minutes to finish. I thought about splitting the table, but then I'd need to find a way to split the data, and since it's just a crossing table, nothing comes to mind on how to do this.

Any ideas on how this can be optimized?

Thanks.

On request here's the create statement:

CREATE TABLE  `hugeCrossingTable` (
  `field` int(11) NOT NULL,
  `field1` int(11) NOT NULL,
  `field2` double(10,5) DEFAULT NULL,
  `field3` int(4) DEFAULT NULL,
  KEY `field1` (`field1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE  `otherTable` (
  `field` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `field1` int(10) unsigned NOT NULL,
  `field2` int(10) unsigned NOT NULL,
  `field3` decimal(5,2) NOT NULL,
  PRIMARY KEY (`field`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

And the explain output:

id, select_type, table              , type , possible_keys, key     , key_len, ref                , rows, Extra
1 , 'SIMPLE'   , 'otherTable'       , 'ALL', ''           , ''      , ''     , ''                 , 294 , ''
1 , 'SIMPLE'   , 'hugeCrossingTable', 'ref', 'field1'     , 'field1', '4'    , 'otherTable.field2', 69  , 'Using where'
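
The EXPLAIN output above shows a full scan (type ALL) on otherTable, because the join column otherTable.field2 has no index, and every matched row of hugeCrossingTable is fetched from the table just to read field2 and field3. A hedged sketch of indexes that might help (column names are taken from the question; verify against your actual schema):

```sql
-- Index the join column on otherTable (optional: with only ~294 rows,
-- the full scan is cheap anyway).
ALTER TABLE otherTable ADD KEY idx_field2 (field2);

-- A covering index on the 20-million-row table lets the join read
-- field1, field2 and field3 straight from the index pages instead of
-- fetching each row -- this is the change most likely to matter.
ALTER TABLE hugeCrossingTable ADD KEY idx_covering (field1, field2, field3);
```

If the covering index is used, a fresh EXPLAIN should show "Using index" in the Extra column for hugeCrossingTable.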

回答by Jon Black

Here are some InnoDB examples that work on large tables of approx. 60 to 500 million rows, demonstrating the advantages of a well-designed InnoDB table and how best to use clustered indexes (only available with InnoDB):

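Since InnoDB clusters rows physically by primary key, one way to exploit this (a sketch, assuming the combination of field1 and field is unique) is to give the crossing table a composite primary key starting with the join column, so all rows for a given field1 sit on adjacent pages and each join lookup becomes a short range scan:

```sql
-- Hypothetical redesign: cluster the crossing table on the join column.
CREATE TABLE hugeCrossingTable_v2 (
  `field`  int NOT NULL,
  `field1` int NOT NULL,
  `field2` double(10,5) DEFAULT NULL,
  `field3` int DEFAULT NULL,
  PRIMARY KEY (`field1`, `field`)  -- clustered: rows stored in field1 order
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
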
MySQL and NoSQL: Help me to choose the right one

60 million entries, select entries from a certain month. How to optimize database?

Rewriting mysql select to reduce time and writing tmp to disk

You will also want to read the following:

http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html

http://www.xaprb.com/blog/2006/07/04/how-to-exploit-mysql-index-optimizations/

Once you've sorted out your table designs and optimised your innodb config:

http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/

http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/

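As a starting point, the settings most relevant to large InnoDB tables look roughly like the fragment below; the values are illustrative only, and should be tuned to your hardware and workload per the links above:

```ini
# my.cnf sketch -- sizes are examples, not recommendations
[mysqld]
innodb_buffer_pool_size = 2G    # large enough to hold the working set
innodb_log_file_size    = 256M  # larger logs smooth heavy write bursts
```
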
You can try something like:

start transaction;
insert into target_table (x,y) select x,y from source_table order by x,y;
commit;
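
If a single insert-select like the one above is too large for one transaction, a common variation is to load the target in primary-key-ordered chunks. A sketch, where id, target_table and source_table are illustrative names:

```sql
-- Hypothetical chunked load: the application advances @last_id
-- between chunks, repeating until the insert copies no more rows.
set @last_id := 0;

start transaction;
insert into target_table (x, y)
  select x, y from source_table
  where id > @last_id
  order by id
  limit 100000;
commit;
-- next iteration: set @last_id to the largest id copied so far
```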

Hope this helps.
