sqlite 表中的最大行数

声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow 原文地址: http://stackoverflow.com/questions/1546947/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow

时间:2020-09-08 07:29:42  来源:igfitidea

Maximum number of rows in a sqlite table

database sqlite sizing

提问by volatilevoid

Given a simple sqlite3 table (create table data (key PRIMARY KEY, value)) with a key size of 256 bytes and a value size of 4096 bytes, what is the limit (ignoring disk space limits) on the maximum number of rows in this sqlite3 table? Are there limits associated with the OS (win32, linux or Mac)?

给出一个简单的 sqlite3 表(create table data (key PRIMARY KEY, value)),键大小为 256 字节,值大小为 4096 字节,此 sqlite3 表中最大行数的限制(忽略磁盘空间限制)是多少?是否存在与操作系统(win32、linux 或 Mac)相关的限制?
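
For reference, a minimal sketch of the schema described in the question, using Python's built-in sqlite3 module (the 256-byte key and 4096-byte value are just illustrative sizes, and data.db is a made-up file name):

import os
import sqlite3

# The table from the question: untyped key/value columns are valid in SQLite.
conn = sqlite3.connect("data.db")
conn.execute("create table if not exists data (key PRIMARY KEY, value)")

# One row with roughly the sizes mentioned in the question.
key = os.urandom(128).hex()      # 256-character hex string (~256 bytes)
value = os.urandom(2048).hex()   # 4096-character hex string (~4096 bytes)
conn.execute("insert into data (key, value) values (?, ?)", (key, value))
conn.commit()
conn.close()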

采纳答案by Nick Dandoulakis

In SQLite3 the field size isn't fixed. The engine will commit as much space as needed for each cell.

在 SQLite3 中,字段大小不是固定的。引擎会为每个单元格分配所需的空间。
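
A quick way to see this in action, sketched with Python's built-in sqlite3 module (an in-memory database, purely for illustration): the same column stores a 4-byte value and a ~1 MB value side by side, each taking only the space it needs.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table data (key PRIMARY KEY, value)")
conn.execute("insert into data values ('a', 'tiny')")
conn.execute("insert into data values ('b', ?)", ("x" * 1_000_000,))  # ~1 MB string

# length() reports the stored size of each cell.
for key, size in conn.execute("select key, length(value) from data"):
    print(key, size)   # a 4, then b 1000000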

For the file limits see this SO question:
What are the performance characteristics of sqlite with very large database files?

对于文件限制,请参阅此问题:
具有非常大的数据库文件的 sqlite 的性能特征是什么?

回答by aculich

As of Jan 2017 the sqlite3 limits page defines the practical limits to this question based on the maximum size of the database, which is 140 terabytes:

截至 2017 年 1 月,sqlite3 限制页面根据数据库的最大大小(140 TB)定义了对此问题的实际限制:

Maximum Number Of Rows In A Table

The theoretical maximum number of rows in a table is 2^64 (18446744073709551616 or about 1.8e+19). This limit is unreachable since the maximum database size of 140 terabytes will be reached first. A 140 terabytes database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.

表中的最大行数

表中的理论最大行数为 2^64(18446744073709551616 或约 1.8e+19)。由于将首先达到 140 TB 的最大数据库大小,因此无法达到此限制。一个 140 TB 的数据库可以容纳不超过大约 1e+13 行,并且只有在没有索引并且每行包含很少数据的情况下。

So with a max database size of 140 terabytes you'd be lucky to get ~1 trillion rows, since if you actually had a useful table with data in it, the number of rows would be constrained by the size of the data. You could probably have up to 10s of billions of rows in a 140 TB database.

因此,对于 140 TB 的最大数据库大小,能达到约 1 万亿行就算幸运了,因为如果表里确实存有有用的数据,行数就会受到数据本身大小的限制。在 140 TB 的数据库中,您大概最多可以存放数百亿行。
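
A back-of-the-envelope check of that estimate for the schema in the question (ignoring page, record and index overhead, and using decimal terabytes):

max_db_bytes = 140 * 10**12      # 140 TB maximum database size
bytes_per_row = 256 + 4096       # 256-byte key + 4096-byte value from the question

print(max_db_bytes // bytes_per_row)   # ~3.2e10, i.e. roughly 32 billion rows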

回答by John

I have a SQLite database 3.3 GB in size with 25 million rows of stored numeric logs, and I do calculations on them; it works fast and well.

我有 3.3 GB 大小的 SQLite 数据库,其中存储了 2500 万行数字日志并对其进行了计算,它运行速度快且运行良好。

回答by Fidel

I have a 7.5GB SQLite database which stores 10.5 million rows. Querying is fast as long as you have the correct indexes. To get inserts to run quickly, you should use transactions. Also, I found it's better to create the indexes after all rows have been inserted. Otherwise the insert speed is quite slow.

我有一个 7.5GB 的 SQLite 数据库,其中存储了 1050 万行。只要有正确的索引,查询就会很快。为了让插入快速运行,您应该使用事务。此外,我发现最好在插入所有行之后再创建索引,否则插入速度会很慢。
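
A sketch of that insert pattern with Python's built-in sqlite3 module (logs.db, the logs table and the idx_logs_ts index are made-up names): do the bulk insert inside a single transaction, then build the index afterwards.

import sqlite3

conn = sqlite3.connect("logs.db")
conn.execute("create table if not exists logs (ts INTEGER, value REAL)")

rows = ((i, i * 0.5) for i in range(1_000_000))
with conn:  # one transaction around the whole bulk insert
    conn.executemany("insert into logs (ts, value) values (?, ?)", rows)

# Creating the index after loading is usually much faster than
# maintaining it row by row during the inserts.
conn.execute("create index if not exists idx_logs_ts on logs (ts)")
conn.close()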

回答by DigitalRoss

The answer you want is right here.

您想要的答案就在这里

Each OS you mentioned supports multiple file system types. The actual limits will be per-filesystem, not per-OS. It's difficult to summarize the constraint matrix on SO, but while some file systems impose limits on file sizes, all major OS kernels today support a file system with extremely large files.

您提到的每个操作系统都支持多种文件系统类型。实际限制取决于具体的文件系统,而不是操作系统。很难在 SO 上把这个约束矩阵一一列举,不过尽管某些文件系统对文件大小有限制,如今所有主流操作系统内核都支持可以容纳超大文件的文件系统。

The maximum page size of an sqlite3 db is quite large, 65536 bytes, and the maximum page count is on the order of 2^32 pages, although reaching those limits requires some configuration. I presume an index must specify a page number, but the result is likely to be that an OS or environment limit is reached first.

sqlite3 数据库的最大页面大小为 65536 字节,最大页数约为 2^32 页,不过要达到这些上限需要一些配置。我认为索引必须指定页号,但结果很可能是先达到操作系统或环境的限制。
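
Those knobs can be inspected and adjusted with PRAGMAs; a small sketch (big.db is a made-up file name, and page_size only takes effect on a new or vacuumed database):

import sqlite3

conn = sqlite3.connect("big.db")
conn.execute("PRAGMA page_size = 65536")           # must be set before the database is first written
conn.execute("create table if not exists t (x)")   # materialize the file

print(conn.execute("PRAGMA page_size").fetchone())      # (65536,) on a fresh database
print(conn.execute("PRAGMA page_count").fetchone())     # pages currently in the file
print(conn.execute("PRAGMA max_page_count").fetchone()) # upper bound on the file size, in pages
conn.close()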

回答by Martin Beckett

Essentially no real limits

基本上没有真正的限制

see http://www.sqlite.org/limits.html for details

有关详细信息,请参阅http://www.sqlite.org/limits.html

回答by Erick

No limits, but basically after a certain point the sqlite database will become useless. PostgreSQL is by far the top free database for huge databases. In my case, that point is at about 1 million rows on my Linux 64-bit quad-core dual-processor computer with 8 GB RAM and Raptor hard disks. PostgreSQL is unbeatable, even by a tuned MySQL database. (Posted in 2011.)

没有限制,但基本上过了某个临界点之后,sqlite 数据库就会变得不好用。对于超大型数据库,PostgreSQL 无疑是最好的免费数据库。就我而言,在我那台配备 8GB 内存和 Raptor 硬盘的 Linux 64 位四核双处理器计算机上,这个临界点大约在 100 万行左右。PostgreSQL 无可匹敌,即使是经过调优的 MySQL 数据库也比不上。(发表于 2011 年。)