
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2749441/

Time: 2020-08-25 07:33:01  Source: igfitidea

Fastest way possible to read contents of a file

Tags: php, file-io

提问by SoLoGHoST

OK, I'm looking for the fastest possible way to read the entire contents of a file via PHP, given a filepath on the server; these files can also be huge. So it's very important that it does a READ ONLY on them as fast as possible.


Is reading it line by line faster than reading the entire contents at once? Though, I remember reading somewhere that reading the entire contents can produce errors for huge files. Is this true?


回答by Pascal MARTIN

If you want to load the full content of a file into a PHP variable, the easiest (and probably fastest) way would be file_get_contents().


But, if you are working with big files, loading the whole file into memory might not be such a good idea: you'll probably end up with a memory_limit error, as PHP will not allow your script to use more than (usually) a couple of megabytes of memory.



So, even if it's not the fastest solution, reading the file line by line (fopen + fgets + fclose) and working with those lines on the fly, without loading the whole file into memory, might be necessary...
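A minimal sketch of that fopen + fgets + fclose pattern, wrapped in a function for clarity (the function name and the line-counting work are just for illustration):

```php
<?php
// Stream a file line by line: only one line is ever held in memory,
// so this works for files far larger than the memory_limit.
function count_lines($path)
{
    $handle = fopen($path, 'r');
    if ($handle === false) {
        return false;
    }

    $count = 0;
    while (($line = fgets($handle)) !== false) {
        // Work with $line on the fly here; counting is just a placeholder.
        $count++;
    }

    fclose($handle);
    return $count;
}
```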

回答by Alix Axel

file_get_contents() is the most optimized way to read files in PHP, however - since you're reading the file into memory - you're always limited by the amount of memory available.


You can issue an ini_set('memory_limit', -1) if you have the right permissions, but you'll still be limited by the amount of memory available on your system; this is common to all programming languages.


The only solution is to read the file in chunks; for that you can use file_get_contents() with the fourth and fifth arguments ($offset and $maxlen, specified in bytes):


string file_get_contents(string $filename[, bool $use_include_path = false[, resource $context[, int $offset = -1[, int $maxlen = -1]]]])
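As a rough sketch of chunked reading with those two parameters (the function name and the default chunk size are arbitrary choices, not from the answer):

```php
<?php
// Read a file in fixed-size chunks via $offset/$maxlen, so at most
// one chunk is held in memory at a time.
function read_in_chunks($path, $chunkSize = 8192)
{
    $size = filesize($path);
    $result = '';

    for ($offset = 0; $offset < $size; $offset += $chunkSize) {
        $chunk = file_get_contents($path, false, null, $offset, $chunkSize);
        if ($chunk === false) {
            break;
        }
        // Process $chunk here; it is concatenated only for demonstration,
        // which would defeat the purpose with a genuinely huge file.
        $result .= $chunk;
    }

    return $result;
}
```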

Here is an example where I use this technique to serve large download files:


public function Download($path, $speed = null)
{
    if (is_file($path) === true)
    {
        set_time_limit(0);

        // discard any open output buffers so the file is streamed directly
        while (ob_get_level() > 0)
        {
            ob_end_clean();
        }

        $size = sprintf('%u', filesize($path));

        // $speed is an optional throttle in KB/s; without it, the whole
        // file is served in a single chunk
        $speed = (is_int($speed) === true) ? intval($speed) * 1024 : $size;

        header('Expires: 0');
        header('Pragma: public');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . $size);
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        header('Content-Transfer-Encoding: binary');

        // read and send $speed bytes per second
        for ($i = 0; $i < $size; $i = $i + $speed)
        {
            // ph()->HTTP->Flush() and ph()->HTTP->Sleep() are helpers from
            // my own framework: Flush() echoes and flushes the chunk,
            // Sleep() pauses for the given number of seconds
            ph()->HTTP->Flush(file_get_contents($path, false, null, $i, $speed));
            ph()->HTTP->Sleep(1);
        }

        exit();
    }

    return false;
}

Another option is to use the less optimized fopen(), feof(), fgets() and fclose() functions, especially if you care about getting whole lines at once. Here is another example I provided in another StackOverflow question, for importing large SQL queries into the database:


function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);

    if (is_file($file) === true)
    {
        $file = fopen($file, 'r');

        if (is_resource($file) === true)
        {
            $query = array();

            while (feof($file) === false)
            {
                $query[] = fgets($file);

                if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1)
                {
                    $query = trim(implode('', $query));

                    // note: the mysql_* extension was removed in PHP 7;
                    // use mysqli_query() or PDO on current PHP versions
                    if (mysql_query($query) === false)
                    {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    }

                    else
                    {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }

                    while (ob_get_level() > 0)
                    {
                        ob_end_flush();
                    }

                    flush();
                }

                if (is_string($query) === true)
                {
                    $query = array();
                }
            }

            return fclose($file);
        }
    }

    return false;
}

Which technique you use will really depend on what you're trying to do (as you can see with the SQL import function and the download function), but you'll always have to read the data in chunks.


回答by Sanjay Khatri

$file_handle = fopen("myfile", "r");
while (!feof($file_handle)) {
   $line = fgets($file_handle);
   echo $line;
}
fclose($file_handle);
  1. Open the file and store the handle in $file_handle as a reference to the file itself.
  2. Check whether you are already at the end of the file.
  3. Keep reading the file until you are at the end, printing each line as you read it.
  4. Close the file.

回答by Sarfraz

You could use file_get_contents


Example:


$homepage = file_get_contents('http://www.example.com/');
echo $homepage;

回答by ACME Squares

Use fpassthru or readfile. Both use constant memory with increasing file size.


http://raditha.com/wiki/Readfile_vs_include

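A minimal sketch of serving a file with readfile(), which streams it straight to the output buffer without loading it all into memory (the function name and the header set are illustrative, not from the answer):

```php
<?php
// Stream a file to the client with constant memory usage.
// readfile() returns the number of bytes read, or false on failure.
function stream_file($path)
{
    if (!is_file($path)) {
        return false;
    }

    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($path));

    return readfile($path);
}
```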

回答by ashraf mohammed

foreach (new SplFileObject($filepath) as $lineNumber => $lineContent) {

    echo $lineNumber."==>".$lineContent;  
    //process your operations here
}

回答by ppostma1

If you're not worried about memory and file size,


$lines = file($path);

$lines is then the array of the file.


回答by zaf

Reading the whole file in one go is faster.


But huge files may eat up all your memory and cause problems. Then your safest bet is to read line by line.

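If you want to verify the speed claim on your own files, a rough benchmark sketch along these lines can compare the two approaches (the helper names here are made up; timings will vary by system):

```php
<?php
// Time a reader callback against a file. microtime(true) returns a
// float timestamp in seconds, so the result is the elapsed time.
function time_read(callable $reader, $path)
{
    $start = microtime(true);
    $reader($path);
    return microtime(true) - $start;
}

// Read the whole file in one go.
$wholeFile = function ($path) {
    return file_get_contents($path);
};

// Read the same file line by line.
$lineByLine = function ($path) {
    $handle = fopen($path, 'r');
    $data = '';
    while (($line = fgets($handle)) !== false) {
        $data .= $line;
    }
    fclose($handle);
    return $data;
};
```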

回答by Keka_Umans

You could try cURL (http://php.net/manual/en/book.curl.php).


Although you might want to check first, since it has its limits as well:


$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch); // whole page as a string
curl_close($ch);