如何对 PHP 脚本的效率进行基准测试

Notice: this page is a mirror of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). StackOverflow source: http://stackoverflow.com/questions/8291366/

Date: 2020-08-26 04:22:03 · Source: igfitidea

How to benchmark efficiency of PHP script

Tags: php, performance, benchmarking, microtime

Asked by eric

I want to know what is the best way to benchmark my PHP scripts. It does not matter if it is a cron job, a webpage, or a web service.

I know I can use microtime, but does it really give me the real execution time of a PHP script?

I want to test and benchmark different functions in PHP that do the same thing. For example, preg_match vs strpos, or domdocument vs preg_match, or preg_replace vs str_replace.

Example of a webpage:

<?php
// login.php

$start_time = microtime(TRUE);

session_start(); 
// do all my logic etc...

$end_time = microtime(TRUE);

echo $end_time - $start_time;

This will output: 0.0146126717 (it varies all the time, but that is the last one I got). This means it took 0.015 seconds or so to execute the PHP script.

Is there a better way?

Accepted answer by James Butler

If you actually want to benchmark real world code, use tools like Xdebug and XHProf.

Xdebug is great for when you're working in dev/staging, and XHProf is a great tool for production and it's safe to run it there (as long as you read the instructions). The results of any one single page load aren't going to be as relevant as seeing how your code performs while the server is getting hammered to do a million other things as well and resources become scarce. This raises another question: are you bottlenecking on CPU? RAM? I/O?

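One quick way to see whether a script is CPU-bound is to compare wall-clock time against CPU time. A minimal sketch, assuming a Unix-like system where getrusage() is available:

```php
<?php
// microtime() measures elapsed wall-clock time; getrusage() reports the
// CPU time this process actually consumed. A big gap between the two
// suggests the script is waiting on I/O rather than burning CPU.
$wallStart  = microtime(true);
$usageStart = getrusage();

// Stand-in workload: usleep() burns wall time but almost no CPU.
usleep(100000);

$wallElapsed = microtime(true) - $wallStart;
$usageEnd    = getrusage();
$cpuElapsed  = ($usageEnd['ru_utime.tv_sec'] - $usageStart['ru_utime.tv_sec'])
             + ($usageEnd['ru_utime.tv_usec'] - $usageStart['ru_utime.tv_usec']) / 1e6;

printf("wall: %.3fs, cpu: %.3fs\n", $wallElapsed, $cpuElapsed);
```

On a real request you would wrap the suspect section instead of usleep(); if CPU time tracks wall time closely you are CPU-bound, otherwise look at I/O or contention.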
You also need to look beyond just the code you are running in your scripts to how your scripts/pages are being served. What web server are you using? As an example, I can make nginx + PHP-FPM seriously outperform mod_php + Apache, which in turn gets trounced for serving static content by a good CDN.

The next thing to consider is what you are trying to optimise for.

  • Is the speed with which the page renders in the users browser the number one priority?
  • Is the goal to turn each request to the server around as quickly as possible, with the smallest CPU consumption?

The former can be helped by doing things like gzipping all resources sent to the browser, yet doing so could (in some circumstances) push you further away from achieving the latter.

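To make that tradeoff concrete, here is a small sketch measuring what gzip compression saves in bytes versus what it costs in CPU time (it assumes the zlib extension, which is bundled with most PHP builds):

```php
<?php
// gzip shrinks the payload sent to the browser, but costs CPU on every
// request - exactly the tension between the two goals above.
$payload = str_repeat('The quick brown fox jumps over the lazy dog. ', 500);

$start      = microtime(true);
$compressed = gzencode($payload, 6); // level 6 is a common default
$cpuCost    = microtime(true) - $start;

printf("raw: %d bytes, gzipped: %d bytes, compression took %.5fs\n",
    strlen($payload), strlen($compressed), $cpuCost);
```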
Hopefully all of the above can help show that carefully isolated 'lab' testing will not reflect the variables and problems that you will encounter in production, and that you must identify what your high level goal is and then what you can do to get there, before heading off down the micro/premature-optimisation route to hell.

Answer by Book Of Zeus

To benchmark how fast your complete script runs on the server, there are plenty of tools you can use. First, make sure the scripts you are comparing (preg_match vs strpos, for example) output the same results, in order to qualify your test.

You can use:

Answer by Alec Gorge

You will want to look at Xdebug and, more specifically, Xdebug's profiling capabilities.

Basically, you enable the profiler, and every time you load a webpage it creates a cachegrind file that can be read with WinCacheGrind or KCacheGrind.

Xdebug can be a bit tricky to configure, so here is the relevant section of my php.ini for reference:

[XDebug]
zend_extension = h:\xampp\php\ext\php_xdebug-2.1.1-5.3-vc6.dll
xdebug.remote_enable=true
xdebug.profiler_enable_trigger=1
xdebug.profiler_output_dir=h:\xampp\cachegrind
xdebug.profiler_output_name=callgrind.%t_%R.out

And here is a screenshot of a .out file in WinCacheGrind:

[screenshot: WinCacheGrind displaying the profiler output]

That should provide ample details about how efficient your PHP script is. You want to target the things that take the most time. For example, you could optimize one function to take half the time, but your efforts would be better spent optimizing a function that is called dozens if not hundreds of times during a page load.

If you are curious, this is just an old version of a CMS I wrote for my own use.

Answer by fotuzlab

Try https://github.com/fotuzlab/appgati

It allows you to define steps in the code and reports the time, memory usage, server load, etc. between two steps.

Something like:

    $appgati->Step('1');

    // Do some code ...

    $appgati->Step('2');

    $report = $appgati->Report('1', '2');
    print_r($report);

Sample output array:

Array
(
    [Clock time in seconds] => 1.9502429962158
    [Time taken in User Mode in seconds] => 0.632039
    [Time taken in System Mode in seconds] => 0.024001
    [Total time taken in Kernel in seconds] => 0.65604
    [Memory limit in MB] => 128
    [Memory usage in MB] => 18.237907409668
    [Peak memory usage in MB] => 19.579357147217
    [Average server load in last minute] => 0.47
    [Maximum resident shared size in KB] => 44900
    [Integral shared memory size] => 0
    [Integral unshared data size] => 0
    [Integral unshared stack size] => 
    [Number of page reclaims] => 12102
    [Number of page faults] => 6
    [Number of block input operations] => 192
    [Number of block output operations] => 
    [Number of messages sent] => 0
    [Number of messages received] => 0
    [Number of signals received] => 0
    [Number of voluntary context switches] => 606
    [Number of involuntary context switches] => 99
)
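The same step-and-report idea can be sketched in a few lines of plain PHP. The class below is a hypothetical stand-in (not AppGati's real API beyond the Step/Report shape shown above), just to illustrate the pattern:

```php
<?php
// Hypothetical minimal stand-in for the step/report pattern: record a
// timestamp and a memory reading at each step, then diff two steps.
class StepTimer
{
    private $steps = [];

    public function step($label)
    {
        $this->steps[$label] = [
            'time'   => microtime(true),
            'memory' => memory_get_usage(true),
        ];
    }

    public function report($from, $to)
    {
        return [
            'Clock time in seconds' => $this->steps[$to]['time'] - $this->steps[$from]['time'],
            'Memory delta in MB'    => ($this->steps[$to]['memory'] - $this->steps[$from]['memory']) / 1048576,
        ];
    }
}

$timer = new StepTimer();
$timer->step('1');
usleep(50000); // stand-in for real work
$timer->step('2');

$report = $timer->report('1', '2');
print_r($report);
```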

Answer by Till

I'd look into xhprof. It doesn't matter if it's run on the cli or via another sapi (like fpm or fcgi or even the Apache module).

The best part about xhprof is that it's even fit enough to be run in production, something that doesn't work as well with Xdebug (last time I checked). Xdebug has an impact on performance, while xhprof (I wouldn't say there is none) manages a lot better.

We frequently use xhprof to collect samples with real traffic and then analyze the code from there.

It's not really a benchmark in the sense that it gets you a time and all that, though it does that as well. It just makes it very easy to analyze production traffic and then drill down to the PHP function level in the collected callgraph.

Once the extension is compiled and loaded you start profiling in the code with:

xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

To stop:

$xhprof_data = xhprof_disable();

Then save the data to a file, or a database - whatever floats your boat and doesn't interrupt the usual runtime. We asynchronously push this to S3 to centralize the data (to be able to see all runs from all of our servers).

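A hedged sketch of that save step: serialize() the profile array to a file so it can be read back later. (The xhprof_html UI expects runs saved via its own helper classes; this only shows the bare-bones idea.)

```php
<?php
// In real code, $xhprof_data would come from xhprof_disable(); a
// placeholder sample is used here so the sketch runs without the extension.
$xhprof_data = ['main()' => ['ct' => 1, 'wt' => 1234]]; // ct = calls, wt = wall time

$file = sys_get_temp_dir() . '/' . uniqid('run_') . '.myapp.xhprof';
file_put_contents($file, serialize($xhprof_data));

// Later - e.g. after pulling runs down from S3 - read the profile back:
$restored = unserialize(file_get_contents($file));
```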
The code on GitHub contains an xhprof_html folder which you dump on the server, and with minimal configuration you can visualize the collected data and start drilling down.

HTH!

Answer by Alasdair

Put it in a for loop to do each thing 1,000,000 times to get a more realistic number. And only start the timer just before the code you actually want to benchmark, then record the end time just after (i.e. don't start the timer before session_start()).

Also make sure the code is identical for each function you want to benchmark, with the exception of the function you are timing.

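That advice might look something like this sketch comparing strpos against preg_match on the same input (the inputs and iteration count are just illustrative):

```php
<?php
// Loop-based micro-benchmark: same input for both candidates, and the
// timer is started only around the code under test.
$haystack   = 'The quick brown fox jumps over the lazy dog';
$iterations = 100000;

// Sanity check: both approaches must agree before timing means anything.
$foundStrpos = strpos($haystack, 'fox') !== false;
$foundPreg   = preg_match('/fox/', $haystack) === 1;
assert($foundStrpos === $foundPreg);

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    strpos($haystack, 'fox');
}
$strposTime = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    preg_match('/fox/', $haystack);
}
$pregTime = microtime(true) - $start;

printf("strpos: %.4fs, preg_match: %.4fs over %d iterations\n",
    $strposTime, $pregTime, $iterations);
```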
How the script is executed (cronjob, php from commandline, Apache, etc.) should not make a difference since you are only timing the relative difference between the speed of the different functions. So this ratio should remain the same.

If the computer on which you are running the benchmark has many other things going on, this could affect the benchmark results if there happens to be a spike in CPU or memory usage from another application while your benchmark is running. But as long as you have a lot of resources to spare on the computer then I don't think this will be a problem.

Answer by goat

A good start is using Xdebug's profiler: http://xdebug.org/docs/profiler

Maybe not the easiest thing to set up and use, but once you get it going, the sheer volume of data and ease of viewing are irreplaceable.

Answer by Ritesh Aryal

It is also good to keep an eye on your PHP code and cross-check with this link, in order to make sure that your coding itself is not potentially disturbing the performance of the app.

Answer by TerryE

Eric,

You are asking yourself the wrong question. If your script is executing in ~15 mSec then its time is largely irrelevant. If you run on a shared service then PHP image activation will take ~100 mSec, reading in the script files ~30-50 mSec if fully cached on the server, possibly 1 or more seconds if being loaded in from a backend NAS farm. Network delays on loading the page furniture can add lots of seconds.

The main issue here is the user's perception of load time: how long does he or she have to wait between clicking on the link and getting a fully rendered page. Have a look at Google Page Speed, which you can use as a Firefox or Chrome extension, and the PageSpeed documentation, which discusses in depth how to get good page performance. Follow these guidelines and try to get your page to score better than 90/100. (The Google home page scores 99/100, as does my blog.) This is the best way to get good user-perceived performance.
