C++ double and stringstream formatting

Notice: this page is a translation of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original source: http://stackoverflow.com/questions/12894824/


double and stringstream formatting

Tags: c++, double

Asked by Guillaume07

double val = 0.1;
std::stringstream ss;
ss << val;
std::string strVal= ss.str();

In the Visual Studio debugger, val has the value 0.10000000000000001 (because 0.1 can't be represented exactly). When val is converted using stringstream, strVal is equal to "0.1". However, when using boost::lexical_cast, the resulting strVal is "0.10000000000000001".

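A minimal, self-contained sketch of this comparison (assuming Boost is available; the exact digits printed by boost::lexical_cast can depend on the Boost version and the platform's double representation):

#include <boost/lexical_cast.hpp>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    double val = 0.1;

    std::stringstream ss;
    ss << val;                       // default stream precision: 6 significant digits
    std::cout << ss.str() << '\n';   // prints "0.1"

    // boost::lexical_cast formats with enough digits to round-trip the double,
    // so it typically yields "0.10000000000000001"
    std::cout << boost::lexical_cast<std::string>(val) << '\n';
}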

Another example is the following:


double val = 12.12305000012;

Under Visual Studio, val appears as 12.123050000119999, and using stringstream with the default precision (6) it becomes 12.1231. I don't really understand why it is not 12.12305(...).

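A self-contained sketch reproducing that observation (the digits shown are what a typical IEEE-754 double gives):

#include <iostream>
#include <sstream>

int main() {
    double val = 12.12305000012;
    std::stringstream ss;
    ss << val;                      // default precision: 6 significant digits
    std::cout << ss.str() << '\n';  // prints "12.1231" (12.12305... rounded to 6 significant figures)
}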

Is there a default precision, or does stringstream have a particular algorithm to convert a double value which can't be exactly represented?


Thanks.


Answered by nickolayratchev

You can change the floating-point precision of a stringstream as follows:


double num = 2.25149;
std::stringstream ss;                            // needs <sstream>; std::setprecision needs <iomanip>
ss << std::setprecision(5) << num << std::endl;
ss << std::setprecision(4) << num << std::endl;
std::cout << ss.str();                           // needs <iostream>

Output:


2.2515
2.251

Note how the numbers are also rounded when appropriate.


Answered by YuZ

For anyone who gets "error: 'setprecision' is not a member of 'std'": you must #include <iomanip>, otherwise setprecision(17) will not work!

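A minimal complete example showing the required header (a sketch; the exact digits depend on the platform's double):

#include <iomanip>   // std::setprecision lives here
#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    ss << std::setprecision(17) << 0.1;
    std::cout << ss.str() << '\n';   // typically "0.10000000000000001"
}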

Answered by David Hammen

The problem occurs at the stream insertion ss << 0.1; rather than at the conversion to string. If you want non-default precision you need to specify this prior to inserting the double:


ss << std::setprecision(17) << val;

On my computer, if I just use setprecision(16) I still get "0.1" rather than "0.10000000000000001". I need a (slightly bogus) precision of 17 to see that final 1.


Addendum
A better demonstration arises with a value of 1.0/3.0. With the default precision you get a string representation of "0.333333". This is not the string equivalent of a double-precision 1/3. Using setprecision(16) makes the string "0.3333333333333333"; a precision of 17 yields "0.33333333333333331".

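A small sketch of that comparison (the digits shown are what a typical IEEE-754 double produces):

#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    const double vals[] = { 0.1, 1.0 / 3.0 };
    for (double v : vals) {
        std::stringstream s16, s17;
        s16 << std::setprecision(16) << v;
        s17 << std::setprecision(17) << v;
        std::cout << s16.str() << "  vs  " << s17.str() << '\n';
    }
    // Typical output:
    //   0.1  vs  0.10000000000000001
    //   0.3333333333333333  vs  0.33333333333333331
}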

Answered by James Kanze

There are two issues you have to consider. The first is the precision parameter, which defaults to 6 (but which you can set to whatever you like). The second is what this parameter means, and that depends on the format option you are using: if you are using fixed or scientific format, it means the number of digits after the decimal point (which in turn has a different effect on what is usually meant by precision in the two formats); if you are using the default format, however (ss.setf( std::ios_base::fmtflags(), std::ios_base::floatfield )), it means the total number of significant digits in the output, regardless of whether the value ends up formatted in scientific or fixed notation. This explains why your display is 12.1231, for example; you're using both the default precision and the default formatting.


You might want to try the following with different values (and maybe different precisions):


double value = 0.1;   // try other values (and other precisions) here

std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default:    " << value << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed:      " << value << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value << std::endl;

Seeing the actual output will probably be clearer than any detailed description:


default:    0.1
fixed:      0.100000
scientific: 1.000000e-01
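To see what the precision parameter counts in each case (significant digits with the default format, digits after the decimal point with fixed or scientific), a sketch along these lines may help; the value and the commented digits are illustrative, assuming an IEEE-754 double:

#include <iomanip>
#include <iostream>

int main() {
    double value = 12.12305000012;

    std::cout << std::setprecision(6);   // the default, made explicit

    std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
    std::cout << "default:    " << value << '\n';   // 12.1231   (6 significant digits)

    std::cout << std::fixed
              << "fixed:      " << value << '\n';   // 12.123050 (6 digits after the point)

    std::cout << std::scientific
              << "scientific: " << value << '\n';   // 1.212305e+01
}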