Why is uint_8 etc. used in C/C++?

Note: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow.

Original URL: http://stackoverflow.com/questions/5054979/
Asked by user855
I've seen some code where they don't use primitive types like int, float, double, etc. directly. They usually typedef them, or use types like uint_8.

Is that really necessary even these days? Or is C/C++ standardized enough that it is preferable to use int, float, etc. directly?
Answered by Charlie Martin
Because types like char, short, int, long, and so forth are ambiguous: they depend on the underlying hardware. Back in the days when C was basically considered an assembler language for people in a hurry, this was okay. Now, in order to write programs that are portable -- which means "programs that mean the same thing on any machine" -- people have built special libraries of typedefs and #defines that allow them to make machine-independent definitions.
The secret code is really quite straightforward. Here, you have uint_8, which is interpreted: u for unsigned, int to say it's treated as a number, and _8 for the size in bits.
In other words, this is an unsigned integer with 8 bits (minimum) or what we used to call, in the mists of C history, an "unsigned char".
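As an aside, the standard spelling of this type is uint8_t, from the C99 header <stdint.h>. A minimal sketch of it in use:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t byte = 255;    /* u = unsigned, int = integer, 8 = width in bits */
        byte++;                /* unsigned arithmetic wraps modulo 256: 255 + 1 is 0 */
        printf("%u\n", (unsigned)byte);   /* prints 0 */
        return 0;
    }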
Answered by R.. GitHub STOP HELPING ICE
uint8_t is rather useless, because due to other requirements in the standard, it exists if and only if unsigned char is 8-bit, in which case you could just use unsigned char. The others, however, are extremely useful. int is (and will probably always be) 32-bit on most modern platforms, but on some ancient stuff it's 16-bit, and on a few rare early 64-bit systems, int is 64-bit. It could also, of course, be various odd sizes on DSPs.
If you want a 32-bit type, use int32_t or uint32_t, and so on. It's a lot cleaner and easier than all the nasty legacy hacks of detecting the sizes of types and trying to use the right one yourself...
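To make the contrast concrete, here is a sketch of the legacy guessing game next to the C99 way; my_u32 and my_u32_clean are made-up names, and the _MSC_VER branch is just one example of the per-compiler detection that used to be necessary:

    /* The old hack: pick a 32-bit type per compiler and hope. */
    #if defined(_MSC_VER)
    typedef unsigned __int32 my_u32;
    #else
    typedef unsigned int my_u32;      /* assumes int is 32-bit on this platform */
    #endif

    /* The clean way since C99: the header has already done the detection. */
    #include <stdint.h>
    typedef uint32_t my_u32_clean;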
Answered by Havoc P
Most code I read, and write, uses the fixed-size typedefs only when the size is an important assumption in the code.
For example if you're parsing a binary protocol that has two 32-bit fields, you should use a typedef guaranteed to be 32-bit, if only as documentation.
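As a sketch of that, assuming a hypothetical protocol that sends two 32-bit fields in big-endian (network) byte order:

    #include <stdint.h>

    /* Hypothetical wire format: two 32-bit big-endian fields. */
    struct wire_header {
        uint32_t length;
        uint32_t checksum;
    };

    /* Assemble a 32-bit value from 4 big-endian bytes, portably. */
    static uint32_t read_be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    static struct wire_header parse_header(const unsigned char *buf) {
        struct wire_header h;
        h.length   = read_be32(buf);
        h.checksum = read_be32(buf + 4);
        return h;
    }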
I'd only use int16 or int64 when the size must be that, say for a binary protocol, or to avoid overflow, or to keep a struct small. Otherwise just use int.
If you're just doing "int i" to use i in a for loop, then I would not write "int32" for that. I would never expect any "typical" (meaning "not weird embedded firmware") C/C++ code to see a 16-bit "int," and the vast majority of C/C++ code out there would implode if faced with 16-bit ints. So if you start to care about "int" being 16 bit, either you're writing code that cares about weird embedded firmware stuff, or you're sort of a language pedant. Just assume "int" is the best int for the platform at hand and don't type extra noise in your code.
Answered by Edwin Buck
C and C++ purposefully don't define the exact size of an int. There are a number of reasons for this, but they're not important in considering this problem.
Since int isn't set to a standard size, those who want a standard size must do a bit of work to guarantee a certain number of bits. The code that defines uint_8 does that work, and without it (or a technique like it) you wouldn't have a means of defining an unsigned 8-bit number.
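As a sketch of the kind of work such a definition involves (the names here are made up; <stdint.h> does this properly for you), a hand-rolled fixed-width typedef usually pairs the typedef with a compile-time check of its assumption:

    #include <limits.h>

    /* Hypothetical hand-rolled fixed-width type. */
    typedef unsigned char my_uint8;

    /* Pre-C11 compile-time assert: the array size becomes -1 (an error)
       unless unsigned char really is exactly 8 bits wide. */
    typedef char assert_uint8_is_8_bits[(CHAR_BIT == 8) ? 1 : -1];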
Answered by Jeremiah Willcock
The sizes of types in C are not particularly well standardized. 64-bit integers are one example: a 64-bit integer could be long long, __int64, or even int on some systems. To get better portability, C99 introduced the <stdint.h> header, which has types like int32_t to get a signed type that is exactly 32 bits; many programs had their own, similar sets of typedefs before that.
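A small sketch of the C99 types in use; the PRId64 format macro comes from the companion header <inttypes.h>:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int64_t big = 9000000000;            /* exactly 64 bits, however the
                                                underlying type is spelled */
        printf("big = %" PRId64 "\n", big);  /* portable format specifier */
        return 0;
    }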
Answered by Seth Johnson
The width of primitive types often depends on the system, not just the C++ standard or compiler. If you want true consistency across platforms when you're doing scientific computing, for example, you should use the specific uint_8 or whatever, so that the same errors (or precision errors for floats) appear on different machines, so that the memory overhead is the same, and so on.
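For instance, a record built from fixed-width types expresses the same layout intent on every platform; a sketch (note that padding between members can still vary, so on-disk or on-wire formats need explicit packing):

    #include <stdint.h>

    struct sample {
        uint32_t id;        /* 4 bytes everywhere */
        int16_t  reading;   /* hypothetical sensor value, 2 bytes everywhere */
        uint8_t  flags;     /* 1 byte everywhere */
    };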
Answered by Tim Martin
C and C++ don't restrict the exact size of the numeric types; the standards only specify a minimum range of values that has to be represented. This means that int can be larger than you expect.
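You can see what a given platform chose via <limits.h>; the standard only promises that INT_MAX is at least 32767:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        printf("int range: %d .. %d (%zu bytes)\n",
               INT_MIN, INT_MAX, sizeof(int));   /* commonly 4 bytes, but not guaranteed */
        return 0;
    }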
The reason for this is that often a particular architecture will have a size for which arithmetic works faster than other sizes. Allowing the implementor to use this size for int, and not forcing it to use a narrower type, may make arithmetic with ints faster.
This isn't going to go away any time soon. Even once servers and desktops are all fully transitioned to 64-bit platforms, mobile and embedded platforms may well be operating with a different integer size. Apart from anything else, you don't know what architectures might be released in the future. If you want your code to be portable, you have to use a fixed-size typedef anywhere that the type size is important to you.