C: Difference between char and int when declaring a character

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/37241364/

Date: 2020-09-02 10:25:25  Source: igfitidea

Difference between char and int when declaring character

c

Asked by TruthOrDare

I just started learning C and am rather confused over declaring characters using int and char.

I am well aware that characters are made up of integers, in the sense that each character is represented by its respective ASCII code.

That said, I learned that it's perfectly possible to declare a character using int without using the ASCII code. E.g., declaring the variable test as the character 'X' can be written as:

char test = 'X';

and

int test = 'X';

And for both declarations, the printf conversion specifier is %c (even though test is defined as an int).

Therefore, my question is: what are the differences between declaring character variables using char and int, and when should int be used to declare a character variable?

Answered by Serge Ballesta

The difference is the size in bytes of the variable, and consequently the range of values the variable can hold.

A char is required to accept all values between 0 and 127 (inclusive). So in common environments it occupies exactly one byte (8 bits). Whether plain char is signed (-128 to 127) or unsigned (0 to 255) is left implementation-defined by the standard.

An int is required to be at least a 16-bit signed word, and to accept all values between -32767 and 32767. That means that an int can accept all values from a char, whether the latter is signed or unsigned.

If you want to store only characters in a variable, you should declare it as char. Using an int would just waste memory, and could mislead a future reader. One common exception to that rule is when you want to process a wider value for special conditions. For example, the function fgetc from the standard library is declared as returning int:

int fgetc(FILE *fd);

because the special value EOF (for End Of File) is defined as the int value -1 (all bits set to one in a two's-complement system), which requires more than the size of a char. That way no char (only 8 bits on a common system) can be equal to the EOF constant. If the function were declared to return a plain char, nothing could distinguish the EOF value from the (valid) char 0xFF.

That's the reason why the following code is bad and should never be used:

char c;    // a terrible memory saving...
...
while ((c = fgetc(stdin)) != EOF) {   // NEVER WRITE THAT!!!
    ...
}

Inside the loop, a char would be enough, but for the comparison with EOF to work correctly even when the character 0xFF is read, the variable needs to be an int.

Answered by MicroVirus

The char type has multiple roles.

char类型具有多个角色。

The first is that it is simply part of the chain of integer types (char, short, int, long, and so on), so it's just another container for numbers.

The second is that its underlying storage is the smallest addressable unit, and all other objects have a size that is a multiple of the size of char (sizeof returns a number in units of char, so sizeof(char) == 1).

The third is that it plays the role of a character in a string, at least historically. Seen this way, the value of a char maps to a specified character, for instance via the ASCII encoding, but it can also be used with multi-byte encodings (one or more chars together map to one character).

Answered by Henrik Carlqvist

Usually you should declare characters as char and use int for integers that need to hold larger values. On most systems a char occupies one byte, which is 8 bits. Depending on your system, char might be signed or unsigned by default, so it will be able to hold values in the range 0 to 255 or -128 to 127.

An int might be 32 bits long, but if you really want exactly 32 bits for your integer you should declare it as int32_t or uint32_t instead.

Answered by Viktor Simkó

The size of an int is 4 bytes on most architectures, while the size of a char is 1 byte.

Answered by cdonts

I think there's no functional difference, but you're allocating extra memory you're not going to use. You could also write const long a = 1;, but it is more suitable to use const char a = 1; instead.
