C#: Why are flag enums usually defined with hexadecimal values

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/13222671/

Why are flag enums usually defined with hexadecimal values

c# .net enums enum-flags

Asked by Adi Lester

A lot of times I see flag enum declarations that use hexadecimal values. For example:

[Flags]
public enum MyEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}

When I declare an enum, I usually declare it like this:

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1,
    Flag2 = 2,
    Flag3 = 4,
    Flag4 = 8,
    Flag5 = 16
}

Is there a reason or rationale for why some people choose to write the value in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and to accidentally write Flag5 = 0x16 instead of Flag5 = 0x10.

Accepted answer by exists-forall

Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters. Using it hints at the fact that that's the situation we're in now.

Also, I'm not sure about C#, but I know that in C x << y is a valid compile-time constant. Using bit shifts seems the clearest:

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1 << 0,
    Flag2 = 1 << 1,
    Flag3 = 1 << 2,
    Flag4 = 1 << 3,
    Flag5 = 1 << 4
}
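
The same trick works in C#, where shifting constant operands is itself a compile-time constant expression, so the declaration above compiles as written. Below is a minimal usage sketch, assuming the MyEnum declared above; the Program class and the chosen flag combination are illustrative, not part of the original answer:

using System;

public static class Program
{
    public static void Main()
    {
        // Combine flags with | and let [Flags] produce a readable ToString.
        MyEnum value = MyEnum.Flag1 | MyEnum.Flag3;

        Console.WriteLine(value);                        // Flag1, Flag3
        Console.WriteLine((int)value);                   // 5
        Console.WriteLine(value.HasFlag(MyEnum.Flag3));  // True
    }
}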

Answer by Oded

It makes it easy to see that these are binary flags.

None  = 0x0,  // == 00000
Flag1 = 0x1,  // == 00001
Flag2 = 0x2,  // == 00010
Flag3 = 0x4,  // == 00100
Flag4 = 0x8,  // == 01000
Flag5 = 0x10  // == 10000

Though the progression makes it even clearer:

Flag6 = 0x20  // == 00100000
Flag7 = 0x40  // == 01000000
Flag8 = 0x80  // == 10000000
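
A quick way to see those bit patterns for yourself is to print each member in base 2. A small self-contained sketch (the BinaryDump class name is just for illustration):

using System;

[Flags]
public enum MyEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}

public static class BinaryDump
{
    public static void Main()
    {
        foreach (MyEnum flag in Enum.GetValues(typeof(MyEnum)))
        {
            // Convert.ToString(value, 2) renders the value in base 2.
            string bits = Convert.ToString((int)flag, 2).PadLeft(5, '0');
            Console.WriteLine($"{flag,-5} = 0x{(int)flag:X2} = {bits}");
        }
    }
}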

Answer by Jonathon Reinhart

Because [Flags] means that the enum is really a bitfield. With [Flags] you can use the bitwise AND (&) and OR (|) operators to combine the flags. When dealing with binary values like this, it is almost always clearer to use hexadecimal values. This is the very reason we use hexadecimal in the first place: each hex character corresponds to exactly one nibble (four bits). With decimal, this 1-to-4 mapping does not hold true.
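
For context, here is a small sketch of the bitfield usage described above, combining flags with | and masking with &; the Permissions enum and the demo class are illustrative, not from the original answer:

using System;

[Flags]
public enum Permissions
{
    None    = 0x0,
    Read    = 0x1,
    Write   = 0x2,
    Execute = 0x4
}

public static class BitfieldDemo
{
    public static void Main()
    {
        Permissions access = Permissions.Read | Permissions.Write; // OR combines flags

        bool canRead    = (access & Permissions.Read)    != 0;     // AND tests a flag
        bool canExecute = (access & Permissions.Execute) != 0;
        Console.WriteLine(canRead);    // True
        Console.WriteLine(canExecute); // False

        access &= ~Permissions.Write;  // AND with the complement clears a flag
        Console.WriteLine(access);     // Read
    }
}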

Answer by usr

Because there is a mechanical, simple way to double a power of two in hex. In decimal, this is hard; it requires long multiplication in your head. In hex it is a simple change. You can carry this out all the way up to 1UL << 63, which you can't do in decimal.
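
A sketch of what that mechanical doubling looks like; the loop below (illustrative, not from the original answer) prints every power of two up to 1UL << 63, and the hex column simply cycles through 1, 2, 4, 8 while gaining digits, unlike the decimal column:

using System;

public static class DoublingDemo
{
    public static void Main()
    {
        // Each step doubles the previous value; the last one is 1UL << 63.
        for (int i = 0; i < 64; i++)
        {
            ulong value = 1UL << i;
            Console.WriteLine($"1UL << {i,2} = 0x{value:X} = {value}");
        }
    }
}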

Answer by Only You

Because it is easier for humans to follow where the bits are in the flag. Each hexadecimal digit corresponds to exactly four binary bits.

0x0 = 0000
0x1 = 0001
0x2 = 0010
0x3 = 0011

... and so on

0xF = 1111

Typically you want your flags not to overlap bits; the easiest way of doing and visualizing this is to use hexadecimal values to declare your flags.

So, if you need flags with 16 bits you will use 4-digit hexadecimal values, and that way you can avoid erroneous values:

0x0001 //= 1     = 0000 0000 0000 0001
0x0002 //= 2     = 0000 0000 0000 0010
0x0004 //= 4     = 0000 0000 0000 0100
0x0008 //= 8     = 0000 0000 0000 1000
...
0x0010 //= 16    = 0000 0000 0001 0000
0x0020 //= 32    = 0000 0000 0010 0000
...
0x8000 //= 32768 = 1000 0000 0000 0000
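
One way to guard against slips like the 0x16-vs-0x10 mistake from the question is a small sanity check that every flag is a single bit and that no two flags overlap. A sketch under those assumptions (the MyFlags enum and the FlagCheck class are illustrative):

using System;
using System.Linq;

[Flags]
public enum MyFlags : ushort
{
    A = 0x0001,
    B = 0x0002,
    C = 0x0004,
    D = 0x0008,
    E = 0x0010,
    F = 0x8000
}

public static class FlagCheck
{
    public static void Main()
    {
        ushort[] values = Enum.GetValues(typeof(MyFlags))
                              .Cast<MyFlags>()
                              .Select(f => (ushort)f)
                              .ToArray();

        foreach (ushort v in values)
        {
            // A single-bit flag is a power of two: exactly one bit set.
            bool singleBit = v != 0 && (v & (v - 1)) == 0;
            Console.WriteLine($"0x{v:X4} is a single bit: {singleBit}");
        }

        // No two distinct flags should share a bit.
        bool overlap = false;
        for (int i = 0; i < values.Length; i++)
            for (int j = i + 1; j < values.Length; j++)
                if ((values[i] & values[j]) != 0)
                    overlap = true;

        Console.WriteLine($"Overlapping flags: {overlap}");
    }
}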

Answer by VRonin

I think it's just because the sequence is always 1, 2, 4, 8 and then you add a 0.
As you can see:

0x1 = 1 
0x2 = 2
0x4 = 4
0x8 = 8
0x10 = 16
0x20 = 32
0x40 = 64
0x80 = 128
0x100 = 256
0x200 = 512
0x400 = 1024
0x800 = 2048

And so on: as long as you remember the sequence 1-2-4-8, you can build all the subsequent flags without having to remember the powers of 2.
