C++ “在编译时分配的内存”的真正含义是什么?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/21350478/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverFlow
What does "Memory allocated at compile time" really mean?
提问by Talha Sayed
In programming languages like C and C++, people often refer to static and dynamic memory allocation. I understand the concept but the phrase "All memory was allocated (reserved) during compile time" always confuses me.
在 C 和 C++ 等编程语言中,人们经常提到静态和动态内存分配。我理解这个概念,但是“在编译时分配(保留)所有内存”这句话总是让我感到困惑。
Compilation, as I understand it, converts high level C/C++ code to machine language and outputs an executable file. How is memory "allocated" in a compiled file ? Isn't memory always allocated in the RAM with all the virtual memory management stuff ?
据我所知,编译是把高级的 C/C++ 代码转换成机器语言并输出一个可执行文件。内存是如何在一个编译好的文件里被“分配”的?内存不总是在 RAM 中、由各种虚拟内存管理机制来分配的吗?
Isn't memory allocation by definition a runtime concept ?
根据定义,内存分配不是运行时概念吗?
If I make a 1KB statically allocated variable in my C/C++ code, will that increase the size of the executable by the same amount ?
如果我在 C/C++ 代码中创建一个 1KB 的静态分配变量,可执行文件的大小会相应增加 1KB 吗?
This is one of the pages where the phrase is used under the heading "Static allocation".
这是在“静态分配”标题下使用该短语的页面之一。
回答by Manu343726
Memory allocated at compile-time means the compiler resolves at compile-time where certain things will be allocated inside the process memory map.
在编译时分配的内存,意思是编译器在编译时就确定了某些内容将被分配在进程内存映射中的什么位置。
For example, consider a global array:
例如,考虑一个全局数组:
int array[100];
The compiler knows at compile-time the size of the array and the size of an int, so it knows the entire size of the array at compile-time. Also, a global variable has static storage duration by default: it is allocated in the static memory area of the process memory space (.data/.bss section). Given that information, the compiler decides during compilation at what address of that static memory area the array will be.
编译器在编译时知道数组的长度和 int 的大小,因此它在编译时就知道整个数组的大小。此外,全局变量默认具有静态存储期:它被分配在进程内存空间的静态内存区域(.data/.bss 段)。有了这些信息,编译器在编译期间就决定了该数组会位于这块静态内存区域中的哪个地址。
Of course those memory addresses are virtual addresses. The program assumes that it has its own entire memory space (from 0x00000000 to 0xFFFFFFFF, for example). That's why the compiler can make assumptions like "Okay, the array will be at address 0x00A33211". At runtime those addresses are translated to real/hardware addresses by the MMU and the OS.
当然,这些内存地址是虚拟地址。程序假定它拥有自己完整的内存空间(例如从 0x00000000 到 0xFFFFFFFF)。这就是编译器可以作出“好,这个数组将位于地址 0x00A33211”这类假设的原因。在运行时,MMU 和操作系统会把这些地址转换为真实的硬件地址。
Value initialized static storage things are a bit different. For example:
值初始化静态存储的东西有点不同。例如:
int array[] = { 1 , 2 , 3 , 4 };
In our first example, the compiler only decided where the array will be allocated, storing that information in the executable.
In the case of value-initialized things, the compiler also injects the initial value of the array into the executable, and adds code which tells the program loader that after the array allocation at program start, the array should be filled with these values.
在我们的第一个例子中,编译器只决定数组的分配位置,并将该信息存储在可执行文件中。
在值初始化的情况下,编译器还将数组的初始值注入到可执行文件中,并添加代码告诉程序加载器在程序启动时分配数组后,应该用这些值填充数组。
Here are two examples of the assembly generated by the compiler (GCC 4.8.1 with an x86 target):
以下是编译器(GCC 4.8.1,x86 目标)生成的汇编代码的两个例子:
C++ code:
C++代码:
int a[4];
int b[] = { 1 , 2 , 3 , 4 };
int main()
{}
Output assembly:
输出的汇编代码:
a:
.zero 16
b:
.long 1
.long 2
.long 3
.long 4
main:
pushq %rbp
movq %rsp, %rbp
movl $0, %eax
popq %rbp
ret
As you can see, the values are directly injected into the assembly. For the array a, the compiler generates a zero-initialization of 16 bytes, because the Standard says that objects with static storage should be zero-initialized by default:
如您所见,这些值被直接写进了汇编代码。对于数组 a,编译器生成了 16 字节的零初始化,因为标准规定具有静态存储期的对象默认应当被零初始化:
8.5.9 (Initializers) [Note]:
Every object of static storage duration is zero-initialized at program startup before any other initialization takes place. In some cases, additional initialization is done later.
8.5.9 (Initializers) [注意]:
在任何其他初始化发生之前,每个静态存储期的对象在程序启动时都被零初始化。在某些情况下,额外的初始化会在稍后完成。
I always suggest that people disassemble their code to see what the compiler really does with the C++ code. This applies to everything from storage classes/duration (like this question) to advanced compiler optimizations. You could instruct your compiler to generate the assembly yourself, but there are wonderful tools on the Internet that do this in a friendly manner. My favourite is GCC Explorer.
我总是建议大家反汇编自己的代码,看看编译器对这些 C++ 代码究竟做了什么。从存储类/存储期(比如这个问题)到高级的编译器优化都适用。你可以让编译器自己生成汇编代码,不过互联网上也有一些非常好用的工具可以友好地完成这件事。我最喜欢的是 GCC Explorer。
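If you want to try this locally instead of online, here is a minimal sketch (assuming a typical GCC or Clang install; the file name is made up):
如果你想在本地而不是在线尝试,可以用下面这个最小示例(假设装有常见的 GCC 或 Clang,文件名是虚构的):
// storage.cpp -- compile with: g++ -S -O0 storage.cpp   (writes the assembly to storage.s)
int zeroed[4];                  // expected to show up as a .zero / .bss reservation
int filled[] = { 1, 2, 3, 4 };  // expected to show up as .long directives with the values
int main() { return 0; }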
回答by mah
Memory allocated at compile time simply means there will be no further allocation at run time -- no calls to malloc, new, or other dynamic allocation methods. You'll have a fixed amount of memory usage even if you don't need all of that memory all of the time.
在编译时分配的内存仅仅意味着在运行时不会有进一步的分配——不会调用 malloc、new 或其他动态分配方法。即使您并不总是需要所有内存,您也会使用固定数量的内存。
Isn't memory allocation by definition a runtime concept ?
根据定义,内存分配不是运行时概念吗?
The memory is not in use prior to run time, but immediately prior to execution starting, its allocation is handled by the system.
内存在运行时间之前未被使用,但在执行开始之前,它的分配由系统处理。
If I make a 1KB statically allocated variable in my C/C++ code, will that increase the size of the executable by the same amount ?
如果我在我的 C/C++ 代码中创建一个 1KB 静态分配的变量,这会增加可执行文件的大小吗?
Simply declaring the static will not increase the size of your executable more than a few bytes. Declaring it with an initial value that is non-zero will (in order to hold that initial value). Rather, the linker simply adds this 1KB amount to the memory requirement that the system's loader creates for you immediately prior to execution.
仅仅声明这个静态变量,最多只会让可执行文件增大几个字节;而用非零初始值来声明它则会(因为需要保存这个初始值)。除此之外,链接器只是把这 1KB 加到系统加载器在程序执行前为你建立的内存需求中。
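A rough sketch of that point (behaviour assumed for a typical GCC/Clang toolchain, not guaranteed by the language): the zero-filled array is usually recorded only as a size for .bss, while the explicitly initialized one carries its bytes in .data, so only the second one noticeably grows the file.
下面用一个小例子示意这一点(假设使用常见的 GCC/Clang 工具链,语言本身并不保证这种行为):零填充的静态数组通常只在 .bss 中记录一个大小,而显式初始化的数组会把内容存进 .data,因此只有后者会明显增大文件。
static char zero_filled[1024];                 // roughly: a few bytes of metadata in the executable
static char with_values[1024] = { 1, 2, 3 };   // roughly: ~1 KB of initial data stored in the executable
int main() { return zero_filled[0] + with_values[0]; }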
回答by fede1024
Memory allocated in compile time means that when you load the program, some part of the memory will be immediately allocated and the size and (relative) position of this allocation is determined at compile time.
编译时分配的内存是指在加载程序时,会立即分配一部分内存,并且该分配的大小和(相对)位置在编译时确定。
char a[32];
char b;
char c;
Those 3 variables are "allocated at compile time"; it means that the compiler calculates their size (which is fixed) at compile time. The variable a will be an offset in memory, let's say, pointing to address 0, b will point at address 33 and c at 34 (supposing no alignment optimization). So, allocating 1KB of static data will not increase the size of your code, since it will just change an offset inside it. The actual space will be allocated at load time.
这 3 个变量是“在编译时分配的”,意思是编译器在编译时就算出了它们(固定)的大小。变量 a 将是内存中的一个偏移量,比方说指向地址 0,b 将指向地址 33,c 指向地址 34(假设没有对齐优化)。因此,分配 1KB 的静态数据并不会增加代码的大小,因为它只是改变了其中的一个偏移量。实际的空间是在加载时分配的。
Real memory allocation always happens in run time, because the kernel needs to keep track of it and to update its internal data structures (how much memory is allocated for each process, pages and so on). The difference is that the compiler already knows the size of each data you are going to use and this is allocated as soon as your program is executed.
真正的内存分配总是发生在运行时,因为内核需要跟踪它并更新它的内部数据结构(为每个进程、页面等分配了多少内存)。不同之处在于编译器已经知道您将要使用的每个数据的大小,并且在您的程序执行后立即分配。
Remember also that we are talking about relative addresses. The real address where the variable will be located will be different. At load time the kernel will reserve some memory for the process, let's say at address x, and all the hard-coded addresses contained in the executable file will be incremented by x bytes, so that variable a in the example will be at address x, b at address x+33, and so on.
还要记住,我们谈论的是相对地址。变量实际所在的真实地址会有所不同。在加载时,内核会为进程保留一段内存,比如说起始于地址 x,可执行文件中包含的所有硬编码地址都会加上 x 字节,因此示例中的变量 a 会位于地址 x,b 位于地址 x+33,依此类推。
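A hedged sketch of that idea: the offset between two statics is fixed when the program is built, while the absolute addresses depend on where the loader places the process image (and can change between runs with ASLR). The pointer subtraction below is for illustration only.
下面简单示意这个想法:两个静态变量之间的偏移量在构建程序时就固定了,而它们的绝对地址取决于加载器把进程镜像放在哪里(开启 ASLR 时每次运行都可能不同)。下面的指针相减仅作演示用途。
#include <cstdio>

char a[32];
char b;

int main() {
    std::printf("a at %p, b at %p, offset %td\n",
                static_cast<void*>(a), static_cast<void*>(&b), &b - a);
    return 0;
}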
回答by Elias Van Ootegem
Adding variables on the stack that take up N bytes doesn't (necessarily) increase the bin's size by N bytes. It will, in fact, add but a few bytes most of the time.
Let's start off with an example of how adding 1000 chars to your code will increase the bin's size in a linear fashion.
在栈上添加占用 N 个字节的变量并不(一定)会让二进制文件增大 N 个字节。实际上,大多数情况下它只会增加几个字节。
让我们先从一个例子开始,看看在什么情况下向代码里添加 1000 个字符会让二进制文件的大小线性增长。
If the 1k is a string, of a thousand chars, which is declared like so
如果 1k 是一千个字符的字符串,则声明如下
const char *c_string = "Here goes a thousand chars...999";    //implicit \0 at end
and you then were to vim your_compiled_bin, you'd actually be able to see that string in the bin somewhere. In that case, yes: the executable will be 1 KB bigger, because it contains the string in full.
If, however, you allocate an array of ints, chars or longs on the stack and assign it in a loop, something along these lines:
然后你用 vim your_compiled_bin 打开编译出来的二进制文件,你确实能在其中某个地方看到那个字符串。在这种情况下,答案是肯定的:可执行文件会大出 1 KB,因为它完整地包含了这个字符串。
但是,如果你是在栈上分配一个 int、char 或 long 的数组,并在循环里给它赋值,大致像下面这样:
int big_arr[1000];
for (int i=0;i<1000;++i) big_arr[i] = some_computation_func(i);
then, no: it won't increase the bin... by 1000*sizeof(int)
Allocation at compile time means what you've now come to understand it means (based on your comments): the compiled bin contains information the system requires to know how much memory what function/block will need when it gets executed, along with information on the stack size your application requires. That's what the system will allocate when it executes your bin, and your program becomes a process (well, the executing of your bin is the process that... well, you get what I'm saying).
Of course, I'm not painting the full picture here: the bin contains information about how big a stack the bin will actually be needing. Based on this information (among other things), the system will reserve a chunk of memory, called the stack, that the program gets sort of free rein over. Stack memory is still allocated by the system when the process (the result of your bin being executed) is initiated. The process then manages the stack memory for you. When a function or loop (any type of block) is invoked/gets executed, the variables local to that block are pushed to the stack, and they are removed (the stack memory is "freed", so to speak) to be used by other functions/blocks. So declaring int some_array[100]
will only add a few bytes of additional information to the bin, that tells the system that function X will be requiring 100*sizeof(int)
+ some book-keeping space extra.
那么,答案是否定的:它不会让二进制文件增大 1000*sizeof(int)。
编译时分配的含义正如你现在(根据你的评论)所理解的那样:编译出的二进制文件里包含系统所需的信息,即各个函数/代码块在执行时需要多少内存,以及你的应用程序需要多大的栈。系统在执行你的二进制文件、让你的程序变成一个进程时,就会按照这些信息进行分配(好吧,执行你的二进制文件的那个过程就是……嗯,你明白我的意思)。
当然,我这里并没有描绘出全貌:二进制文件里还包含它实际需要多大的栈的信息。基于这些信息(以及其他一些信息),系统会保留一块称为栈的内存,程序可以相对自由地使用它。栈内存仍然是在进程(也就是你的二进制文件被执行的结果)启动时由系统分配的,之后由这个进程替你管理这块栈内存。当一个函数或循环(任何类型的代码块)被调用/执行时,该块的局部变量被压入栈中;当它们被移除时(可以说这块栈内存被“释放”了),就可以供其他函数/代码块使用。所以,声明 int some_array[100] 只会向二进制文件里添加几个字节的附加信息,告诉系统函数 X 需要 100*sizeof(int) 再加上一些簿记用的额外空间。
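A minimal sketch of that last point (the function name is made up): the local array below only affects the stack-frame size recorded for this function; the 100*sizeof(int) bytes are reserved on the stack each time it is called, not stored in the executable.
下面用一个最小示例说明最后这一点(函数名是虚构的):这个局部数组只影响为该函数记录的栈帧大小;这 100*sizeof(int) 个字节是在每次调用时在栈上保留的,并不存放在可执行文件里。
int sum_of_squares() {
    int some_array[100];                 // reserved on the stack when the function runs
    for (int i = 0; i < 100; ++i)
        some_array[i] = i * i;
    int total = 0;
    for (int i = 0; i < 100; ++i)
        total += some_array[i];
    return total;
}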
回答by supercat
On many platforms, all of the global or static allocations within each module will be consolidated by the compiler into three or fewer consolidated allocations (one for uninitialized data (often called "bss"), one for initialized writable data (often called "data"), and one for constant data ("const")), and all of the global or static allocations of each type within a program will be consolidated by the linker into one global allocation for each type. For example, assuming int is four bytes, a module has the following as its only static allocations:
在许多平台上,每个模块中的所有全局或静态分配都会被编译器合并为至多三个分配(一个用于未初始化的数据(通常称为“bss”),一个用于已初始化的可写数据(通常称为“data”),一个用于常量数据(“const”)),而程序中每种类型的所有全局或静态分配又会被链接器合并为每种类型一个全局分配。例如,假设 int 为四个字节,某个模块仅有以下静态分配:
int a;
const int b[6] = {1,2,3,4,5,6};
char c[200];
const int d = 23;
int e[4] = {1,2,3,4};
int f;
it would tell the linker that it needed 208 bytes for bss, 16 bytes for "data", and 28 bytes for "const". Further, any reference to a variable would be replaced with an area selector and offset, so a, b, c, d, e, and f would be replaced by bss+0, const+0, bss+4, const+24, data+0, or bss+204, respectively.
它会告诉链接器,它需要 208 个字节的 bss、16 个字节的“data”以及 28 个字节的“const”。此外,对变量的任何引用都会被替换成一个区域选择符加偏移量,因此 a、b、c、d、e 和 f 会分别被替换为 bss+0、const+0、bss+4、const+24、data+0 和 bss+204。
When a program is linked, all of the bss areas from all the modules are concatenated together; likewise the data and const areas. For each module, the address of any bss-relative variables will be increased by the size of all preceding modules' bss areas (again, likewise with data and const). Thus, when the linker is done, any program will have one bss allocation, one data allocation, and one const allocation.
当一个程序被链接时,所有模块的 bss 区域会被拼接在一起;data 和 const 区域也是如此。对于每个模块,任何相对于 bss 的变量地址都会加上前面所有模块 bss 区域的总大小(data 和 const 同理)。因此,链接器完成后,任何程序都只有一个 bss 分配、一个 data 分配和一个 const 分配。
When a program is loaded, one of four things will generally happen depending upon the platform:
加载程序时,通常会根据平台发生以下四种情况之一:
The executable will indicate how many bytes it needs for each kind of data and--for the initialized data area, where the initial contents may be found. It will also include a list of all the instructions which use a bss-, data-, or const- relative address. The operating system or loader will allocate the appropriate amount of space for each area and then add the starting address of that area to each instruction which needs it.
The operating system will allocate a chunk of memory to hold all three kinds of data, and give the application a pointer to that chunk of memory. Any code which uses static or global data will dereference it relative to that pointer (in many cases, the pointer will be stored in a register for the lifetime of an application).
The operating system will initially not allocate any memory to the application, except for what holds its binary code, but the first thing the application does will be to request a suitable allocation from the operating system, which it will forevermore keep in a register.
The operating system will initially not allocate space for the application, but the application will request a suitable allocation on startup (as above). The application will include a list of instructions with addresses that need to be updated to reflect where memory was allocated (as with the first style), but rather than having the application patched by the OS loader, the application will include enough code to patch itself.
可执行文件会指明每种数据各需要多少字节,并且对于已初始化的数据区域,还会指明初始内容可以在哪里找到。它还会包含一份使用 bss、data 或 const 相对地址的所有指令的列表。操作系统或加载器会为每个区域分配适当大小的空间,然后把该区域的起始地址加到每条需要它的指令上。
操作系统将分配一块内存来保存所有三种数据,并为应用程序提供一个指向该内存块的指针。任何使用静态或全局数据的代码都将相对于该指针取消引用它(在许多情况下,该指针将在应用程序的生命周期内存储在寄存器中)。
操作系统最初不会为应用程序分配任何内存,除了保存其二进制代码的内存,但应用程序所做的第一件事将是向操作系统请求合适的分配,它将永远保存在寄存器中。
操作系统最初不会为应用程序分配空间,但应用程序会在启动时请求合适的分配(同上)。应用程序会包含一份指令列表,这些指令的地址需要更新以反映内存实际分配到的位置(与第一种方式一样);但不是由操作系统加载器来修补应用程序,而是应用程序自身包含足够的代码来修补自己。
All four approaches have advantages and disadvantages. In every case, however, the compiler will consolidate an arbitrary number of static variables into a fixed small number of memory requests, and the linker will consolidate all of those into a small number of consolidated allocations. Even though an application will have to receive a chunk of memory from the operating system or loader, it is the compiler and linker which are responsible for allocating individual pieces out of that big chunk to all the individual variables that need it.
所有四种方法都有优点和缺点。但是,在每种情况下,编译器都会将任意数量的静态变量合并到固定的少量内存请求中,而链接器会将所有这些合并到少量的合并分配中。尽管应用程序必须从操作系统或加载程序接收一块内存,但编译器和链接器负责将大块中的各个部分分配给需要它的所有单独变量。
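A hedged two-file sketch of that consolidation (the file names and symbols are invented for illustration): the compiler turns each module's statics into a few per-section requests, and the linker resolves cross-module references to section-relative addresses.
下面用两个文件粗略示意这种合并(文件名和符号都是为说明而虚构的):编译器把每个模块的静态分配归并成少数几个按节划分的请求,链接器再把跨模块的引用解析为相对于各节的地址。
// module_a.cpp
int counter;                    // uninitialized -> ends up in the consolidated bss area
int limits[4] = { 1, 2, 3, 4 }; // initialized   -> ends up in the consolidated data area
const int version = 7;          // constant      -> ends up in the consolidated const/rodata area

// module_b.cpp
extern int counter;             // resolved by the linker to "bss area + offset of counter"
int next_id() { return ++counter; }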
回答by Jules
The core of your question is this: "How is memory "allocated" in a compiled file? Isn't memory always allocated in the RAM with all the virtual memory management stuff? Isn't memory allocation by definition a runtime concept?"
您的问题的核心是:“内存是如何在一个编译好的文件里被‘分配’的?内存不总是在 RAM 中、由各种虚拟内存管理机制来分配的吗?根据定义,内存分配难道不是一个运行时的概念吗?”
I think the problem is that there are two different concepts involved in memory allocation. At its most basic, memory allocation is the process by which we say "this item of data is stored in this specific chunk of memory". In a modern computer system, this involves a two-step process:
我认为问题在于内存分配涉及两个不同的概念。从根本上讲,内存分配是我们所说的“该数据项存储在该特定内存块中”的过程。在现代计算机系统中,这涉及两个步骤:
- Some system is used to decide the virtual address at which the item will be stored
- The virtual address is mapped to a physical address
- 一些系统用于决定项目将被存储的虚拟地址
- 虚拟地址映射到物理地址
The latter process is purely run time, but the former can be done at compile time, if the data have a known size and a fixed number of them is required. Here's basically how it works:
后一个过程纯粹发生在运行时;而前一个过程,如果数据的大小已知且需要的数量固定,就可以在编译时完成。它的基本工作方式如下:
The compiler sees a source file containing a line that looks a bit like this:
int c;
It produces output for the assembler that instructs it to reserve memory for the variable 'c'. This might look like this:
global _c
section .bss
_c: resb 4
When the assembler runs, it keeps a counter that tracks the offset of each item from the start of a memory 'segment' (or 'section'). This is like the parts of a very large 'struct' that contains everything in the entire file; it doesn't have any actual memory allocated to it at this time, and could be anywhere. It notes in a table that _c has a particular offset (say 510 bytes from the start of the segment) and then increments its counter by 4, so the next such variable will be at (e.g.) 514 bytes. For any code that needs the address of _c, it just puts 510 in the output file, and adds a note that the output needs the address of the segment that contains _c added to it later.
The linker takes all of the assembler's output files and examines them. It determines an address for each segment so that they won't overlap, and adds the offsets necessary so that instructions still refer to the correct data items. In the case of uninitialized memory like that occupied by c (the assembler was told that the memory would be uninitialized by the fact that the compiler put it in the '.bss' segment, which is a name reserved for uninitialized memory), it includes a header field in its output that tells the operating system how much needs to be reserved. It may be relocated (and usually is), but it is usually designed to be loaded more efficiently at one particular memory address, and the OS will try to load it at this address. At this point, we have a pretty good idea what virtual address will be used by c.
The physical address will not actually be determined until the program is running. However, from the programmer's perspective the physical address is actually irrelevant; we'll never even find out what it is, because the OS doesn't usually bother telling anyone, it can change frequently (even while the program is running), and a main purpose of the OS is to abstract this away anyway.
编译器看到一个源文件,其中包含如下一行:
int c;
它为汇编器生成输出,指示它为变量“c”保留内存。这可能如下所示:
global _c
section .bss
_c: resb 4
当汇编器运行时,它会维护一个计数器,跟踪每个条目相对于某个内存“段”(或“节”)起始位置的偏移量。这有点像一个非常大的“结构体”的各个成员,它包含了整个文件中的所有内容;此时并没有为它分配任何实际内存,它可以位于任何地方。汇编器在一个表中记录 _c 具有某个特定的偏移量(比如距段起始处 510 字节),然后把计数器加 4,这样下一个这样的变量就会位于(例如)514 字节处。对于任何需要 _c 地址的代码,它只是把 510 写入输出文件,并附上一条说明:稍后需要把包含 _c 的那个段的地址加到这个值上。
链接器获取汇编器的所有输出文件并进行检查。它为每个段确定一个地址,使它们不会重叠,并加上必要的偏移量,使指令仍然引用正确的数据项。对于像 c 所占用的这种未初始化内存(编译器把它放进了“.bss”段,这个名字专门用于未初始化的内存,汇编器因此知道这块内存不需要初始化),链接器会在输出中包含一个头部字段,告诉操作系统需要预留多少空间。它可能会被重定位(通常如此),但通常被设计为在某个特定的内存地址加载效率最高,操作系统会尝试在这个地址加载它。到这里,我们已经相当清楚 c 将会使用的虚拟地址是什么了。
物理地址直到程序运行时才会真正确定。不过,从程序员的角度来看,物理地址其实无关紧要:我们甚至永远不会知道它是什么,因为操作系统通常懒得告诉任何人,它可能经常变化(甚至在程序运行期间),而且操作系统的一个主要目的本来就是把这些抽象掉。
回答by meaning-matters
An executable describes what space to allocate for static variables. This allocation is done by the system when you run the executable. So your 1kB static variable won't increase the size of the executable by 1kB:
可执行文件描述了需要为静态变量分配多少空间。这个分配是在你运行可执行文件时由系统完成的。所以你的 1kB 静态变量并不会让可执行文件增大 1kB:
static char buf[1024];
Unless of course you specify an initializer:
当然,除非你指定了一个初始化器:
static char buf[1024] = { 1, 2, 3, 4, ... };
So, in addition to 'machine language' (i.e. CPU instructions), an executable contains a description of the required memory layout.
因此,除了“机器语言”(即 CPU 指令)之外,可执行文件还包含对所需内存布局的描述。
回答by exebook
Memory can be allocated in many ways:
内存可以通过多种方式分配:
- in application heap (whole heap is allocated for your app by OS when the program starts)
- in operating system heap (so you can grab more and more)
- in garbage collector controlled heap (same as both above)
- on stack (so you can get a stack overflow)
- reserved in code/data segment of your binary (executable)
- in remote place (file, network - and you receive a handle not a pointer to that memory)
- 在应用程序堆中(当程序启动时,整个堆由操作系统分配给您的应用程序)
- 在操作系统堆中(所以你可以抓取越来越多)
- 在垃圾收集器控制的堆中(与上述相同)
- 在堆栈上(所以你可以得到堆栈溢出)
- 在二进制文件的代码/数据段中保留(可执行文件)
- 在远程位置(文件,网络 - 您收到一个句柄而不是指向该内存的指针)
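A brief illustration of three of the storage kinds listed above (illustrative only, not an exhaustive mapping):
下面简单演示上面列出的其中三种存储方式(仅作示意,并非完整对应):
int in_data_segment[256];                // reserved in the binary's data/bss segment

void storage_demo() {
    int on_the_stack[16];                // lives in this function's stack frame
    on_the_stack[0] = 1;

    int* on_the_heap = new int[16];      // requested from the heap while the program runs
    on_the_heap[0] = in_data_segment[0] + on_the_stack[0];
    delete[] on_the_heap;
}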
Now your question is what "memory allocated at compile time" means. Definitely it is just an incorrectly phrased saying, which is supposed to refer to either binary segment allocation or stack allocation, or in some cases even to a heap allocation, but in that case the allocation is hidden from the programmer's eyes by an invisible constructor call. Or probably the person who said that just wanted to say that memory is not allocated on the heap, but did not know about stack or segment allocations. (Or did not want to go into that kind of detail.)
现在您的问题是什么是“在编译时分配的内存”。绝对这只是一个措辞不正确的说法,它应该指的是二进制段分配或堆栈分配,或者在某些情况下甚至是堆分配,但在这种情况下,分配是通过不可见的构造函数调用隐藏在程序员眼中的。或者说这话的人可能只是想说内存不是在堆上分配的,但不知道堆栈或段分配。(或者不想深入讨论那种细节)。
But in most cases person just wants to say that the amount of memory being allocated is known at compile time.
但在大多数情况下,人们只想说正在分配的内存量在编译时是已知的。
The binary size will only change when the memory is reserved in the code or data segment of your app.
只有在应用程序的代码或数据段中保留了内存时,二进制大小才会更改。
回答by Yves Daoust
You are right. Memory is actually allocated (paged) at load time, i.e. when the executable file is brought into (virtual) memory. Memory can also be initialized on that moment. The compiler just creates a memory map. [By the way, stack and heap spaces are also allocated at load time !]
你是对的。内存实际上是在加载时分配(分页),即当可执行文件被带入(虚拟)内存时。内存也可以在那一刻被初始化。编译器只是创建一个内存映射。[顺便说一下,堆栈和堆空间也在加载时分配!]
回答by jmoreno
I think you need to step back a bit. Memory allocated at compile time... What can that mean? Can it mean that memory on chips that have not yet been manufactured, for computers that have not yet been designed, is somehow being reserved? No. No time travel, no compilers that can manipulate the universe.
我认为你需要退后一步。在编译时分配的内存……这是什么意思?这是否意味着,尚未制造出来的芯片上的内存、为尚未设计出来的计算机,以某种方式被预留了?不。没有时间旅行,也没有能够操纵宇宙的编译器。
So, it must mean that the compiler generates instructions to allocate that memory somehow at runtime. But if you look at it from the right angle, the compiler generates all instructions, so what can the difference be? The difference is that the compiler decides, and at runtime, your code can not change or modify its decisions. If it decided it needed 50 bytes at compile time, at runtime, you can't make it decide to allocate 60 -- that decision has already been made.
因此,这一定意味着编译器生成指令以在运行时以某种方式分配该内存。但是如果你从正确的角度看它,编译器会生成所有指令,那么有什么区别。不同之处在于编译器决定,而在运行时,您的代码无法更改或修改其决定。如果它决定在编译时需要 50 个字节,在运行时,你不能让它决定分配 60 个——这个决定已经做出了。
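A short contrast sketch of that point (the names are invented): the first size is fixed when the compiler runs; the second is chosen while the program runs.
下面用一个小例子对比这一点(名字是虚构的):前者的大小在编译器运行时就已固定,后者的大小则要到程序运行时才决定。
#include <cstddef>

char fixed_buffer[50];                   // the compiler decided on 50 bytes; that cannot change at run time

char* make_buffer(std::size_t n) {
    return new char[n];                  // n is only known at run time; release later with delete[]
}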