Stack, Static, and Heap in C++

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/408670/

Date: 2020-08-27 15:11:31  Source: igfitidea

Stack, Static, and Heap in C++

Tags: c++, static, garbage-collection, stack, heap

Asked by Hai

I've searched, but I still don't understand these three concepts very well. When do I have to use dynamic allocation (on the heap), and what is its real advantage? What are the problems with static and stack allocation? Could I write an entire application without allocating variables on the heap?


I heard that other languages incorporate a "garbage collector" so you don't have to worry about memory. What does the garbage collector do?


What could you do by manipulating memory yourself that you couldn't do with a garbage collector?


Once someone said to me that with this declaration:


int * asafe=new int;

I have a "pointer to a pointer". What does it mean? And how is it different from:


asafe=new int;

?


Answered by markets

A similar question was asked, but it didn't ask about statics.


Summary of what static, heap, and stack memory are:


  • A static variable is basically a global variable, even if you cannot access it globally. Usually there is an address for it that is in the executable itself. There is only one copy for the entire program. No matter how many times you go into a function call (or class) (and in how many threads!) the variable is referring to the same memory location.

  • The heap is a bunch of memory that can be used dynamically. If you want 4kb for an object then the dynamic allocator will look through its list of free space in the heap, pick out a 4kb chunk, and give it to you. Generally, the dynamic memory allocator (malloc, new, etc.) starts at the end of memory and works backwards.

  • Explaining how a stack grows and shrinks is a bit outside the scope of this answer, but suffice to say you always add and remove from the end only. Stacks usually start high and grow down to lower addresses. You run out of memory when the stack meets the dynamic allocator somewhere in the middle (but refer to physical versus virtual memory and fragmentation). Multiple threads will require multiple stacks (the process generally reserves a minimum size for the stack).


When you would want to use each one:


  • Statics/globals are useful for memory that you know you will always need and you know that you don't ever want to deallocate. (By the way, embedded environments may be thought of as having only static memory... the stack and heap are part of a known address space shared by a third memory type: the program code. Programs will often do dynamic allocation out of their static memory when they need things like linked lists. But regardless, the static memory itself (the buffer) is not itself "allocated", but rather other objects are allocated out of the memory held by the buffer for this purpose. You can do this in non-embedded as well, and console games will frequently eschew the built in dynamic memory mechanisms in favor of tightly controlling the allocation process by using buffers of preset sizes for all allocations.)

  • Stack variables are useful for when you know that as long as the function is in scope (on the stack somewhere), you will want the variables to remain. Stacks are nice for variables that you need for the code where they are located, but which isn't needed outside that code. They are also really nice for when you are accessing a resource, like a file, and want the resource to automatically go away when you leave that code.

  • Heap allocations (dynamically allocated memory) is useful when you want to be more flexible than the above. Frequently, a function gets called to respond to an event (the user clicks the "create box" button). The proper response may require allocating a new object (a new Box object) that should stick around long after the function is exited, so it can't be on the stack. But you don't know how many boxes you would want at the start of the program, so it can't be a static.


Garbage Collection


I've heard a lot lately about how great Garbage Collectors are, so maybe a bit of a dissenting voice would be helpful.


Garbage Collection is a wonderful mechanism for when performance is not a huge issue. I hear GCs are getting better and more sophisticated, but the fact is, you may be forced to accept a performance penalty (depending upon use case). And if you're lazy, it still may not work properly. At the best of times, Garbage Collectors realize that your memory goes away when it realizes that there are no more references to it (see reference counting). But, if you have an object that refers to itself (possibly by referring to another object which refers back), then reference counting alone will not indicate that the memory can be deleted. In this case, the GC needs to look at the entire reference soup and figure out if there are any islands that are only referred to by themselves. Offhand, I'd guess that to be an O(n^2) operation, but whatever it is, it can get bad if you are at all concerned with performance. (Edit: Martin B points out that it is O(n) for reasonably efficient algorithms. That is still O(n) too much if you are concerned with performance and can deallocate in constant time without garbage collection.)


Personally, when I hear people say that C++ doesn't have garbage collection, my mind tags that as a feature of C++, but I'm probably in the minority. Probably the hardest thing for people to learn about programming in C and C++ are pointers and how to correctly handle their dynamic memory allocations. Some other languages, like Python, would be horrible without GC, so I think it comes down to what you want out of a language. If you want dependable performance, then C++ without garbage collection is the only thing this side of Fortran that I can think of. If you want ease of use and training wheels (to save you from crashing without requiring that you learn "proper" memory management), pick something with a GC. Even if you know how to manage memory well, it will save you time which you can spend optimizing other code. There really isn't much of a performance penalty anymore, but if you really need dependable performance (and the ability to know exactly what is going on, when, under the covers) then I'd stick with C++. There is a reason that every major game engine that I've ever heard of is in C++ (if not C or assembly). Python, et al are fine for scripting, but not the main game engine.


Answered by Johannes Schaub - litb

The following is of course all not quite precise. Take it with a grain of salt when you read it :)


Well, the three things you refer to are automatic, static and dynamic storage duration, which has something to do with how long objects live and when they begin life.




Automatic storage duration


You use automatic storage duration for short-lived and small data that is needed only locally within some block:


if(some condition) {
    int a[3]; // array a has automatic storage duration
    fill_it(a);
    print_it(a);
}

The lifetime ends as soon as we exit the block, and it starts as soon as the object is defined. They are the most simple kind of storage duration, and are way faster than in particular dynamic storage duration.




Static storage duration


You use static storage duration for free variables, which may be accessed by any code at all times if their scope allows such usage (namespace scope); for local variables that need to extend their lifetime across exits of their scope (local scope); and for member variables that need to be shared by all objects of their class (class scope). Their lifetime depends on the scope they are in: they can have namespace scope, local scope, or class scope. What is true of all of them is that once their life begins, it ends at the end of the program. Here are two examples:


// static storage duration. in global namespace scope
string globalA;

void foo(); // declared before first use

int main() {
    foo();
    foo();
}

void foo() {
    // static storage duration. in local scope
    static string localA;
    localA += "ab";
    cout << localA;
}

The program prints ababab, because localA is not destroyed upon exit of its block. You can say that objects that have local scope begin their lifetime when control reaches their definition. For localA, that happens when the function's body is entered. For objects in namespace scope, lifetime begins at program startup. The same is true for static objects of class scope:


class A {
    static string classScopeA;
};

string A::classScopeA;

A a, b;
// &a.classScopeA == &b.classScopeA == &A::classScopeA: all three name the same object

As you see, classScopeA is not bound to particular objects of its class, but to the class itself. The addresses of all three names above are the same, and all denote the same object. There are special rules about when and how static objects are initialized, but let's not worry about that now. That's what the term static initialization order fiasco refers to.




Dynamic storage duration


The last storage duration is dynamic. You use it if you want objects to live on another isle, and you want to put pointers around that reference them. You also use it if your objects are big, or if you want to create arrays whose size is only known at runtime. Because of this flexibility, objects having dynamic storage duration are complicated and slow to manage. Objects having dynamic duration begin their lifetime when an appropriate new operator invocation happens:


void foo(string *s); // declared before first use

int main() {
    // the object that s points to has dynamic storage 
    // duration
    string *s = new string;
    // pass a pointer pointing to the object around. 
    // the object itself isn't touched
    foo(s);
    delete s;
}

void foo(string *s) {
    cout << s->size();
}

Its lifetime ends only when you call delete on it. If you forget that, those objects never end their lifetime, and their destructors are never called. Objects having dynamic storage duration require manual handling of their lifetime and the associated memory resource. Libraries exist to ease their use. Explicit garbage collection for particular objects can be established by using a smart pointer:


void foo(shared_ptr<string> s); // declared before first use

int main() {
    shared_ptr<string> s(new string);
    foo(s);
}

void foo(shared_ptr<string> s) {
    cout << s->size();
}

You don't have to care about calling delete: the shared_ptr does it for you when the last pointer that references the object goes out of scope. The shared_ptr itself has automatic storage duration, so its lifetime is automatically managed, allowing it to check in its destructor whether it should delete the pointed-to dynamic object. For shared_ptr reference, see the boost documentation: http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm


Answered by peterchen

It's been covered in detail elsewhere, so here is just "the short answer":


  • static variable (class)
    lifetime = program runtime (1)
    visibility = determined by access modifiers (private/protected/public)

  • static variable (global scope)
    lifetime = program runtime (1)
    visibility = the compilation unit it is instantiated in (2)

  • heap variable
    lifetime = defined by you (new to delete)
    visibility = defined by you (whatever you assign the pointer to)

  • stack variable
    visibility = from declaration until scope is exited
    lifetime = from declaration until declaring scope is exited




(1) more exactly: from initialization until deinitialization of the compilation unit (i.e. the C/C++ file). The order of initialization of compilation units is not defined by the standard.


(2) Beware: if you instantiate a static variable in a header, each compilation unit gets its own copy.


Answered by Chris Smith

I'm sure one of the pedants will come up with a better answer shortly, but the main difference is speed and size.


Stack

Dramatically faster to allocate. It is done in O(1) since it is allocated when setting up the stack frame so it is essentially free. The drawback is that if you run out of stack space you are boned. You can adjust the stack size, but IIRC you have ~2MB to play with. Also, as soon as you exit the function everything on the stack is cleared. So it can be problematic to refer to it later. (Pointers to stack allocated objects leads to bugs.)


Heap

Dramatically slower to allocate. But you have GB to play with, and point to.


Garbage Collector


The garbage collector is some code that runs in the background and frees memory. When you allocate memory on the heap it is very easy to forget to free it, which is known as a memory leak. Over time, the memory your application consumes grows and grows until it crashes. Having a garbage collector periodically free the memory you no longer need helps eliminate this class of bugs. Of course this comes at a price, as the garbage collector slows things down.


Answered by ChrisW

What are the problems of static and stack?


The problem with "static" allocation is that the allocation is made at compile-time: you can't use it to allocate some variable number of data, the number of which isn't known until run-time.


The problem with allocating on the "stack" is that the allocation is destroyed as soon as the subroutine which does the allocation returns.


Could I write an entire application without allocating variables on the heap?


Perhaps but not a non-trivial, normal, big application (but so-called "embedded" programs might be written without the heap, using a subset of C++).


What does the garbage collector do?


It keeps watching your data ("mark and sweep") to detect when your application is no longer referencing it. This is convenient for the application, because the application doesn't need to deallocate the data ... but the garbage collector might be computationally expensive.


Garbage collectors aren't a usual feature of C++ programming.


What could you do by manipulating memory yourself that you couldn't do with a garbage collector?


Learn the C++ mechanisms for deterministic memory deallocation:


  • 'static': never deallocated
  • 'stack': as soon as the variable "goes out of scope"
  • 'heap': when the pointer is deleted (explicitly deleted by the application, or implicitly deleted within some-or-other subroutine)

Answered by kal

What if your program does not know upfront how much memory to allocate (hence you cannot use stack variables)? Take linked lists: a list can grow without knowing upfront what its size will be. So allocating on the heap makes sense for a linked list when you don't know how many elements will be inserted into it.


Answered by Rob Elsner

Stack memory allocation (function variables, local variables) can be problematic when your stack is too "deep" and you overflow the memory available to stack allocations. The heap is for objects that need to be accessed from multiple threads or throughout the program lifecycle. You can write an entire program without using the heap.


You can leak memory quite easily without a garbage collector, but you can also dictate when objects and memory are freed. I have run into issues with Java when it runs the GC while I have a real-time process, because the GC is an exclusive thread (nothing else can run). So if performance is critical and you can guarantee there are no leaked objects, not using a GC is very helpful. Otherwise it just makes you hate life when your application consumes memory and you have to track down the source of a leak.


Answered by frediano

An advantage of GC in some situations is an annoyance in others; reliance on GC encourages not thinking much about it. In theory, it waits until an 'idle' period, or until it absolutely must, at which point it steals bandwidth and causes response latency in your app.


But you don't have to 'not think about it.' Just as with everything else in multithreaded apps, when you can yield, you can yield. So, for example, in .NET it is possible to request a GC; by doing this, instead of a less frequent, longer-running GC, you can have more frequent, shorter-running GCs, and spread out the latency associated with this overhead.


But this defeats the primary attraction of GC which appears to be "encouraged to not have to think much about it because it is auto-mat-ic."


If you were first exposed to programming before GC became prevalent and were comfortable with malloc/free and new/delete, then you might even find GC a little annoying and/or be distrustful of it (as one might be distrustful of 'optimization', which has had a checkered history). Many apps tolerate random latency. But for apps that don't, where random latency is less acceptable, a common reaction is to eschew GC environments and move in the direction of purely unmanaged code (or, god forbid, that long-dying art, assembly language).


I had a summer student here a while back, an intern, smart kid, who was weaned on GC; he was so adamant about the superiority of GC that even when programming in unmanaged C/C++ he refused to follow the malloc/free and new/delete model because, quote, "you shouldn't have to do this in a modern programming language." And you know what? For tiny, short-running apps you can indeed get away with that, but not for long-running, performant apps.


Answered by raj

The stack is memory set aside by the compiler: when we compile the program, the compiler by default reserves some memory from the OS (we can change the settings in the IDE's compiler settings), and how much the OS gives you depends on the memory available on the system and many other things. Stack memory is used when we declare variables: copies of them (passed by value as formals) are pushed onto the stack, following a calling convention, by default CDECL in Visual Studio. Example with infix notation: c = a + b; the pushing is done right to left: b onto the stack, the operator, a onto the stack, and the result of those, i.e. c, onto the stack. In prefix notation (= + c a b), all the variables are pushed onto the stack first (right to left), and then the operation is performed. The memory reserved by the compiler is fixed. So let's assume 1MB of memory is allotted to our application; if the local variables use 700KB of memory (all local variables are pushed onto the stack unless they are dynamically allocated), the remaining 324KB is allotted to the heap. The stack also has a shorter lifetime: when the scope of a function ends, its stack frame is cleared.
