Python: closing a session in TensorFlow doesn't reset the graph

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/42706761/

Date: 2020-08-19 22:03:04  Source: igfitidea

closing session in tensorflow doesn't reset graph

Tags: python, tensorflow

Asked by titus

The number of nodes in the current graph keeps increasing at every iteration. This seems unintuitive, since the session is closed and all of its resources should be freed. Why do the previous nodes linger even after a new session is created? Here is my code:


for i in range(3):
    var = tf.Variable(0)
    sess = tf.Session(config=tf.ConfigProto())
    with sess.as_default():
        tf.global_variables_initializer().run()
        print(len(sess.graph._nodes_by_name.keys()))
    sess.close() 

It outputs:


5
10
15
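For reference, the count grows by five per iteration because each `tf.Variable(0)` adds a handful of ops (roughly: the initial-value constant, the variable op, its assign and read ops, plus the `init` op from `global_variables_initializer()`) to the same default graph. A minimal pure-Python sketch of the mechanism, using a hypothetical module-level registry rather than TensorFlow's real internals:

```python
# Hypothetical sketch of TF1's default-graph behavior: the graph is a
# module-level global that outlives every session.

_default_graph = []  # node registry shared across iterations

class FakeSession:
    """Stands in for tf.Session: it owns runtime resources, not the graph."""
    def close(self):
        # Releases runtime resources only; _default_graph is untouched.
        pass

def fake_variable():
    # Each variable contributes several nodes, much as tf.Variable does.
    _default_graph.extend(["initial_value", "variable", "assign", "read", "init"])

counts = []
for _ in range(3):
    fake_variable()
    sess = FakeSession()
    counts.append(len(_default_graph))
    sess.close()  # closing the session never shrinks the graph

print(counts)  # [5, 10, 15]
```

The session and the graph are separate objects, which is exactly why closing one leaves the other growing.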

Answered by Mad Wombat

By design, closing a session does not reset the graph. If you want to reset the graph, you can either call tf.reset_default_graph() like this


for _ in range(3):
    tf.reset_default_graph()
    var = tf.Variable(0)
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        print(len(session.graph._nodes_by_name.keys()))

or you can do something like this


for _ in range(3):
    with tf.Graph().as_default() as graph:
        var = tf.Variable(0)
        with tf.Session() as session:
            session.run(tf.global_variables_initializer())
            print(len(graph._nodes_by_name.keys()))

Answered by M. Mortazavi

I ran into session-closure issues when running a TensorFlow program from within Spyder. The RNN cells seem to persist, and trying to create new ones with the same names causes problems. This is probably because, when running from Spyder, the C-based TensorFlow session does not close properly even after the program has completed its run; Spyder has to be restarted to get a new session. Setting reuse=True on the cells works around this when running from within Spyder. However, that does not seem like a valid mode for iterative development when training an RNN cell: unexpected results or behaviors can occur without the observer knowing what is going on.


Answered by pinxue

Let's first clarify what happens in tf.Session().


It means you submit the default graph def to the TensorFlow runtime, and the runtime then allocates GPU/CPU/remote memory accordingly.


So when you close the session, the runtime just releases all the allocated resources, but leaves your graph untouched!

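Under that mental model, tf.reset_default_graph() simply swaps in a fresh global graph before any new nodes are added. A toy sketch of the fix (hypothetical names, not TensorFlow's real internals):

```python
# Hypothetical analogue of tf.reset_default_graph(): the "default graph"
# is a module-level registry, and resetting just replaces its contents.

_default_graph = []

def reset_default_graph():
    # Discard all previously registered nodes.
    _default_graph.clear()

def add_variable():
    # Each variable contributes several nodes to the current default graph.
    _default_graph.extend(["initial_value", "variable", "assign", "read", "init"])

counts = []
for _ in range(3):
    reset_default_graph()  # without this line, counts would be [5, 10, 15]
    add_variable()
    counts.append(len(_default_graph))

print(counts)  # [5, 5, 5]
```

This mirrors the first answer's pattern: resetting (or creating) the graph per iteration is what keeps the node count flat, not closing the session.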