Python tf.nn.conv2d vs tf.layers.conv2d
Notice: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, link back to the original question, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/42785026/
tf.nn.conv2d vs tf.layers.conv2d
Asked by jul
Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so.
Accepted answer by GBY
For convolution, they are the same. More precisely, tf.layers.conv2d (actually _Conv) uses tf.nn.convolution as the backend. You can follow the calling chain: tf.layers.conv2d > Conv2D > Conv2D.apply() > _Conv > _Conv.apply() > _Layer.apply() > _Layer.__call__() > _Conv.call() > nn.convolution() > ...
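As a quick sanity check of that claim, here is a minimal sketch (assuming the TF 1.x graph-mode APIs discussed in this thread; the kernel values and shapes are purely illustrative) that feeds the same weights to both APIs and compares the outputs:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 8, 8, 3])
w = np.random.randn(5, 5, 3, 16).astype(np.float32)  # [height, width, in_channels, out_channels]

# tf.nn.conv2d takes the kernel tensor directly.
y_nn = tf.nn.conv2d(x, tf.constant(w), strides=[1, 1, 1, 1], padding='SAME')

# tf.layers.conv2d creates the kernel variable itself; initialize it with
# the same values so the two outputs can be compared.
y_layers = tf.layers.conv2d(x, filters=16, kernel_size=5, padding='same',
                            use_bias=False,
                            kernel_initializer=tf.constant_initializer(w))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    img = np.random.randn(1, 8, 8, 3).astype(np.float32)
    a, b = sess.run([y_nn, y_layers], feed_dict={x: img})
    print(np.allclose(a, b, atol=1e-5))  # expected: True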
Answered by Mircea
As GBY mentioned, they use the same implementation.
There is a slight difference in the parameters.
For tf.nn.conv2d:
filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
For tf.layers.conv2d:
filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
I would use tf.nn.conv2d when loading a pretrained model (example code: https://github.com/ry/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.
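To make the pretrained case concrete, here is a hypothetical sketch of why tf.nn.conv2d is convenient there (the file name vgg16_weights.npz and the key conv1_1_W are made-up placeholders, not the actual layout of the linked repo):

import numpy as np
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
weights = np.load('vgg16_weights.npz')       # hypothetical file of saved numpy kernels
kernel = tf.constant(weights['conv1_1_W'])   # hypothetical key, shape [3, 3, 3, 64]
conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')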
Answered by EXP0
As others mentioned, the parameters are different, especially the "filter(s)". tf.nn.conv2d takes a tensor as the filter, which means you can specify weight decay (or other properties) on it yourself, as in the following snippet from the cifar10 example code. (Whether you want/need weight decay in a conv layer is another question.)
kernel = _variable_with_weight_decay('weights',
                                     shape=[5, 5, 3, 64],
                                     stddev=5e-2,
                                     wd=0.0)
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
I'm not quite sure how to set weight decay in tf.layers.conv2d since it only takes an integer for filters. Maybe using kernel_constraint?
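For what it's worth, the usual way to get weight decay with tf.layers.conv2d in TF 1.x is the kernel_regularizer argument rather than kernel_constraint; a minimal sketch (the 0.004 scale and the input shape are just illustrative values):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 24, 24, 3])
conv = tf.layers.conv2d(
    x, filters=64, kernel_size=5, padding='same',
    kernel_regularizer=tf.contrib.layers.l2_regularizer(0.004))

# The L2 term is collected here and can be added to the training loss.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
weight_decay_loss = tf.add_n(reg_losses)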
On the other hand, tf.layers.conv2d handles the activation and bias automatically, while you have to write additional code for these if you use tf.nn.conv2d.
Answered by hotvector
All of these other replies talk about how the parameters are different, but actually the main difference between tf.nn.conv2d and tf.layers.conv2d is that for tf.nn you need to create your own filter tensor and pass it in. This filter needs to have the shape [kernel_height, kernel_width, in_channels, num_filters].
Essentially, tf.nn is lower level than tf.layers. Unfortunately, this answer is no longer applicable, as tf.layers is obsolete (deprecated in favor of tf.keras.layers).
Answered by Nikhil Banka
DIFFERENCES IN PARAMETERS:
Using tf.layers.* in code:
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
Using tf.nn.* in code (notice that we additionally need to pass weights and biases as parameters):
strides = 1
# Weights matrix looks like: [kernel_size(=5), kernel_size(=5), input_channels(=3), filters(=32)]
weights = tf.get_variable('weights', shape=[5, 5, 3, 32])
# Similarly, bias looks like [filters(=32)]
bias = tf.get_variable('bias', shape=[32])
out = tf.nn.conv2d(input, weights, padding="SAME", strides=[1, strides, strides, 1])
out = tf.nn.bias_add(out, bias)
out = tf.nn.relu(out)
Answered by rmeertens
Take a look here: tensorflow > tf.layers.conv2d
and here: tensorflow > conv2d
As you can see, the arguments to the layers version are:
tf.layers.conv2d(inputs, filters, kernel_size, strides=(1, 1), padding='valid', data_format='channels_last', dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer=None, bias_initializer=tf.zeros_initializer(), kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, trainable=True, name=None, reuse=None)
and the nn version:
tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)
I think you can choose the one with the options you want/need/like!