Model summary in PyTorch (Python)
Original URL: http://stackoverflow.com/questions/42480111/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
Model summary in pytorch
Asked by Wasi Ahmad
Is there any way I can print the summary of a model in PyTorch like the model.summary() method does in Keras, as follows?
Model Summary:
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 1, 15, 27)     0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D)  (None, 8, 15, 27)     872         input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 8, 7, 27)      0           convolution2d_1[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 1512)          0           maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1)             1513        flatten_1[0][0]
====================================================================================================
Total params: 2,385
Trainable params: 2,385
Non-trainable params: 0
Accepted answer by SpiderWasp42
While you will not get as detailed information about the model as in Keras' model.summary, simply printing the model will give you some idea about the different layers involved and their specifications.
For instance:
from torchvision import models
model = models.vgg16()
print(model)
The output in this case would be something as follows:
VGG (
  (features): Sequential (
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU (inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU (inplace)
    (4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU (inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU (inplace)
    (9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU (inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU (inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU (inplace)
    (16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU (inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU (inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU (inplace)
    (23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU (inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU (inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU (inplace)
    (30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
  )
  (classifier): Sequential (
    (0): Dropout (p = 0.5)
    (1): Linear (25088 -> 4096)
    (2): ReLU (inplace)
    (3): Dropout (p = 0.5)
    (4): Linear (4096 -> 4096)
    (5): ReLU (inplace)
    (6): Linear (4096 -> 1000)
  )
)
Now you could, as mentioned by Kashyap, use the state_dict method to get the weights of the different layers. But using this listing of the layers would perhaps provide more direction in creating a helper function to get that Keras-like model summary! Hope this helps!
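For instance, a minimal sketch of such a helper, iterating over state_dict to print each tensor's name, shape, and element count (this helper and its name are my illustration, not part of the original answer):

def keras_like_summary(model):
    """Print name, shape, and element count for every tensor in state_dict."""
    total = 0
    for name, tensor in model.state_dict().items():
        count = tensor.numel()
        total += count
        print('{:<45} {:<25} {:>12,}'.format(name, str(tuple(tensor.shape)), count))
    print('Total params: {:,}'.format(total))

keras_like_summary(model)  # e.g. the vgg16 instance from above

Note that state_dict also includes non-trainable buffers (e.g. BatchNorm running statistics), so for models that have them the total will differ slightly from a parameters-only count.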
Answer by Shubham Chandel
Yes, you can get an exact Keras-style representation using the pytorch-summary package.
Example for VGG16
from torchvision import models
from torchsummary import summary
vgg = models.vgg16()
summary(vgg, (3, 224, 224))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,792
              ReLU-2         [-1, 64, 224, 224]               0
            Conv2d-3         [-1, 64, 224, 224]          36,928
              ReLU-4         [-1, 64, 224, 224]               0
         MaxPool2d-5         [-1, 64, 112, 112]               0
            Conv2d-6        [-1, 128, 112, 112]          73,856
              ReLU-7        [-1, 128, 112, 112]               0
            Conv2d-8        [-1, 128, 112, 112]         147,584
              ReLU-9        [-1, 128, 112, 112]               0
        MaxPool2d-10          [-1, 128, 56, 56]               0
           Conv2d-11          [-1, 256, 56, 56]         295,168
             ReLU-12          [-1, 256, 56, 56]               0
           Conv2d-13          [-1, 256, 56, 56]         590,080
             ReLU-14          [-1, 256, 56, 56]               0
           Conv2d-15          [-1, 256, 56, 56]         590,080
             ReLU-16          [-1, 256, 56, 56]               0
        MaxPool2d-17          [-1, 256, 28, 28]               0
           Conv2d-18          [-1, 512, 28, 28]       1,180,160
             ReLU-19          [-1, 512, 28, 28]               0
           Conv2d-20          [-1, 512, 28, 28]       2,359,808
             ReLU-21          [-1, 512, 28, 28]               0
           Conv2d-22          [-1, 512, 28, 28]       2,359,808
             ReLU-23          [-1, 512, 28, 28]               0
        MaxPool2d-24          [-1, 512, 14, 14]               0
           Conv2d-25          [-1, 512, 14, 14]       2,359,808
             ReLU-26          [-1, 512, 14, 14]               0
           Conv2d-27          [-1, 512, 14, 14]       2,359,808
             ReLU-28          [-1, 512, 14, 14]               0
           Conv2d-29          [-1, 512, 14, 14]       2,359,808
             ReLU-30          [-1, 512, 14, 14]               0
        MaxPool2d-31            [-1, 512, 7, 7]               0
           Linear-32                 [-1, 4096]     102,764,544
             ReLU-33                 [-1, 4096]               0
          Dropout-34                 [-1, 4096]               0
           Linear-35                 [-1, 4096]      16,781,312
             ReLU-36                 [-1, 4096]               0
          Dropout-37                 [-1, 4096]               0
           Linear-38                 [-1, 1000]       4,097,000
================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.59
Params size (MB): 527.79
Estimated Total Size (MB): 746.96
----------------------------------------------------------------
Answer by wassname
This will show a model's weights and parameters (but not output shape).
from torch.nn.modules.module import _addindent
import torch
import numpy as np

def torch_summarize(model, show_weights=True, show_parameters=True):
    """Summarizes torch model by showing trainable parameters and weights."""
    tmpstr = model.__class__.__name__ + ' (\n'
    for key, module in model._modules.items():
        # if it contains layers, call it recursively to get params and weights
        if type(module) in [
            torch.nn.modules.container.Container,
            torch.nn.modules.container.Sequential
        ]:
            modstr = torch_summarize(module)
        else:
            modstr = module.__repr__()
        modstr = _addindent(modstr, 2)

        params = sum([np.prod(p.size()) for p in module.parameters()])
        weights = tuple([tuple(p.size()) for p in module.parameters()])

        tmpstr += '  (' + key + '): ' + modstr
        if show_weights:
            tmpstr += ', weights={}'.format(weights)
        if show_parameters:
            tmpstr += ', parameters={}'.format(params)
        tmpstr += '\n'

    tmpstr = tmpstr + ')'
    return tmpstr

# Test
import torchvision.models as models
model = models.alexnet()
print(torch_summarize(model))
# # Output
# AlexNet (
#   (features): Sequential (
#     (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2)), weights=((64, 3, 11, 11), (64,)), parameters=23296
#     (1): ReLU (inplace), weights=(), parameters=0
#     (2): MaxPool2d (size=(3, 3), stride=(2, 2), dilation=(1, 1)), weights=(), parameters=0
#     (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)), weights=((192, 64, 5, 5), (192,)), parameters=307392
#     (4): ReLU (inplace), weights=(), parameters=0
#     (5): MaxPool2d (size=(3, 3), stride=(2, 2), dilation=(1, 1)), weights=(), parameters=0
#     (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), weights=((384, 192, 3, 3), (384,)), parameters=663936
#     (7): ReLU (inplace), weights=(), parameters=0
#     (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), weights=((256, 384, 3, 3), (256,)), parameters=884992
#     (9): ReLU (inplace), weights=(), parameters=0
#     (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), weights=((256, 256, 3, 3), (256,)), parameters=590080
#     (11): ReLU (inplace), weights=(), parameters=0
#     (12): MaxPool2d (size=(3, 3), stride=(2, 2), dilation=(1, 1)), weights=(), parameters=0
#   ), weights=((64, 3, 11, 11), (64,), (192, 64, 5, 5), (192,), (384, 192, 3, 3), (384,), (256, 384, 3, 3), (256,), (256, 256, 3, 3), (256,)), parameters=2469696
#   (classifier): Sequential (
#     (0): Dropout (p = 0.5), weights=(), parameters=0
#     (1): Linear (9216 -> 4096), weights=((4096, 9216), (4096,)), parameters=37752832
#     (2): ReLU (inplace), weights=(), parameters=0
#     (3): Dropout (p = 0.5), weights=(), parameters=0
#     (4): Linear (4096 -> 4096), weights=((4096, 4096), (4096,)), parameters=16781312
#     (5): ReLU (inplace), weights=(), parameters=0
#     (6): Linear (4096 -> 1000), weights=((1000, 4096), (1000,)), parameters=4097000
#   ), weights=((4096, 9216), (4096,), (4096, 4096), (4096,), (1000, 4096), (1000,)), parameters=58631144
# )
Edit: isaykatsman has a PyTorch PR to add a model.summary() that is exactly like Keras: https://github.com/pytorch/pytorch/pull/3043/files
Answer by prosti
To use torchsummary, type:
from torchsummary import summary
Install it first if you don't have it.
pip install torchsummary
Then you can try it, but note that for some reason it does not work unless I set the model to CUDA with alexnet.cuda():
from torchsummary import summary
help(summary)
import torchvision.models as models
alexnet = models.alexnet(pretrained=False)
alexnet.cuda()
summary(alexnet, (3, 224, 224))
print(alexnet)
The summary must take the input size, and the batch size defaults to -1, meaning any batch size we provide.
If we set summary(alexnet, (3, 224, 224), 32), this means use bs=32.
summary(model, input_size, batch_size=-1, device='cuda')
Out:
Help on function summary in module torchsummary.torchsummary:
summary(model, input_size, batch_size=-1, device='cuda')
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [32, 64, 55, 55]          23,296
              ReLU-2           [32, 64, 55, 55]               0
         MaxPool2d-3           [32, 64, 27, 27]               0
            Conv2d-4          [32, 192, 27, 27]         307,392
              ReLU-5          [32, 192, 27, 27]               0
         MaxPool2d-6          [32, 192, 13, 13]               0
            Conv2d-7          [32, 384, 13, 13]         663,936
              ReLU-8          [32, 384, 13, 13]               0
            Conv2d-9          [32, 256, 13, 13]         884,992
             ReLU-10          [32, 256, 13, 13]               0
           Conv2d-11          [32, 256, 13, 13]         590,080
             ReLU-12          [32, 256, 13, 13]               0
        MaxPool2d-13            [32, 256, 6, 6]               0
AdaptiveAvgPool2d-14            [32, 256, 6, 6]               0
          Dropout-15                 [32, 9216]               0
           Linear-16                 [32, 4096]      37,752,832
             ReLU-17                 [32, 4096]               0
          Dropout-18                 [32, 4096]               0
           Linear-19                 [32, 4096]      16,781,312
             ReLU-20                 [32, 4096]               0
           Linear-21                 [32, 1000]       4,097,000
================================================================
Total params: 61,100,840
Trainable params: 61,100,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 18.38
Forward/backward pass size (MB): 268.12
Params size (MB): 233.08
Estimated Total Size (MB): 519.58
----------------------------------------------------------------
AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace)
    (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU(inplace)
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
    (2): ReLU(inplace)
    (3): Dropout(p=0.5)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU(inplace)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
Answer by Punnerud
Simplest to remember (not as pretty as Keras):
print(model)
This also works:
repr(model)
If you just want the number of parameters:
sum([param.nelement() for param in model.parameters()])
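If you want only the trainable parameters, a small variant (my addition, not from the linked forum post):

sum(p.numel() for p in model.parameters() if p.requires_grad)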
From: Is there similar pytorch function as model.summary() as keras? (forum.PyTorch.org)
Answer by zoro
AFAIK there is no model.summary()-like equivalent in PyTorch.
Meanwhile, you can refer to the script by szagoruyko, which gives a nice visualization, as in the resnet18 example.
Cheers
Answer by Jibin Mathew
Simply print the model after defining an object of the model class:
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, embedding_dim)
        self.rnn = nn.RNN(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        ...

# example dimensions, chosen only for illustration
input_dim, embedding_dim, hidden_dim, output_dim = 1000, 100, 256, 2
model = RNN(input_dim, embedding_dim, hidden_dim, output_dim)
print(model)
Answer by Ashiq Imran
You can use
from torchsummary import summary
You can specify the device:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
You can create a network, and if you are using the MNIST dataset, then the following commands will work and show you a summary:
model = Network().to(device)
summary(model,(1,28,28))
Answer by trakis95
You can just use x.shape to measure the dimensions of a tensor x.
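For example, a minimal illustration with a random input tensor:

import torch

x = torch.randn(32, 3, 224, 224)  # a batch of 32 RGB 224x224 images
print(x.shape)                    # torch.Size([32, 3, 224, 224])

Combined with forward hooks or manual layer calls, this lets you check the shape flowing out of each layer.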
Answer by dnivog
For visualization and summary of PyTorch models, tensorboardX can also be utilized.
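A minimal sketch, assuming tensorboardX is installed (pip install tensorboardX); the dummy input shape and log directory are my choices for illustration:

import torch
from torchvision import models
from tensorboardX import SummaryWriter

model = models.vgg16()
dummy_input = torch.randn(1, 3, 224, 224)  # one image of the size VGG16 expects

writer = SummaryWriter(log_dir='runs/vgg16')
writer.add_graph(model, dummy_input)  # traces the model and logs its graph
writer.close()

# Then inspect the graph in a browser with:
#   tensorboard --logdir runs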