Receptive Field Calculation

2022-10-22

Recursive form:

r_{l-1} = s_l * r_l + k_l - s_l
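
As a quick sketch (not from the ref; rf_recursive is a name introduced here), the recursion can be written in Python, walking from the last feature map back to f_0 with r_L = 1:

def rf_recursive(ks, ss):
    # ks[l-1], ss[l-1] are k_l and s_l of layer l (layer l maps f_{l-1} to f_l)
    r = 1  # r_L = 1: each pixel of the last feature map corresponds to itself
    for k, s in zip(reversed(ks), reversed(ss)):
        r = s * r + k - s  # r_{l-1} = s_l * r_l + k_l - s_l
    return r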

Iterative form:

r_0 = \sum_{l=1}^{L} ((k_l - 1) \prod_{i=1}^{l-1} s_i) + 1
Note: the site in the ref gives the formula above, but I think i should start from 0: when l = 1, the product's lower bound is 1 while its upper bound is 0, which is illogical. If i starts from 0, both bounds are 0, and by defining s_0 = 1 the calculation works out:

r_0 = \sum_{l=1}^{L} ((k_l - 1) \prod_{i=0}^{l-1} s_i) + 1

Everything below uses the i-from-0 convention.
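
A matching sketch of the iterative formula under this convention (rf_iterative is likewise a name introduced here); it agrees with rf_recursive above:

def rf_iterative(ks, ss):
    # r_0 = sum_{l=1}^{L} (k_l - 1) * prod_{i=0}^{l-1} s_i + 1, with s_0 = 1
    r, stride_prod = 1, 1  # stride_prod starts at s_0 = 1
    for k, s in zip(ks, ss):
        r += (k - 1) * stride_prod
        stride_prod *= s  # now includes this layer's stride
    return r

# e.g. a k=7,s=2 conv followed by a k=3,s=2 pool:
# rf_recursive([7, 3], [2, 2]) == rf_iterative([7, 3], [2, 2]) == 11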

For dilated (atrous) convolution

Only k_l needs to change: replace it with \alpha * (k_l - 1) + 1, where \alpha is the dilation rate.

Where \alpha * (k_l - 1) + 1 comes from: it is the size of the original kernel k_l after inserting holes, k_l + (k_l - 1) * (\alpha - 1), which simplifies to \alpha * (k_l - 1) + 1.
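
For instance, a one-line helper (a hypothetical effective_kernel, introduced here) reproduces the effective kernel sizes of the 3x3 ASPP branches used later in this post:

def effective_kernel(k, alpha):
    # size of kernel k after inserting holes: k + (k-1)*(alpha-1) = alpha*(k-1)+1
    return alpha * (k - 1) + 1

print(effective_kernel(3, 6), effective_kernel(3, 12), effective_kernel(3, 18))  # 13 25 37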

Explanation of the symbols (for a three-map example f_0 -> f_1 -> f_2):

r_0: the receptive field size on f_0 of each pixel of f_2.
r_1: the receptive field size on f_1 of each pixel of f_2.
r_2: the receptive field size on f_2 of each pixel of f_2.
The remaining symbols are self-explanatory.

How the formula is derived:

For the recursive version, start from r_2. Each pixel of f_2 corresponds to itself, so the receptive field of each f_2 pixel on f_2 is 1, i.e. r_2 = 1. Each pixel of f_2 corresponds to two pixels of f_1, so r_1 = 2. Each pixel of f_1 corresponds to 5 pixels of f_0, while f_1 itself covers 2 pixels; since s_1 = 3, each pixel of f_2 ends up corresponding to 8 pixels of f_0 (r_0 = 3 * 2 + 5 - 3 = 8). In general, the r of an earlier layer equals the r of the later layer times the stride s between the two layers, plus the difference between the k and s between the two layers (assuming k > s).
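
As a sanity check, the rf_recursive sketch from above reproduces this example; k_2 = 2 follows from r_1 = 2, and s_2 is set to 1 arbitrarily since it does not affect the result when r_2 = 1:

print(rf_recursive([5, 2], [3, 1]))  # 8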

The iterative version is obtained by unrolling the recursive one; see the ref for the original figure and derivation.

About dimensionality

The examples here are one-dimensional, but the same applies in two dimensions; just raise the result to the power of the dimension at the end. For example, if the 1-D calculation gives 3, the 2-D receptive field is 3*3.

Receptive field calculation for Resnet_Atrous50

aspp.py

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    '''
    Atrous Spatial Pyramid Pooling: samples the input in parallel with atrous
    convolutions at different rates, which is equivalent to capturing image
    context at multiple scales.
    '''
    def __init__(self, in_chans, out_chans, rate=1):
        super(ASPP, self).__init__()
        # prebuild atrous convolutions at different sampling rates (via dilation)
        # 1x1 convolution -- no dilation
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_chans, out_chans, 1, 1, padding=0, dilation=rate, bias=True),
            nn.BatchNorm2d(out_chans),
            nn.ReLU(inplace=True)
        )
        # 3x3 convolution -- dilation 6
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_chans, out_chans, 3, 1, padding=6*rate, dilation=6*rate, bias=True),
            nn.BatchNorm2d(out_chans),
            nn.ReLU(inplace=True)
        )
        # 3x3 convolution -- dilation 12
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_chans, out_chans, 3, 1, padding=12*rate, dilation=12*rate, bias=True),
            nn.BatchNorm2d(out_chans),
            nn.ReLU(inplace=True)
        )
        # 3x3 convolution -- dilation 18
        self.branch4 = nn.Sequential(
            nn.Conv2d(in_chans, out_chans, 3, 1, padding=18*rate, dilation=18*rate, bias=True),
            nn.BatchNorm2d(out_chans),
            nn.ReLU(inplace=True)
        )
        # global average pooling -- image-level features
        self.branch5_avg = nn.AdaptiveAvgPool2d(1)
        # 1x1 conv + bn + relu -- processes the pooled feature map
        self.branch5_conv = nn.Conv2d(in_chans, out_chans, 1, 1, 0, bias=True)
        self.branch5_bn = nn.BatchNorm2d(out_chans)
        self.branch5_relu = nn.ReLU(inplace=True)
        # 1x1 conv + bn + relu -- processes the concatenated feature map
        self.conv_cat = nn.Sequential(
            nn.Conv2d(out_chans*5, out_chans, 1, 1, padding=0, bias=True),
            nn.BatchNorm2d(out_chans),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        # record the input size so we know how far to upsample later
        b, c, h, w = x.size()
        # one 1x1 convolution
        conv1x1 = self.branch1(x)
        # three 3x3 atrous convolutions
        conv3x3_1 = self.branch2(x)
        conv3x3_2 = self.branch3(x)
        conv3x3_3 = self.branch4(x)
        # one global average pooling
        global_feature = self.branch5_avg(x)
        # process the pooled feature map
        global_feature = self.branch5_relu(self.branch5_bn(self.branch5_conv(global_feature)))
        # upsample the pooled+convolved feature map back to the input size of x
        global_feature = F.interpolate(global_feature, (h, w), None, 'bilinear', True)
        # concatenate all feature maps (1x1, three 3x3, pooling+1x1) along the channel dim
        feature_cat = torch.cat([conv1x1, conv3x3_1, conv3x3_2, conv3x3_3, global_feature], dim=1)
        # a final 1x1 convolution shrinks the 5x-wide channel dimension back down
        result = self.conv_cat(feature_cat)
        return result
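
A minimal smoke test of the module above (the channel counts and spatial size are illustrative; 2048 matches the ResNet-50 backbone output used later):

if __name__ == '__main__':
    aspp = ASPP(in_chans=2048, out_chans=256)
    x = torch.randn(2, 2048, 8, 8)
    print(aspp(x).shape)  # torch.Size([2, 256, 8, 8]): same spatial size, 256 channels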


deeplabv3plus.py

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo

bn_mom = 0.0003

# URLs of the pretrained models
model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth'
}

# "same" atrous convolution
# for a k=3 convolution, setting padding=1*atrous keeps the input and output
# feature maps the same size after dilation: with k=3 and s=1, padding equal
# to the dilation rate preserves the spatial size
def conv3x3(in_planes, out_planes, stride=1, atrous=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1 * atrous, dilation=atrous, bias=False)

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_chans, out_chans, stride=1, atrous=1, downsample=None):
        super(BasicBlock, self).__init__()
        # use the custom "same" atrous convolution
        self.conv1 = conv3x3(in_chans, out_chans, stride, atrous)
        self.bn1 = nn.BatchNorm2d(out_chans)
        self.relu = nn.ReLU(inplace=True)
        
        self.conv2 = conv3x3(out_chans, out_chans)
        self.bn2 = nn.BatchNorm2d(out_chans)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out

class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_chans, out_chans, stride=1, atrous=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_chans, out_chans, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_chans)
        # "same" atrous convolution
        self.conv2 = nn.Conv2d(out_chans, out_chans, kernel_size=3, stride=stride,
                               padding=1 * atrous, dilation=atrous, bias=False)
        self.bn2 = nn.BatchNorm2d(out_chans)
        self.conv3 = nn.Conv2d(out_chans, out_chans * self.expansion, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_chans * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class ResNet_Atrous(nn.Module):
    # with layers=[3,4,6,3] and block=Bottleneck, this builds ResNet-50
    def __init__(self, block, layers, atrous=None, os=16):
        super(ResNet_Atrous, self).__init__()
        self.block = block
        stride_list = None
        if os == 8:
            stride_list = [2, 1, 1]
        elif os == 16:
            stride_list = [2, 2, 1]
        else:
            raise ValueError('resnet_atrous.py: output stride=%d is not supported.' % os)

        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)

        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # resnet block1
        self.layer1 = self._make_layer(block, 64, 64, layers[0])  # layers[0]=3: three bottleneck groups
        # resnet block2
        self.layer2 = self._make_layer(block, 64 * block.expansion, 128, layers[1], stride=stride_list[0])  # layers[1]=4: four bottleneck groups
        # resnet block3
        self.layer3 = self._make_layer(block, 128 * block.expansion, 256, layers[2], stride=stride_list[1], atrous=16 // os)  # layers[2]=6; stride=2, atrous=1 for os=16
        # resnet block4: the stride comes from the output stride (os); each dilation
        # rate comes from the unit dilation rate (item) and os, e.g. for os=16:
        # atrous=[item*16//16 for item in [1, 2, 1]]=[1, 2, 1], stride=1
        self.layer4 = self._make_layer(block, 256 * block.expansion, 512, layers[3], stride=stride_list[2],
                                       atrous=[item * 16 // os for item in atrous])
        # blocks 5 and 6 repeat block4: stride 1, atrous=[1, 2, 1]
        self.layer5 = self._make_layer(block, 512 * block.expansion, 512, layers[3], stride=1, atrous=[item * 16 // os for item in atrous])
        self.layer6 = self._make_layer(block, 512 * block.expansion, 512, layers[3], stride=1, atrous=[item * 16 // os for item in atrous])
        self.layers = []

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, in_chans, out_chans, blocks, stride=1, atrous=None):
        downsample = None
        if atrous is None:
            atrous = [1] * blocks
        elif isinstance(atrous, int):
            atrous = [atrous] * blocks
      
        if stride != 1 or in_chans != out_chans * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(in_chans, out_chans * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_chans * block.expansion),
            )

        layers = []
        layers.append(block(in_chans, out_chans, stride=stride, atrous=atrous[0], downsample=downsample))
        in_chans = out_chans * block.expansion
        for i in range(1, blocks):
            layers.append(block(in_chans, out_chans, stride=1, atrous=atrous[i]))

        return nn.Sequential(*layers)

    def forward(self, x):
        layers_list = []
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        # x is now downsampled 4x
        layers_list.append(x)
        x = self.layer2(x)
        # x is now downsampled 8x
        layers_list.append(x)
        x = self.layer3(x)
        # x is now downsampled 8x or 16x, depending on the stride_list chosen in
        # __init__: stride_list=[2,1,1] gives 8x, stride_list=[2,2,1] gives 16x
        layers_list.append(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.layer6(x)
        # x is still downsampled 8x or 16x, determined by the same stride_list
        layers_list.append(x)
        # return 4 feature maps: the outputs of layer1, layer2, layer3, and layer6
        return layers_list

def resnet34_atrous(pretrained=True, os=16, **kwargs):
    """Constructs a atrous ResNet-34 model."""
    model = ResNet_Atrous(BasicBlock, [3, 4, 6, 3], atrous=[1, 2, 1], os=os, **kwargs)
    if pretrained:
        old_dict = model_zoo.load_url(model_urls['resnet34'])
        model_dict = model.state_dict()
        old_dict = {k: v for k, v in old_dict.items() if (k in model_dict)}
        model_dict.update(old_dict)
        model.load_state_dict(model_dict)
    return model


def resnet50_atrous(pretrained=True, os=16, **kwargs):
    """Constructs a atrous ResNet-50 model."""
    model = ResNet_Atrous(Bottleneck, [3, 4, 6, 3], atrous=[1, 2, 1], os=os, **kwargs)
    if pretrained:
        old_dict = model_zoo.load_url(model_urls['resnet50'])
        model_dict = model.state_dict()
        old_dict = {k: v for k, v in old_dict.items() if (k in model_dict)}
        model_dict.update(old_dict)
        model.load_state_dict(model_dict)
    return model


def resnet101_atrous(pretrained=True, os=16, **kwargs):
    """Constructs a atrous ResNet-101 model."""
    model = ResNet_Atrous(Bottleneck, [3, 4, 23, 3], atrous=[1, 2, 1], os=os, **kwargs)
    if pretrained:
        old_dict = model_zoo.load_url(model_urls['resnet101'])
        model_dict = model.state_dict()
        old_dict = {k: v for k, v in old_dict.items() if (k in model_dict)}
        model_dict.update(old_dict)
        model.load_state_dict(model_dict)
    return model


from aspp import ASPP

class Config(object):

    OUTPUT_STRIDE = 16
    # number of channels output by the ASPP module
    ASPP_OUTDIM = 256
    # channel count of the decoder's 1x1 shortcut convolution
    SHORTCUT_DIM = 48
    # kernel size of the decoder's shortcut convolution
    SHORTCUT_KERNEL = 1
    # number of classes each pixel is classified into
    NUM_CLASSES = 21

class DeeplabV3Plus(nn.Module):
    def __init__(self, cfg, backbone=resnet50_atrous):
        super(DeeplabV3Plus, self).__init__()
        self.backbone = backbone(pretrained=False, os=cfg.OUTPUT_STRIDE)
        input_channel = 512 * self.backbone.block.expansion
        self.aspp = ASPP(in_chans=input_channel, out_chans=cfg.ASPP_OUTDIM) # rate=16//cfg.OUTPUT_STRIDE
        self.dropout1 = nn.Dropout(0.5)
        self.upsample4 = nn.UpsamplingBilinear2d(scale_factor=4)
        self.upsample_sub = nn.UpsamplingBilinear2d(scale_factor=cfg.OUTPUT_STRIDE//4)

        indim = 64 * self.backbone.block.expansion
        self.shortcut_conv = nn.Sequential(
                nn.Conv2d(indim, cfg.SHORTCUT_DIM, cfg.SHORTCUT_KERNEL, 1, padding=cfg.SHORTCUT_KERNEL//2,bias=False),
                nn.BatchNorm2d(cfg.SHORTCUT_DIM),
                nn.ReLU(inplace=True),
        )
        self.cat_conv = nn.Sequential(
                nn.Conv2d(cfg.ASPP_OUTDIM+cfg.SHORTCUT_DIM, cfg.ASPP_OUTDIM, 3, 1, padding=1,bias=False),
                nn.BatchNorm2d(cfg.ASPP_OUTDIM),
                nn.ReLU(inplace=True),
                nn.Dropout(0.5),
                nn.Conv2d(cfg.ASPP_OUTDIM, cfg.ASPP_OUTDIM, 3, 1, padding=1, bias=False),
                nn.BatchNorm2d(cfg.ASPP_OUTDIM),
                nn.ReLU(inplace=True),
                nn.Dropout(0.1),
        )
        self.cls_conv = nn.Conv2d(cfg.ASPP_OUTDIM, cfg.NUM_CLASSES, 1, 1, padding=0)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        # the backbone returns four feature maps (from layer1, layer2, layer3, layer6)
        layers = self.backbone(x)
        # layers[-1], the output of layer6, is downsampled 16x relative to the input;
        # feed it into the ASPP module
        feature_aspp = self.aspp(layers[-1])
        feature_aspp = self.dropout1(feature_aspp)
        # bilinear upsampling by 4x
        feature_aspp = self.upsample_sub(feature_aspp)

        # layers[0], the output of layer1, is downsampled 4x relative to the input;
        # pass it through the 1x1, 48-channel shortcut convolution
        feature_shallow = self.shortcut_conv(layers[0])
        # the upsampled ASPP features (now also 4x downsampled) are concatenated
        # and fused with feature_shallow
        feature_cat = torch.cat([feature_aspp, feature_shallow], 1)
        result = self.cat_conv(feature_cat)
        result = self.cls_conv(result)
        result = self.upsample4(result)
        return result


cfg = Config()
model = DeeplabV3Plus(cfg, backbone=resnet50_atrous)
# print(model)
x = torch.randn((2, 3, 128, 128), dtype=torch.float32)
y = model(x)
# print(y.shape)  # expected: torch.Size([2, 21, 128, 128])

cal_receptive_field.py

from deeplabv3plus import *

# layers skipped when tracing the receptive field: the parallel ASPP branches,
# the decoder convolutions, and the 1x1 downsample convolutions
not_handle = [
'aspp.branch1.0',
'aspp.branch2.0',
'aspp.branch3.0',
'aspp.branch4.0',
'aspp.branch5_conv',
'aspp.conv_cat.0',
'shortcut_conv.0',
'cat_conv.0',
'cat_conv.4',
'cls_conv',
'backbone.layer4.0.downsample.0',
'backbone.layer3.0.downsample.0',
'backbone.layer2.0.downsample.0',
'backbone.layer1.0.downsample.0'
]


acc_s = 0   # running product of strides
add_r = 0   # receptive-field increment contributed by the current layer
r = 0       # receptive field of the current layer
k, s, p, d = 0, 0, 0, 0
f0 = 1536   # assumed input resolution
f = 0       # feature-map size at the current layer
for name, module in model.named_modules():
    if name not in not_handle and isinstance(module, (nn.Conv2d, nn.MaxPool2d)):
        if isinstance(module, nn.Conv2d):
            k, s, p, d = module.kernel_size[0], module.stride[0], module.padding[0], module.dilation[0]
        if isinstance(module, nn.MaxPool2d):
            k, s, p, d = module.kernel_size, module.stride, module.padding, module.dilation
        # add_r = (effective kernel size - 1) * product of the strides of all earlier layers
        if acc_s != 0:
            if d != 1:
                add_r = (d*(k-1) + 1 - 1)*acc_s  # dilated: effective kernel is d*(k-1)+1
            else:
                add_r = (k-1)*acc_s
        # fold this layer's stride into the running product
        if acc_s == 0:
            acc_s = s
        else:
            acc_s *= s
        # the first layer's receptive field is its kernel size; later layers add add_r
        if r == 0:
            r = k
        else:
            r += add_r
        # standard output-size formula: (f + 2p - d*(k-1) - 1) // s + 1
        if f == 0:
            f = (f0 + 2*p - d*(k-1) - 1)//s + 1
        else:
            f = (f + 2*p - d*(k-1) - 1)//s + 1
        print(name, s, acc_s, add_r, r, f)

Output of running cal_receptive_field.py:

backbone.conv1 2 2 0 7 768
backbone.maxpool 2 4 4 11 384
backbone.layer1.0.conv1 1 4 0 11 384  
backbone.layer1.0.conv2 1 4 8 19 384  
backbone.layer1.0.conv3 1 4 0 19 384  
backbone.layer1.1.conv1 1 4 0 19 384  
backbone.layer1.1.conv2 1 4 8 27 384  
backbone.layer1.1.conv3 1 4 0 27 384  
backbone.layer1.2.conv1 1 4 0 27 384  
backbone.layer1.2.conv2 1 4 8 35 384  
backbone.layer1.2.conv3 1 4 0 35 384  
backbone.layer2.0.conv1 1 4 0 35 384  
backbone.layer2.0.conv2 2 8 8 43 192  
backbone.layer2.0.conv3 1 8 0 43 192  
backbone.layer2.1.conv1 1 8 0 43 192  
backbone.layer2.1.conv2 1 8 16 59 192 
backbone.layer2.1.conv3 1 8 0 59 192  
backbone.layer2.2.conv1 1 8 0 59 192  
backbone.layer2.2.conv2 1 8 16 75 192 
backbone.layer2.2.conv3 1 8 0 75 192  
backbone.layer2.3.conv1 1 8 0 75 192  
backbone.layer2.3.conv2 1 8 16 91 192 
backbone.layer2.3.conv3 1 8 0 91 192  
backbone.layer3.0.conv1 1 8 0 91 192  
backbone.layer3.0.conv2 2 16 16 107 96
backbone.layer3.0.conv3 1 16 0 107 96 
backbone.layer3.1.conv1 1 16 0 107 96 
backbone.layer3.1.conv2 1 16 32 139 96
backbone.layer3.1.conv3 1 16 0 139 96 
backbone.layer3.2.conv1 1 16 0 139 96
backbone.layer3.2.conv2 1 16 32 171 96
backbone.layer3.2.conv3 1 16 0 171 96
backbone.layer3.3.conv1 1 16 0 171 96
backbone.layer3.3.conv2 1 16 32 203 96
backbone.layer3.3.conv3 1 16 0 203 96
backbone.layer3.4.conv1 1 16 0 203 96
backbone.layer3.4.conv2 1 16 32 235 96
backbone.layer3.4.conv3 1 16 0 235 96
backbone.layer3.5.conv1 1 16 0 235 96
backbone.layer3.5.conv2 1 16 32 267 96
backbone.layer3.5.conv3 1 16 0 267 96
backbone.layer4.0.conv1 1 16 0 267 96
backbone.layer4.0.conv2 1 16 32 299 96
backbone.layer4.0.conv3 1 16 0 299 96
backbone.layer4.1.conv1 1 16 0 299 96
backbone.layer4.1.conv2 1 16 64 363 96
backbone.layer4.1.conv3 1 16 0 363 96
backbone.layer4.2.conv1 1 16 0 363 96
backbone.layer4.2.conv2 1 16 32 395 96
backbone.layer4.2.conv3 1 16 0 395 96
backbone.layer5.0.conv1 1 16 0 395 96
backbone.layer5.0.conv2 1 16 32 427 96
backbone.layer5.0.conv3 1 16 0 427 96
backbone.layer5.1.conv1 1 16 0 427 96
backbone.layer5.1.conv2 1 16 64 491 96
backbone.layer5.1.conv3 1 16 0 491 96
backbone.layer5.2.conv1 1 16 0 491 96
backbone.layer5.2.conv2 1 16 32 523 96
backbone.layer5.2.conv3 1 16 0 523 96
backbone.layer6.0.conv1 1 16 0 523 96
backbone.layer6.0.conv2 1 16 32 555 96
backbone.layer6.0.conv3 1 16 0 555 96
backbone.layer6.1.conv1 1 16 0 555 96
backbone.layer6.1.conv2 1 16 64 619 96
backbone.layer6.1.conv3 1 16 0 619 96
backbone.layer6.2.conv1 1 16 0 619 96
backbone.layer6.2.conv2 1 16 32 651 96
backbone.layer6.2.conv3 1 16 0 651 96

Explanation of the columns (s, acc_s, add_r, r, f):
s: the stride of this layer; acc_s: the cumulative product of the strides of the traced conv/maxpool layers (excluding the downsample convs) from the input through this layer; add_r: (this layer's effective kernel_size - 1) times the stride product up to the previous layer; r: this layer's receptive field; f: this layer's feature-map size.
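
As a cross-check, the rf_iterative sketch from earlier in this post reproduces the first two printed rows:

print(rf_iterative([7], [2]))        # 7  -> backbone.conv1 (k=7, s=2)
print(rf_iterative([7, 3], [2, 2]))  # 11 -> backbone.maxpool (k=3, s=2)
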
Explanation of r = previous layer's r + add_r:
Because

r_0 = \sum_{l=1}^{L} ((k_l - 1) \prod_{i=0}^{l-1} s_i) + 1

with L = 1: r_0 = (k_1 - 1) * s_0 + 1
with L = 2: r_0 = (k_1 - 1) * s_0 + (k_2 - 1) * s_0 * s_1 + 1
with L = 3: r_0 = (k_1 - 1) * s_0 + (k_2 - 1) * s_0 * s_1 + (k_3 - 1) * s_0 * s_1 * s_2 + 1

That is, every time L increases by 1, i.e. we go one layer deeper, r_0 grows by exactly add_r.
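
A spot-check, with arbitrary example values, that the L = 3 expansion matches the recursion (reusing the rf_recursive sketch from above):

k1, k2, k3 = 7, 3, 3
s1, s2, s3 = 2, 2, 1
s0 = 1  # by the i-from-0 convention
expanded = (k1 - 1) * s0 + (k2 - 1) * s0 * s1 + (k3 - 1) * s0 * s1 * s2 + 1
print(expanded, rf_recursive([k1, k2, k3], [s1, s2, s3]))  # 19 19
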
Why the receptive fields of the downsample layers are not printed:
In this model the downsample convs have kernel_size 1, so their add_r is 0 and their receptive field is the same as before the downsample, which is already printed above.
Receptive fields of the ASPP branches:

From the printout above, the last backbone layer has s, acc_s, add_r, r, f = 1, 16, 0, 651, 96.
branch1:
add_r = (1-1)*acc_s = 0
r = r + add_r = 651
branch2:
a dilated convolution, so k changes first (same below):
k = \alpha*(k-1)+1 = 6*2+1 = 13
r = r + add_r = 651 + (13-1)*16 = 843
branch3:
k = 12*2+1 = 25
r = r + add_r = 651 + (25-1)*16 = 1035
branch4:
k = 18*2+1 = 37
r = r + add_r = 651 + (37-1)*16 = 1227
branch5:
global average pooling, equivalent to a conv with kernel_size 96 and stride 96:
r = r + add_r = 651 + (96-1)*16 = 2171
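
The same arithmetic as a short loop (a sketch; the effective kernel sizes come from the effective_kernel helper above, and 96 is the global-pooling window):

r, acc_s = 651, 16  # r and acc_s of the last backbone layer, from the printout
for name, k_eff in [('branch1', 1), ('branch2', 13), ('branch3', 25),
                    ('branch4', 37), ('branch5_avg', 96)]:
    print(name, r + (k_eff - 1) * acc_s)
# branch1 651, branch2 843, branch3 1035, branch4 1227, branch5_avg 2171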

ref

Computing Receptive Fields of Convolutional Neural Networks
How to Calculate Receptive Field Size in CNN
