
PyTorch: trying to make a NN receives an invalid combination of arguments

  • MNM  ·  asked 6 years ago

    I am trying to build my first NN with PyTorch and I have run into a problem:

    TypeError: new() received an invalid combination of arguments - got (float, int, int, int), but expected one of:
     * (torch.device device)
     * (torch.Storage storage)
     * (Tensor other)
     * (tuple of ints size, torch.device device)
     * (object data, torch.device device)

    Now I know what this means: I am not passing the right types to the method or to the init. But I cannot work out what I should be passing, because everything looks right to me.

    import datetime

    def main():
        # Get the current time and date
        now = datetime.datetime.now()
        hourGlassToStack = 2  # Hourglasses to stack
        numModules = 2        # Residual modules for each hourglass
        numFeats = 256        # Number of features in each hourglass
        numRegModules = 2     # Depth regression modules

        print("Creating Model")
        model = HourglassNet3D(hourGlassToStack, numModules, numFeats, numRegModules).cuda()
        print("Model Created")


    This is the main method that creates the model. It then calls the classes below.

    import torch.nn as nn

    class HourglassNet3D(nn.Module):

        def __init__(self, nStack, nModules, nFeats, nRegModules):
            super(HourglassNet3D, self).__init__()
            self.nStack = nStack
            self.nModules = nModules
            self.nFeats = nFeats
            self.nRegModules = nRegModules
            self.conv1_ = nn.Conv2d(3, 64, bias = True, kernel_size = 7, stride = 2, padding = 3)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace = True)
            self.r1 = Residual(64, 128)
            self.maxpool = nn.MaxPool2d(kernel_size = 2, stride = 2)
            self.r4 = Residual(128, 128)
            self.r5 = Residual(128, self.nFeats)

            _hourglass, _Residual, _lin_, _tmpOut, _ll_, _tmpOut_, _reg_ = [], [], [], [], [], [], []
            for i in range(self.nStack):
                _hourglass.append(Hourglass(4, self.nModules, self.nFeats))
                for j in range(self.nModules):
                    _Residual.append(Residual(self.nFeats, self.nFeats))
                lin = nn.Sequential(nn.Conv2d(self.nFeats, self.nFeats, bias = True, kernel_size = 1, stride = 1),
                                    nn.BatchNorm2d(self.nFeats), self.relu)
                _lin_.append(lin)
                _tmpOut.append(nn.Conv2d(self.nFeats, 16, bias = True, kernel_size = 1, stride = 1))
                _ll_.append(nn.Conv2d(self.nFeats, self.nFeats, bias = True, kernel_size = 1, stride = 1))
                _tmpOut_.append(nn.Conv2d(16, self.nFeats, bias = True, kernel_size = 1, stride = 1))

            for i in range(4):
                for j in range(self.nRegModules):
                    _reg_.append(Residual(self.nFeats, self.nFeats))

            self.hourglass = nn.ModuleList(_hourglass)
            self.Residual = nn.ModuleList(_Residual)
            self.lin_ = nn.ModuleList(_lin_)
            self.tmpOut = nn.ModuleList(_tmpOut)
            self.ll_ = nn.ModuleList(_ll_)
            self.tmpOut_ = nn.ModuleList(_tmpOut_)
            self.reg_ = nn.ModuleList(_reg_)

            self.reg = nn.Linear(4 * 4 * self.nFeats, 16)


    This in turn calls the Residual class:

    class Residual(nn.Module):
        # Set the number of inputs and outputs for each layer
        def __init__(self, numIn, numOut):
            super(Residual, self).__init__()
            self.numIn = numIn
            self.numOut = numOut
            self.bn = nn.BatchNorm2d(self.numIn)
            self.relu = nn.ReLU(inplace = True)
            self.conv1 = nn.Conv2d(self.numIn, self.numOut / 2, bias = True, kernel_size = 1)
            self.bn1 = nn.BatchNorm2d(self.numOut / 2)
            self.conv2 = nn.Conv2d(self.numOut / 2, self.numOut / 2, bias = True, kernel_size = 3, stride = 1, padding = 1)
            self.bn2 = nn.BatchNorm2d(self.numOut / 2)
            self.conv3 = nn.Conv2d(self.numOut / 2, self.numOut, bias = True, kernel_size = 1)

            if self.numIn != self.numOut:
                self.conv4 = nn.Conv2d(self.numIn, self.numOut, bias = True, kernel_size = 1)


    All of this looks fine to me, but I cannot tell what I should be passing if I am doing it wrong. Thanks for the help.

    1 Answer

  •  dennlinger  ·  answered 6 years ago

    You might want to be careful with what you are passing to the convolutional layers in your Residual class. By default, Python 3 turns any division with / into a float, even when the result is a whole number.
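
    To make the failure concrete, here is the division behavior in isolation, together with a one-line reproduction (a sketch; the layer sizes are illustrative, and newer PyTorch versions may word the error slightly differently):

    import torch.nn as nn

    print(128 / 2)       # 64.0 -> true division always returns a float
    print(128 // 2)      # 64   -> floor division keeps it an int
    print(int(128 / 2))  # 64   -> explicit cast, as used in the fix below

    # A float channel count makes PyTorch allocate the weight tensor via
    # Tensor.new(64.0, 128, 1, 1) - the "got (float, int, int, int)"
    # combination from the traceback:
    nn.Conv2d(128, 128 / 2, kernel_size = 1)  # raises the TypeError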

    Try casting your variables back to int and see whether that helps. The fixed code for Residual:

    class Residual(nn.Module):
        # Set the number of inputs and outputs for each layer

        def __init__(self, numIn, numOut):
            super(Residual, self).__init__()
            self.numIn = numIn
            self.numOut = numOut
            self.bn = nn.BatchNorm2d(self.numIn)
            self.relu = nn.ReLU(inplace = True)
            self.conv1 = nn.Conv2d(self.numIn, int(self.numOut / 2), bias = True, kernel_size = 1)
            self.bn1 = nn.BatchNorm2d(int(self.numOut / 2))
            self.conv2 = nn.Conv2d(int(self.numOut / 2), int(self.numOut / 2), bias = True, kernel_size = 3, stride = 1, padding = 1)
            self.bn2 = nn.BatchNorm2d(int(self.numOut / 2))
            self.conv3 = nn.Conv2d(int(self.numOut / 2), self.numOut, bias = True, kernel_size = 1)

            if self.numIn != self.numOut:
                self.conv4 = nn.Conv2d(self.numIn, self.numOut, bias = True, kernel_size = 1)
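
    A stylistic alternative: Python's floor-division operator // returns an int whenever both operands are ints, which avoids the casts entirely, for example:

    self.conv1 = nn.Conv2d(self.numIn, self.numOut // 2, bias = True, kernel_size = 1)

    With either variant, constructing the model in main() should no longer raise the TypeError.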