
"Too many values to unpack (expected 2)" error

  • Black Swan  · Tech community  · 1 year ago

    I am trying to use PSO to optimize the hyperparameters of a CNN, but I cannot fix the error "too many values to unpack (expected 2)", and I am not sure what I am missing in the fitness function. Main code:

    # Define the PSO optimization
    import numpy as np
    from pyswarms.single import GlobalBestPSO

    dimensions = 3  # Number of hyperparameters to optimize
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    lower_limit = np.array([1, 128, 512])
    upper_limit = np.array([10, 256, 1024])
    bounds = (lower_limit, upper_limit)
    optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
    best_hyperparameters, _ = optimizer.optimize(fitness_function, iters=20) 
    

    The detailed error from the PSO part:

    2023-07-06 19:09:25,049 - pyswarms.single.global_best - INFO - Optimize for 20 iters with {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    pyswarms.single.global_best:   0%|          |0/20
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-6-2b66b04d1847> in <cell line: 11>()
          9 bounds = (lower_limit, upper_limit)
         10 optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
    ---> 11 best_hyperparameters, _ = optimizer.optimize(fitness_function, iters=20)
    
    2 frames
    <ipython-input-4-fecbe8b4fed7> in fitness_function(params)
          1 # Define the fitness function for PSO optimization
          2 def fitness_function(params):
    ----> 3     hidden_channels, kernel_size, stride = params
          4 
          5     # Set the device
    
    ValueError: too many values to unpack (expected 3)
    
    at this line 6: `optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)`
    

    I do not know why this error occurs; I tried a few solutions, but nothing helped.

    0 replies  |  1 year ago
        1
  •   Debi Prasad    1 year ago

    Your error is due to the way you are passing the bounds variable. As per the documentation at the time of writing, the bounds variable must be a tuple with 2 values, i.e. bounds=(lower_limit, upper_limit), where your lower and upper limits are each a numpy.ndarray or a list of bounds. So this version of the code runs fine.

    import pyswarms as ps

    dimensions = 3  # Number of hyperparameters to optimize
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    bounds = ([16, 3, 1], [64, 7, 3])  # Bounds for each hyperparameter
    ## The line above is the one that was previously in the wrong format and caused the error
    optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
    best_cost, best_pos = optimizer.optimize(fitness_function, iters=20)
    

    As you can see, the bounds variable should have shape (2,), but in your input it had shape (3,). That is why you got the error "ValueError: too many values to unpack (expected 2)" at line 6: optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds), since it expects to unpack 2 values but you gave it 3.
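    To make the expected structure concrete, here is a minimal sketch (assuming only numpy is installed; the variable names follow the question):

    import numpy as np

    lower_limit = np.array([1, 128, 512])    # one lower bound per hyperparameter
    upper_limit = np.array([10, 256, 1024])  # one upper bound per hyperparameter

    bounds = (lower_limit, upper_limit)  # a 2-tuple: (lower, upper)
    print(len(bounds))        # 2   -- pyswarms unpacks exactly two values here
    print(lower_limit.shape)  # (3,) -- one entry per dimension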

    P.S. - You may be new to the community, but do try to go through this, as it helps us reproduce bugs/queries easily and makes them easy to follow. Happy coding!!

        2
  •   Black Swan    1 year ago

    The fitness function for PSO is as follows:

    # Define the fitness function for PSO optimization
    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.utils.data import DataLoader
    from torchvision.datasets import CIFAR10
    from torchvision.transforms import ToTensor

    def fitness_function(params):
        hidden_channels, kernel_size, stride = params

        # Set the device
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        # Create the CNN model (CNN is the model class defined elsewhere in the project)
        model = CNN(input_channels=3, output_classes=10, hidden_channels=hidden_channels,
                    kernel_size=kernel_size, stride=stride)
        model.to(device)

        # Define the loss function and optimizer
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(model.parameters(), lr=0.001)

        # Load the CIFAR10 dataset
        train_dataset = CIFAR10(root='./data', train=True, download=True, transform=ToTensor())
        train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

        # Train the model
        num_epochs = 10
        for epoch in range(num_epochs):
            for images, labels in train_loader:
                images, labels = images.to(device), labels.to(device)

                optimizer.zero_grad()
                outputs = model(images)
                loss = criterion(outputs, labels)
                loss.backward()
                optimizer.step()

        # Evaluate the model
        test_dataset = CIFAR10(root='./data', train=False, download=True, transform=ToTensor())
        test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

        model.eval()
        correct = 0
        total = 0
        with torch.no_grad():
            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)

                outputs = model(images)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

        accuracy = correct / total

        # PSO minimizes the objective, so return the negative accuracy as the fitness value
        fitness = -accuracy
        return fitness
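
    One detail worth knowing about pyswarms (not shown in the question, but it matches the "expected 3" traceback above): GlobalBestPSO.optimize calls the objective with the positions of the whole swarm at once, an array of shape (n_particles, dimensions), and expects back an array of n_particles costs. So the objective usually loops over the rows itself. A minimal sketch, with dummy_accuracy as a hypothetical placeholder for the real train-and-evaluate step:

    import numpy as np

    def swarm_fitness(swarm):
        # swarm has shape (n_particles, dimensions); unpacking it directly into
        # three names is what raises "too many values to unpack (expected 3)"
        costs = []
        for particle in swarm:  # each row has shape (dimensions,)
            hidden_channels, kernel_size, stride = particle.astype(int)
            # PSO minimizes, so negate the accuracy of this combination
            costs.append(-dummy_accuracy(hidden_channels, kernel_size, stride))
        return np.array(costs)  # shape (n_particles,), one cost per particle

    def dummy_accuracy(hidden_channels, kernel_size, stride):
        # hypothetical placeholder so this sketch runs on its own; the real code
        # would train and score the CNN as in fitness_function above
        return 1.0 / (1.0 + hidden_channels + kernel_size + stride)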