
Low-quality classifier in TFlearn?

  •  1
  •  SamwellTarly  ·  7 years ago

    I'm new to machine learning and am trying out TFlearn because it's simple.

    My goal is to train the system to predict which direction a point lies in.

    For example, given the points (50,50) and (51,51), the system must predict the direction NE (north-east); given (50,50) and (49,49), it should predict SW (south-west).

    Input: X1, Y1, X2, Y2, label
    Labels run from 0 to 7, one for each of the 8 directions.
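
    Since the data is synthetic, here is a minimal sketch of how such a CSV could be generated. The exact 0-7 label order is not given in the question, so the LABELS mapping and the directions.csv filename below are assumptions for illustration:

    import csv
    import random
    
    #Hypothetical label order: N, NE, E, SE, S, SW, W, NW (an assumption)
    LABELS = {(0, 1): 0, (1, 1): 1, (1, 0): 2, (1, -1): 3,
              (0, -1): 4, (-1, -1): 5, (-1, 0): 6, (-1, 1): 7}
    
    with open('directions.csv', 'w', newline='') as f:
        w = csv.writer(f)
        for _ in range(100000):
            x1, y1 = random.randint(40, 60), random.randint(40, 60)
            x2, y2 = random.randint(40, 60), random.randint(40, 60)
            dx = max(-1, min(1, x2 - x1))   #Reduce displacement to its sign
            dy = max(-1, min(1, y2 - y1))
            if (dx, dy) == (0, 0):
                continue                    #Identical points: no direction
            w.writerow([x1, y1, x2, y2, LABELS[(dx, dy)]])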

    Here is the small piece of code I wrote:

    from __future__ import print_function
    import numpy as np
    import tflearn
    import tensorflow as tf
    import time
    from tflearn.data_utils import load_csv
    
    #Sample input row: 50,50,51,51,5
    filename = "directions.csv"     #Hypothetical path to the CSV of samples
    data, labels = load_csv(filename, target_column=4,
                            categorical_labels=True, n_classes=8)
    
    my_optimizer = tflearn.SGD(learning_rate=0.1)
    net = tflearn.input_data(shape=[None, 4])
    net = tflearn.fully_connected(net, 32) #input 4, output 32
    net = tflearn.fully_connected(net, 32) #input 32, output 32
    net = tflearn.fully_connected(net, 8, activation='softmax')
    net = tflearn.regression(net,optimizer=my_optimizer)
    
    model = tflearn.DNN(net)
    
    model.fit(data, labels, n_epoch=100, batch_size=100000, show_metric=True)
    
    model.save("direction-classifier.tfl")
    
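    As a quick sanity check after training, the saved model can be reloaded and queried. A minimal sketch, assuming the same net definition has been rebuilt first (DNN.load and DNN.predict are standard TFlearn methods):

    model = tflearn.DNN(net)
    model.load("direction-classifier.tfl")
    probs = model.predict([[50, 50, 51, 51]])   #Softmax output over the 8 classes
    print(np.argmax(probs[0]))                  #Most likely direction label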

    The problem I'm facing is that even after feeding roughly 40 million input samples, the system's accuracy is as low as 20%.
    I have restricted the inputs to 40 < x < 60 and 40 < y < 60.

    I can't tell whether I'm overfitting the samples, because the accuracy never got high at any point during training over the entire 40 million inputs.

    Why is the accuracy so low for such a simple example?

    I have included the output of the first 25 epochs below.

    --
    Training Step: 100000  | total loss: 6.33983 | time: 163.327s
    | SGD | epoch: 001 | loss: 6.33983 - acc: 0.0663 -- iter: 999999/999999
    --
    Training Step: 200000  | total loss: 6.84055 | time: 161.981s
    | SGD | epoch: 002 | loss: 6.84055 - acc: 0.1568 -- iter: 999999/999999
    --
    Training Step: 300000  | total loss: 5.90203 | time: 158.853s
    | SGD | epoch: 003 | loss: 5.90203 - acc: 0.1426 -- iter: 999999/999999
    --
    Training Step: 400000  | total loss: 5.97782 | time: 157.607s
    | SGD | epoch: 004 | loss: 5.97782 - acc: 0.1465 -- iter: 999999/999999
    --
    Training Step: 500000  | total loss: 5.97215 | time: 155.929s
    | SGD | epoch: 005 | loss: 5.97215 - acc: 0.1234 -- iter: 999999/999999
    --
    Training Step: 600000  | total loss: 6.86967 | time: 157.299s
    | SGD | epoch: 006 | loss: 6.86967 - acc: 0.1230 -- iter: 999999/999999
    --
    Training Step: 700000  | total loss: 6.10330 | time: 158.137s
    | SGD | epoch: 007 | loss: 6.10330 - acc: 0.1242 -- iter: 999999/999999
    --
    Training Step: 800000  | total loss: 5.81901 | time: 157.464s
    | SGD | epoch: 008 | loss: 5.81901 - acc: 0.1464 -- iter: 999999/999999
    --
    Training Step: 900000  | total loss: 7.09744 | time: 157.486s
    | SGD | epoch: 009 | loss: 7.09744 - acc: 0.1359 -- iter: 999999/999999
    --
    Training Step: 1000000  | total loss: 7.19259 | time: 158.369s
    | SGD | epoch: 010 | loss: 7.19259 - acc: 0.1248 -- iter: 999999/999999
    --
    Training Step: 1100000  | total loss: 5.60177 | time: 157.221s
    | SGD | epoch: 011 | loss: 5.60177 - acc: 0.1378 -- iter: 999999/999999
    --
    Training Step: 1200000  | total loss: 7.16676 | time: 158.607s
    | SGD | epoch: 012 | loss: 7.16676 - acc: 0.1210 -- iter: 999999/999999
    --
    Training Step: 1300000  | total loss: 6.19163 | time: 163.711s
    | SGD | epoch: 013 | loss: 6.19163 - acc: 0.1635 -- iter: 999999/999999
    --
    Training Step: 1400000  | total loss: 7.46101 | time: 162.091s
    | SGD | epoch: 014 | loss: 7.46101 - acc: 0.1216 -- iter: 999999/999999
    --
    Training Step: 1500000  | total loss: 7.78055 | time: 158.468s
    | SGD | epoch: 015 | loss: 7.78055 - acc: 0.1122 -- iter: 999999/999999
    --
    Training Step: 1600000  | total loss: 6.03101 | time: 158.251s
    | SGD | epoch: 016 | loss: 6.03101 - acc: 0.1103 -- iter: 999999/999999
    --
    Training Step: 1700000  | total loss: 5.59769 | time: 158.083s
    | SGD | epoch: 017 | loss: 5.59769 - acc: 0.1182 -- iter: 999999/999999
    --
    Training Step: 1800000  | total loss: 5.45591 | time: 158.088s
    | SGD | epoch: 018 | loss: 5.45591 - acc: 0.0868 -- iter: 999999/999999
    --
    Training Step: 1900000  | total loss: 6.54951 | time: 157.755s
    | SGD | epoch: 019 | loss: 6.54951 - acc: 0.1353 -- iter: 999999/999999
    --
    Training Step: 2000000  | total loss: 6.18566 | time: 157.408s
    | SGD | epoch: 020 | loss: 6.18566 - acc: 0.0551 -- iter: 999999/999999
    --
    Training Step: 2100000  | total loss: 4.95146 | time: 157.572s
    | SGD | epoch: 021 | loss: 4.95146 - acc: 0.1114 -- iter: 999999/999999
    --
    Training Step: 2200000  | total loss: 5.97208 | time: 157.279s
    | SGD | epoch: 022 | loss: 5.97208 - acc: 0.1277 -- iter: 999999/999999
    --
    Training Step: 2300000  | total loss: 6.75645 | time: 157.201s
    | SGD | epoch: 023 | loss: 6.75645 - acc: 0.1507 -- iter: 999999/999999
    --
    Training Step: 2400000  | total loss: 7.04119 | time: 157.346s
    | SGD | epoch: 024 | loss: 7.04119 - acc: 0.1512 -- iter: 999999/999999
    --
    Training Step: 2500000  | total loss: 5.95451 | time: 157.722s
    | SGD | epoch: 025 | loss: 5.95451 - acc: 0.1421 -- iter: 999999/999999
    
    2 Answers  |  7 years ago
        1
  •  1
  •  user4172036  ·  7 years ago

    As discussed in my comment above, the code below uses an MLP helper class I created. The class is implemented in TensorFlow and follows the scikit-learn fit, predict, and score interface.

    np.unique is used to find the number of class labels in the generated data, since it can vary (some directions may be missing). An empty-string label is also included for the case where the start and end points are identical.
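
    For instance, np.unique returns the sorted distinct labels, so the class count adapts to whichever directions actually occur in the sample:

    >>> import numpy as np
    >>> np.unique(np.array(['NE', 'SW', '', 'NE']))
    array(['', 'NE', 'SW'], dtype='<U2')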

    Code

    Using the code below, I was able to achieve 100% cross-validation accuracy on some runs.

    import numpy as np
    from sklearn.model_selection import ShuffleSplit
    #MLPC is the author's TensorFlow MLP helper class, linked above
    
    #Dictionary to look up a direction name from the sign of the displacement
    DM = {(-1, -1):'SW', (-1, 0):'W', (-1,  1):'NW', (0,  1):'N', 
          ( 1,  1):'NE', ( 1, 0):'E', ( 1, -1):'SE', (0, -1):'S',
          ( 0,  0):''}
    
    NR = 4096       #Number of rows in sample matrix
    A1 = np.random.randint(40, 61, size = (NR, 2))      #Random starting point
    A2 = np.random.randint(40, 61, size = (NR, 2))      #Random ending point
    A = np.hstack([A1, A2])         #Concat start and end point as feature vector
    #Create label from direction vector
    Y = np.array([DM[(x, y)] for x, y in (A2 - A1).clip(-1, 1)])
    NC = len(np.unique(Y))          #Number of classes
    ss = ShuffleSplit(n_splits = 1)
    trn, tst = next(ss.split(A))    #Make a train/test split for cross-validation
    #%% Create and train Multi-Layer Perceptron for Classification (MLPC)
    l = [4, 6, 6, NC]       #Neuron counts in each layer
    mlpc = MLPC(l, batchSize = 64, maxIter = 128, verbose = True)
    mlpc.fit(A[trn], Y[trn])
    s1 = mlpc.score(A[trn], Y[trn])     #Training accuracy
    s2 = mlpc.score(A[tst], Y[tst])     #Testing accuracy
    s3 = mlpc.score(A, Y)               #Total accuracy
    print('Trn: {:05f}\tTst: {:05f}\tAll: {:05f}'.format(s1, s2, s3))
    
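    The key step is (A2 - A1).clip(-1, 1), which reduces each displacement vector to its sign so it can be looked up in DM. A worked example, with values chosen for illustration:

    #Start (50, 50), end (53, 49): displacement (3, -1) clips to (1, -1)
    d = (np.array([53, 49]) - np.array([50, 50])).clip(-1, 1)
    print(DM[tuple(d)])     #Prints: SE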

    Here is a sample run of the above code on my machine:

    Iter     1            2.59423236 (Batch Size:    64)
    Iter     2            2.25392553 (Batch Size:    64)
    Iter     3            2.02569708 (Batch Size:    64)
    ...
    Iter    12            1.53575111 (Batch Size:    64)
    Iter    13            1.47963311 (Batch Size:    64)
    Iter    14            1.42776408 (Batch Size:    64)
    ...
    Iter    83            0.23911642 (Batch Size:    64)
    Iter    84            0.22893350 (Batch Size:    64)
    Iter    85            0.23644384 (Batch Size:    64)
    ...
    Iter    94            0.21170238 (Batch Size:    64)
    Iter    95            0.20718799 (Batch Size:    64)
    Iter    96            0.21230888 (Batch Size:    64)
    ...
    Iter   126            0.17334313 (Batch Size:    64)
    Iter   127            0.16970796 (Batch Size:    64)
    Iter   128            0.15931854 (Batch Size:    64)
    Trn: 0.995659   Tst: 1.000000   All: 0.996094
    
        2
  •  1
  •  SamwellTarly  ·  7 years ago

    It turns out the optimizer was causing all the problems. After removing the custom optimizer, the loss started decreasing properly and the accuracy improved to 99%.

    The following two lines had to be modified:

    my_optimizer = tflearn.SGD(learning_rate=0.1)
    net = tflearn.regression(net,optimizer=my_optimizer)
    

    When they were replaced with

    net = tflearn.regression(net)
    

    the results were perfect.
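
    For context, TFlearn's regression layer defaults to the Adam optimizer with a learning rate of 0.001, which converges here where plain SGD at 0.1 did not. If SGD is preferred, one option to try is a smaller, decaying learning rate (a sketch, not tested on this data):

    #Sketch (assumption): smaller SGD rate with decay instead of a fixed 0.1
    my_optimizer = tflearn.SGD(learning_rate=0.01, lr_decay=0.96, decay_step=1000)
    net = tflearn.regression(net, optimizer=my_optimizer)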