I ran into this problem while implementing the vectorized SVM gradient for cs231n assignment 1.
Here is an example:
import numpy as np

ary = np.array([[1, -9, 0],
                [1, 2, 3],
                [0, 0, 0]])
ary[[0, 1]] += np.ones((2, 3), dtype='int')
which outputs:
array([[ 2, -8,  1],
       [ 2,  3,  4],
       [ 0,  0,  0]])
Everything works fine until the rows aren't unique (here ary is reset to its original values first):
ary[[0, 1, 1]] += np.ones((3, 3), dtype='int')
No error is thrown, but the output is very strange:
array([[ 2, -8,  1],
       [ 2,  3,  4],
       [ 0,  0,  0]])
I think the second row should be [3, 4, 5], not [2, 3, 4].
The naive way I used to work around this is a for loop like this:
ary = np.array([[2, -8, 1],
                [2, 3, 4],
                [0, 0, 0]], dtype=float)
# the rows I want to change
rows = [0, 1, 2, 1, 0, 1]
# the change matrix
change = np.random.randn(6, 3)
for i, row in enumerate(rows):
    ary[row] += change[i]
So I really don't know how to vectorize this loop. Is there a better way to do it in NumPy?
And why doesn't this work:
ary[rows] += change
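For reference, here is a minimal sketch of the behavior I'd want, using the unbuffered ufunc form np.add.at, which I gather accumulates contributions for duplicate indices instead of applying them only once:

```python
import numpy as np

ary = np.array([[1, -9, 0],
                [1, 2, 3],
                [0, 0, 0]], dtype=float)
rows = [0, 1, 1]
change = np.ones((3, 3))

# np.add.at performs unbuffered in-place addition, so the
# repeated index 1 in `rows` is applied twice rather than once
np.add.at(ary, rows, change)
# row 1 goes from [1, 2, 3] to [3, 4, 5]
```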
In case anyone is curious why I want to do this, here is my implementation of the svm_loss_vectorized function, where I need to compute the gradient of the weights based on the labels y:
def svm_loss_vectorized(W, X, y, reg):
    """
    Structured SVM loss function, vectorized implementation.
    Inputs and outputs are the same as svm_loss_naive.
    """
    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero
    # transpose X and W
    # D means input dimensions, N means number of training examples
    # C means number of classes
    # X.shape will be (D, N)
    # W.shape will be (C, D)
    X = X.T
    W = W.T
    dW = dW.T
    num_train = X.shape[1]
    # transpose W_y so its shape is (D, N)
    W_y = W[y].T
    S_y = np.sum(W_y * X, axis=0)
    margins = np.dot(W, X) + 1 - S_y
    mask = np.array(margins > 0)
    # get the impact the num_train examples make on W's gradient:
    # only when the mask is positive does the training example
    # contribute to W's gradient
    dW_j = np.dot(mask, X.T)
    dW += dW_j
    mul_mask = np.sum(mask, axis=0, keepdims=True).T
    # dW[y] -= mul_mask * X.T
    dW_y = mul_mask * X.T
    for i, label in enumerate(y):
        dW[label] -= dW_y[i]
    loss = np.sum(margins * mask) - num_train
    loss /= num_train
    dW /= num_train
    # add regularization term
    loss += reg * np.sum(W * W)
    dW += reg * 2 * W
    dW = dW.T
    return loss, dW
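If np.add.at / np.subtract.at is the right tool, I imagine the inner loop over labels could be replaced like this (a sketch with made-up shapes C, D, N and random data standing in for my real inputs):

```python
import numpy as np

# Hypothetical shapes matching the function above: dW has shape
# (C, D) after the transpose, y has length N, dW_y has shape (N, D)
C, D, N = 4, 5, 6
rng = np.random.default_rng(0)
dW = rng.standard_normal((C, D))
y = rng.integers(0, C, size=N)
dW_y = rng.standard_normal((N, D))

# loop version, as in the function above
dW_loop = dW.copy()
for i, label in enumerate(y):
    dW_loop[label] -= dW_y[i]

# unbuffered vectorized version: np.subtract.at accumulates
# the contributions of repeated labels in y
dW_vec = dW.copy()
np.subtract.at(dW_vec, y, dW_y)

assert np.allclose(dW_loop, dW_vec)
```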