Python 3.5 / Windows 10 / TensorFlow-GPU 1.12 (GTX 1070)
Goal: build a convolutional autoencoder for 3-channel images.
Tutorial source:
https://towardsdatascience.com/autoencoders-introduction-and-implementation-3f40483b0a85
The tutorial uses the MNIST dataset; my images are larger and have 3 color channels, but I am trying to adapt it accordingly.
One point confuses me:
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
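To make the "Now 28x28x32" comment concrete, here is a minimal sketch of the usual convolution output-shape arithmetic in plain Python. The function name is my own, for illustration; it assumes TensorFlow's 'same'/'valid' padding semantics, where 'same' gives a spatial output of ceil(input / stride) and the channel dimension is just the `filters` argument:

```python
import math

def conv2d_output_shape(height, width, kernel, stride=1, padding="same", filters=1):
    """Output shape [H, W, C] of a 2-D convolution (hypothetical helper)."""
    if padding == "same":
        # 'same' pads so the spatial size only shrinks by the stride
        out_h = math.ceil(height / stride)
        out_w = math.ceil(width / stride)
    else:
        # 'valid': no padding, the kernel must fit entirely inside the image
        out_h = (height - kernel) // stride + 1
        out_w = (width - kernel) // stride + 1
    # the channel (z) dimension is simply the requested number of filters
    return [out_h, out_w, filters]

print(conv2d_output_shape(28, 28, kernel=3, stride=1, padding="same", filters=32))
# [28, 28, 32]
```

Under these semantics the spatial size and the filter count are set independently, which is the relationship I am trying to reconcile below.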
[28, 28, 1] is the width/height of an MNIST image plus its single grayscale channel.
I understand the kernel size to be equal to the filter size -- is that correct?
(https://blog.xrds.acm.org/2016/06/convolutional-neural-networks-cnns-illustrated-explanation/)
Using that understanding of kernels/filters and strides, my picture of how the feature map is generated is as follows. If I do not pad the image above, I arrive at:
filter_ct_a, out_shape_a, padding_a = calc_num_filters(shapeXY=[5,5,1], filterXY=[3,3], strideXY=[1,1])
print("# Filters: {}\nNew Shape: {}\n Padding : {}".format(filter_ct_a, out_shape_a, padding_a))
# Filters: 9
New Shape: [3, 3, 1]
Padding : [0, 0]
Allowing for padding:
filter_ct_a, out_shape_a, padding_a = calc_num_filters(shapeXY=[5,5,1], filterXY=[3,3], strideXY=[1,1], paddingXY=[1,1])
print("# Filters: {}\nNew Shape: {}\n Padding : {}".format(filter_ct_a, out_shape_a, padding_a))
# Filters: 25
New Shape: [5, 5, 1]
Padding : [1, 1]
I interpret the number of filters as a function of the image size, padding, stride, and kernel size. (Is that correct?)
(See: How to interpret TensorFlow's convolution filter and striding parameters?)
My dummy calculation of this relationship is as follows:
def calc_num_filters(shapeXY, filterXY, strideXY=[1,1], paddingXY=[0,0]):
    paddingX = paddingXY[0]
    while True:
        filtersX = 1 + ((shapeXY[0] + 2*paddingX - filterXY[0]) / strideXY[0])
        if filtersX == int(filtersX):  # and filtersX % 2 == 0:
            break
        paddingX += 1
        if paddingX >= shapeXY[0]:
            raise ValueError("incompatible filter shape X")
    paddingY = paddingXY[1]
    while True:
        filtersY = 1 + ((shapeXY[1] + 2*paddingY - filterXY[1]) / strideXY[1])
        if filtersY == int(filtersY):  # and filtersY % 2 == 0:
            break
        paddingY += 1
        if paddingY >= shapeXY[1]:
            raise ValueError("incompatible filter shape Y")
    return (int(filtersX * filtersY),
            [int(filtersX), int(filtersY), shapeXY[2]],
            [paddingX, paddingY])
In the tutorial example, conv1 changes the tensor size from [28, 28, 1] to [28, 28, 32]. I noticed that tf.layers.conv2d seems to make the channel (or z) dimension match the filters value passed, in all cases.
I do not see how these values are compatible: how does a 28x28 image with kernel_size=(3,3) result in 32 filters?
Assuming a stride of [1,1]:
filter_ct_a, out_shape_a, padding_a = calc_num_filters(shapeXY=[28,28,1], filterXY=[3,3], strideXY=[1,1])
print("# Filters: {}\nNew Shape: {}\n Padding : {}".format(filter_ct_a, out_shape_a, padding_a))
# Filters: 676
New Shape: [26, 26, 1]
Padding : [0, 0]
Using strideXY=[3,3]:
filter_ct_a, out_shape_a, padding_a = calc_num_filters(shapeXY=[28,28,1], filterXY=[3,3], strideXY=[3,3])
print("# Filters: {}\nNew Shape: {}\n Padding : {}".format(filter_ct_a, out_shape_a, padding_a))
# Filters: 100
New Shape: [10, 10, 1]
Padding : [1, 1]
If the filter count, kernel size, stride, and image size are related in the way I understand them, why does TensorFlow require the filter count instead of deriving it?
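As a sanity check on what I think `filters` actually parameterizes (my current understanding, sketched as plain arithmetic): each of the 32 filters would be an independent kernel_h x kernel_w x in_channels block of weights, plus one bias per filter (tf.layers.conv2d defaults to use_bias=True). The helper name here is my own:

```python
def conv2d_param_count(kernel, in_channels, filters):
    """Weight count of a square-kernel conv layer: one k*k*in_channels
    kernel per output filter, plus one bias per filter."""
    return kernel * kernel * in_channels * filters + filters

# the tutorial's conv1: 3x3 kernel, 1 input channel, 32 filters
print(conv2d_param_count(kernel=3, in_channels=1, filters=32))
# 320
```

If this is right, the filter count is a free choice of output depth, independent of the spatial arithmetic my calc_num_filters performs, and I would like that confirmed.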