While new algorithmic theory keeps emerging, a variety of deep learning frameworks have also come into view, such as Torch and Caffe. TensorFlow, Google's second-generation machine learning system, was open-sourced at the end of 2015 and has become a popular framework for machine learning algorithms. In this chapter we will learn how to use Canton, a lightweight tool built on top of TensorFlow.
Canton is a lightweight wrapper around TensorFlow that focuses on intuitive programmatic model building and weight sharing. It provides flexible ways to define, train, evaluate, and save computational models. Detailed documentation can be found in the project's docs.
First, create an input variable to feed the network:

import tensorflow as tf
import numpy as np
import canton as ct

# a random NHWC input: batch 1, 256x256, 3 channels
input_variable = tf.Variable(np.random.normal(loc=0, scale=1, size=[1,256,256,3]).astype('float32'))
Suppose we want to build a model with three convolutional layers, where:

- conv_0 has its own weights;
- conv_1 and conv_2 share weights.
conv = ct.Conv2D(3,16,3)
shared_conv = ct.Conv2D(16,16,3)
print(conv.weights)
print(shared_conv.weights)
(output omitted: each print shows the layer's weight variables, a kernel W and a bias b)
The first argument of ct.Conv2D is the number of input channels, the second is the number of output channels, and the third is the kernel size; the layer also comes with a bias term b. Chaining layers is then just a matter of calling them:

i = conv(input_variable)
i = shared_conv(i)
out = shared_conv(i)
print(out)
loss = tf.reduce_mean(out**2.)

As you can see, a simple call is all it takes to connect the layers.
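As a quick sanity check of the wiring (a minimal sketch, not part of the original tutorial; the expected shape assumes the convolutions preserve spatial size, i.e. 'SAME' padding), you can inspect the output tensor and the shared layer's weight list:

# illustrative sanity check
print(out.get_shape())                 # expected: (1, 256, 256, 16), assuming 'SAME' padding
print(len(shared_conv.get_weights()))  # 2: the kernel W and the bias b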
Now suppose you only want to train the weights of the shared layer, keeping the first conv layer's weights fixed. You don't have to dig through tf.get_collection(some_keys_you_have_to_remember) or set get_layer('some_name').trainable = False; you simply pass the weights you want to train to optimizer.minimize():

# define optimizer
opt = tf.train.AdamOptimizer(1e-3)
# define train op
train_step = opt.minimize(loss, var_list=shared_conv.get_weights())

Now we can train in the usual TensorFlow fashion.
sess = ct.get_session() # just the TF Session
sess.run(tf.global_variables_initializer()) # initialize all weights
for i in range(10):
    res = sess.run([train_step,loss],feed_dict={}) # you should feed inputs if you have any
    print('loss:',res[1])

loss: 3.76966
loss: 3.58365
loss: 3.40627
loss: 3.23731
loss: 3.07659
loss: 2.92387
loss: 2.7789
loss: 2.6414
loss: 2.51115
loss: 2.38784

As you can see, the loss keeps decreasing, which means the weights are indeed being trained. If you want to add more layers to the model, you can do the following:

out = shared_conv(out)
out = shared_conv(out)
# redefine loss
loss = tf.reduce_mean(out**2.)
# redefine train op (Note: do not redefine the optimizer, which will produce an error due to variable scope clashing)
train_step = opt.minimize(loss, var_list=shared_conv.get_weights())

There is no need to re-initialize the variables afterwards, because the session was never closed.
for i in range(10):
    res = sess.run([train_step,loss],feed_dict={})
    print('loss:',res[1])

loss: 4.82773
loss: 4.44005
loss: 4.05348
loss: 3.682
loss: 3.33433
loss: 3.01487
loss: 2.72505
loss: 2.4644
loss: 2.23137
loss: 2.02376
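Note that both training loops above pass an empty feed_dict, because input_variable is a tf.Variable that lives inside the graph. With real data you would normally define a tf.placeholder and feed batches at run time. A minimal sketch under that assumption (the placeholder x and the random batch below are our own illustration, not part of the original code):

# hypothetical variant that feeds real batches through a placeholder
x = tf.placeholder(tf.float32, shape=[None,256,256,3])
h = conv(x)             # calling the cans again reuses the same weights
h = shared_conv(h)
out_x = shared_conv(h)
loss_x = tf.reduce_mean(out_x**2.)
train_x = opt.minimize(loss_x, var_list=shared_conv.get_weights())
sess.run(tf.global_variables_initializer())  # re-init, mainly for any new optimizer slots

batch = np.random.normal(size=[4,256,256,3]).astype('float32')
res = sess.run([train_x, loss_x], feed_dict={x: batch})
print('loss:', res[1])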
Again the loss values decrease steadily. If you want to save the weights, do the following:

shared_conv.save_weights('shared_conv.npy')

The layer's weights (as returned by get_weights()) will be stored in the shared_conv.npy file.
If the loss later drops too low and the model shows signs of overfitting, and you want to roll the weights back to the last checkpoint:

shared_conv.load_weights('shared_conv.npy')

This restores the weights to the values that were previously saved.
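Together, save_weights and load_weights make a simple keep-the-best checkpointing loop easy to write. A sketch (the best_loss bookkeeping below is our own illustration, not Canton API):

# hypothetical keep-the-best pattern built on save_weights/load_weights
best_loss = float('inf')
for step in range(10):
    _, l = sess.run([train_step, loss], feed_dict={})
    if l < best_loss:
        best_loss = l
        shared_conv.save_weights('shared_conv.npy')  # checkpoint the improvement
shared_conv.load_weights('shared_conv.npy')          # roll back to the best weights seen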
Defining a Can class:

class DoubleConv(ct.Can):
    def __init__(self):
        super().__init__() # init base class
        self.convs = [ct.Conv2D(3,16,3), ct.Conv2D(16,3,3)] # define conv2d cans
        self.incan(self.convs) # add them as subcans
    def __call__(self,i):
        i = self.convs[0](i)
        i = self.convs[1](i)
        return i

We can then instantiate it and inspect its sub-cans:
dc = DoubleConv()
print(dc.subcans)

(output omitted: a list containing the two Conv2D sub-cans)
The weights can be retrieved with get_weights():

print(dc.get_weights())

(output omitted: a list of four variables, a kernel and a bias for each of the two conv layers)
The network can be trained exactly as before:

i = dc(input_variable)
out = dc(i) # N=2
loss = tf.reduce_mean(out**2.)
train_step = opt.minimize(loss, var_list=dc.get_weights())
sess.run(tf.global_variables_initializer()) # init and re-init all the weights (mainly for the optimizer)
for i in range(10):
    res = sess.run([train_step,loss],feed_dict={})
    print('loss:',res[1])

loss: 6.04884
loss: 5.44523
loss: 4.90349
loss: 4.41841
loss: 3.98473
loss: 3.5973
loss: 3.25126
loss: 2.94217
loss: 2.66599
loss: 2.41911
Save and load the weights:

dc.save_weights('test.npy')
dc.load_weights('test.npy')

Using Can in practice:

def DoubleConv2():
    can = ct.Can()
    convs = [ct.Conv2D(3,16,3), ct.Conv2D(16,3,3)]
    def call(i):
        i = convs[0](i)
        i = convs[1](i)
        return i
    can.incan(convs)
    can.set_function(call)
    return can

dc2 = DoubleConv2()
out = dc2(input_variable)
loss = tf.reduce_mean(out**2.)
train_step = opt.minimize(loss, var_list=dc2.get_weights())
sess.run(tf.global_variables_initializer()) # init and re-init all the weights (mainly for the optimizer)
for i in range(10):
    res = sess.run([train_step,loss],feed_dict={})
    print('loss:',res[1])
loss: 2.80366
loss: 2.69145
loss: 2.58286
loss: 2.47786
loss: 2.37641
loss: 2.27852
loss: 2.18415
loss: 2.09326
loss: 2.00581
loss: 1.92172
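Because set_function accepts an arbitrary Python function, this style is not limited to plain chains. For example, a skip connection can be written directly into the call function. The ResConv helper below is our own sketch (it assumes the convolutions preserve spatial size, so the input and output can be added):

# hypothetical residual block in the set_function style
def ResConv():
    can = ct.Can()
    convs = [ct.Conv2D(3,16,3), ct.Conv2D(16,3,3)]
    def call(i):
        h = tf.nn.relu(convs[0](i))
        h = convs[1](h)
        return i + h   # skip connection: both sides have 3 channels
    can.incan(convs)
    can.set_function(call)
    return can

rc = ResConv()
res_out = rc(input_variable)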
Save and load the weights:

dc2.save_weights('test.npy')
dc2.load_weights('test.npy')

There is an even prettier and more convenient notation for chained cans. Definition:

def DoubleConv3():
    c = ct.Can()
    c.add(ct.Conv2D(3,16,3))
    c.add(ct.Conv2D(16,3,3))
    c.chain()
    return c
Training:

dc3 = DoubleConv3()
out = dc3(input_variable)
loss = tf.reduce_mean(out**2.)
train_step = opt.minimize(loss, var_list=dc3.get_weights())
sess.run(tf.global_variables_initializer()) # init and re-init all the variables (mainly for the optimizer)
for i in range(10):
    res = sess.run([train_step,loss],feed_dict={})
    print('loss:',res[1])
loss: 2.11142
loss: 2.01924
loss: 1.93045
loss: 1.84502
loss: 1.76294
loss: 1.68418
loss: 1.60869
loss: 1.53641
loss: 1.46729
loss: 1.40125
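The add/chain notation scales naturally to deeper stacks. As a sketch (the layer sizes here are arbitrary choices of ours), a three-layer chain looks like this:

# hypothetical deeper chain built the same way
def TripleConv():
    c = ct.Can()
    c.add(ct.Conv2D(3,16,3))
    c.add(ct.Conv2D(16,16,3))
    c.add(ct.Conv2D(16,3,3))
    c.chain()   # compose the added cans in call order
    return c

dc4 = TripleConv()
out4 = dc4(input_variable)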
That concludes the introduction to Can. You can now go and build your own network flows.