
Caffe Network Structure: Visualizing a Network Model

Caffe Tutorial  2023-02-09 19:25

The network definition file (net.prototxt) can be rendered as a diagram with visualization tools. The quickest way is Netscope: paste the prototxt contents into the editor and press Shift+Enter to see the result.

[Figure: network structure as rendered by Netscope]
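As a minimal example of what to paste into Netscope, here is a hypothetical two-layer prototxt fragment (not the full LeNet definition; layer names and shapes are illustrative):

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
  }
}
```

Netscope draws one node per layer and connects them through their bottom/top blob names, so even a fragment like this renders as a small graph.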

 

Caffe also ships with its own drawing tool, python/draw_net.py, which uses Graphviz. The steps below are for Windows.

Install the Python dependencies:

pip install protobuf pydot

Download Graphviz, unpack it, and add its bin directory to the PATH environment variable. Then run draw_net.py from Caffe's python directory:

cd %PythonPath%
python draw_net.py ..\..\..\..\examples\mnist\lenet_train_test.prototxt ..\..\..\..\examples\mnist\lenet_train_test.png

(draw_net.py also accepts a --rankdir argument, e.g. LR or TB, to control the layout direction of the graph.)

 

The result looks like this:

[Figure: LeNet network structure rendered by draw_net.py]

Caffe's setup log for this network, with some annotations, is shown below.

I1213 13:33:59.757851  2904 layer_factory.hpp:77] Creating layer mnist
I1213 13:33:59.757851  2904 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 13:33:59.757851  2904 net.cpp:91] Creating Layer mnist
I1213 13:33:59.757851  2904 net.cpp:399] mnist -> data
I1213 13:33:59.757851  2904 net.cpp:399] mnist -> label
I1213 13:33:59.757851  5316 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 13:33:59.757851  5316 db_lmdb.cpp:40] Opened lmdb examples/mnist/mnist_train_lmdb
I1213 13:33:59.882871  2904 data_layer.cpp:41] output data size: 64,1,28,28      # Batch size 64
I1213 13:33:59.898500  2904 net.cpp:141] Setting up mnist
I1213 13:33:59.898500  2904 net.cpp:148] Top shape: 64 1 28 28 (50176)   
I1213 13:33:59.898500  2904 net.cpp:148] Top shape: 64 (64)
I1213 13:33:59.898500   348 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I1213 13:33:59.898500  2904 net.cpp:156] Memory required for data: 200960
I1213 13:33:59.898500  2904 layer_factory.hpp:77] Creating layer conv1
I1213 13:33:59.898500  2904 net.cpp:91] Creating Layer conv1
I1213 13:33:59.898500  2904 net.cpp:425] conv1 <- data
I1213 13:33:59.898500  2904 net.cpp:399] conv1 -> conv1
I1213 13:34:00.389096  2904 net.cpp:141] Setting up conv1
I1213 13:34:00.389096  2904 net.cpp:148] Top shape: 64 20 24 24 (737280)   # num_output: 20 kernel_size: 5 28-5+1=24
I1213 13:34:00.389096  2904 net.cpp:156] Memory required for data: 3150080
I1213 13:34:00.389096  2904 layer_factory.hpp:77] Creating layer pool1
I1213 13:34:00.389096  2904 net.cpp:91] Creating Layer pool1
I1213 13:34:00.389096  2904 net.cpp:425] pool1 <- conv1
I1213 13:34:00.389096  2904 net.cpp:399] pool1 -> pool1
I1213 13:34:00.404721  2904 net.cpp:141] Setting up pool1
I1213 13:34:00.404721  2904 net.cpp:148] Top shape: 64 20 12 12 (184320)   # kernel_size: 2 stride: 2 24/2=12
I1213 13:34:00.404721  2904 net.cpp:156] Memory required for data: 3887360
I1213 13:34:00.404721  2904 layer_factory.hpp:77] Creating layer conv2
I1213 13:34:00.404721  2904 net.cpp:91] Creating Layer conv2
I1213 13:34:00.404721  2904 net.cpp:425] conv2 <- pool1
I1213 13:34:00.404721  2904 net.cpp:399] conv2 -> conv2
I1213 13:34:00.404721  2904 net.cpp:141] Setting up conv2
I1213 13:34:00.404721  2904 net.cpp:148] Top shape: 64 50 8 8 (204800)  # num_output: 50 kernel_size: 5 12-5+1=8
I1213 13:34:00.404721  2904 net.cpp:156] Memory required for data: 4706560
I1213 13:34:00.404721  2904 layer_factory.hpp:77] Creating layer pool2
I1213 13:34:00.404721  2904 net.cpp:91] Creating Layer pool2
I1213 13:34:00.404721  2904 net.cpp:425] pool2 <- conv2
I1213 13:34:00.420348  2904 net.cpp:399] pool2 -> pool2
I1213 13:34:00.420348  2904 net.cpp:141] Setting up pool2
I1213 13:34:00.420348  2904 net.cpp:148] Top shape: 64 50 4 4 (51200)   # kernel_size: 2 stride: 2 8/2=4
I1213 13:34:00.420348  2904 net.cpp:156] Memory required for data: 4911360
I1213 13:34:00.420348  2904 layer_factory.hpp:77] Creating layer ip1
I1213 13:34:00.420348  2904 net.cpp:91] Creating Layer ip1
I1213 13:34:00.420348  2904 net.cpp:425] ip1 <- pool2
I1213 13:34:00.420348  2904 net.cpp:399] ip1 -> ip1
I1213 13:34:00.420348  2904 net.cpp:141] Setting up ip1
I1213 13:34:00.435976  2904 net.cpp:148] Top shape: 64 500 (32000)  # num_output=500
I1213 13:34:00.435976  2904 net.cpp:156] Memory required for data: 5039360
I1213 13:34:00.435976  2904 layer_factory.hpp:77] Creating layer relu1
I1213 13:34:00.435976  2904 net.cpp:91] Creating Layer relu1
I1213 13:34:00.435976  2904 net.cpp:425] relu1 <- ip1
I1213 13:34:00.435976  2904 net.cpp:386] relu1 -> ip1 (in-place)
I1213 13:34:00.435976  2904 net.cpp:141] Setting up relu1
I1213 13:34:00.435976  2904 net.cpp:148] Top shape: 64 500 (32000)
I1213 13:34:00.435976  2904 net.cpp:156] Memory required for data: 5167360
I1213 13:34:00.435976  2904 layer_factory.hpp:77] Creating layer ip2
I1213 13:34:00.435976  2904 net.cpp:91] Creating Layer ip2
I1213 13:34:00.435976  2904 net.cpp:425] ip2 <- ip1
I1213 13:34:00.435976  2904 net.cpp:399] ip2 -> ip2
I1213 13:34:00.451606  2904 net.cpp:141] Setting up ip2
I1213 13:34:00.451606  2904 net.cpp:148] Top shape: 64 10 (640)  # num_output=10
I1213 13:34:00.451606  2904 net.cpp:156] Memory required for data: 5169920
I1213 13:34:00.451606  2904 layer_factory.hpp:77] Creating layer loss
I1213 13:34:00.451606  2904 net.cpp:91] Creating Layer loss
I1213 13:34:00.451606  2904 net.cpp:425] loss <- ip2
I1213 13:34:00.451606  2904 net.cpp:425] loss <- label
I1213 13:34:00.451606  2904 net.cpp:399] loss -> loss
I1213 13:34:00.451606  2904 layer_factory.hpp:77] Creating layer loss
I1213 13:34:00.451606  2904 net.cpp:141] Setting up loss
I1213 13:34:00.451606  2904 net.cpp:148] Top shape: (1)
I1213 13:34:00.451606  2904 net.cpp:151]     with loss weight 1
I1213 13:34:00.467228  2904 net.cpp:156] Memory required for data: 5169924
I1213 13:34:00.467228  2904 net.cpp:217] loss needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] ip2 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] relu1 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] ip1 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] pool2 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] conv2 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] pool1 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:217] conv1 needs backward computation.
I1213 13:34:00.467228  2904 net.cpp:219] mnist does not need backward computation.
I1213 13:34:00.467228  2904 net.cpp:261] This network produces output loss
I1213 13:34:00.467228  2904 net.cpp:274] Network initialization done.
I1213 13:34:00.482890  2904 solver.cpp:181] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I1213 13:34:00.482890  2904 net.cpp:313] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I1213 13:34:00.482890  2904 net.cpp:49] Initializing net from parameters: 
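The blob shapes and the running "Memory required for data" counts in the log can be reproduced with simple arithmetic: each top blob holds batch × channels × height × width float32 values, and Caffe accumulates 4 bytes per value, including the in-place relu1 top. A minimal sketch, with layer parameters hardcoded from lenet_train_test.prototxt:

```python
# Reproduce the top-blob element counts and memory totals from the setup log above.
# Layer parameters (kernel sizes, num_output) are taken from lenet_train_test.prototxt.

def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output size: floor((in + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

batch = 64
tops = []                                      # (name, element count) per top blob
tops.append(("data",  batch * 1 * 28 * 28))    # Top shape: 64 1 28 28
tops.append(("label", batch))                  # Top shape: 64

h = conv_out(28, 5)                            # conv1: 28-5+1 = 24
tops.append(("conv1", batch * 20 * h * h))     # Top shape: 64 20 24 24

h = h // 2                                     # pool1: kernel 2, stride 2 -> 12
tops.append(("pool1", batch * 20 * h * h))     # Top shape: 64 20 12 12

h = conv_out(h, 5)                             # conv2: 12-5+1 = 8
tops.append(("conv2", batch * 50 * h * h))     # Top shape: 64 50 8 8

h = h // 2                                     # pool2: 8/2 = 4
tops.append(("pool2", batch * 50 * h * h))     # Top shape: 64 50 4 4

tops.append(("ip1",   batch * 500))            # inner product, num_output=500
tops.append(("relu1", batch * 500))            # in-place, but still counted
tops.append(("ip2",   batch * 10))             # num_output=10
tops.append(("loss",  1))                      # scalar loss

memory = 0
for name, count in tops:
    memory += count * 4                        # float32 = 4 bytes per value
    print(f"{name:6s} count={count:7d}  memory={memory}")
```

Running this prints a cumulative memory column that matches the log line by line, ending at 5169924 bytes after the loss layer.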

 
