I have been running experiments on Caffe for quite a while, but so far I have mostly been tweaking configuration files, and new ideas have often been stuck at the stage of being hard to implement. Realizing them usually means going down into the internals to modify code or add new layers, which in turn requires a fairly deep understanding of Caffe's underlying source. This series of posts records and shares what I learn while reading that source. As a programming novice who moved from EE to CS, I ask readers to point out any mistakes.
Caffe can be roughly divided into four major modules: Blob, Layer, Net, and Solver.
Solver: An interface for classes that perform optimization on Nets
Net: Connects Layers together into a directed acyclic graph (DAG) specified by a NetParameter
Layer: An interface for the units of computation which can be composed into a Net
Blob: A wrapper around SyncedMemory holders serving as the basic computational unit through which Layers, Nets, and Solvers interact
Blob is the basic data structure through which data flows in Caffe: the data passed between layers is carried in blobs. A blob can synchronize its contents between CPU and GPU and can be viewed as a four-dimensional array (num, channels, height, width). The main code lives in /home/xxx/caffe/include/caffe/blob.hpp.
*1. Main member variables*
shared_ptr<SyncedMemory> data_;
shared_ptr<SyncedMemory> diff_;
shared_ptr<SyncedMemory> shape_data_;
vector<int> shape_;
int count_;
int capacity_;
As a basic data structure, Blob keeps its internal state simple. First comes the data_ pointer, a boost shared_ptr smart pointer; it owns the memory that stores the data used in forward propagation. diff_ stores the gradients for back-propagation. shape_data_ and shape_ both record the blob's shape, the former being the legacy representation and the latter the current one. count_ is the number of elements in the blob, i.e. num * channels * height * width, while capacity_ is the number of elements currently allocated: Reshape only reallocates when count_ grows beyond capacity_.
Caffe stores and communicates data through blobs. To make optimization convenient, a blob provides a unified memory interface for holding data of a given type, and it hides the cost of CPU/GPU synchronization by copying between the two only when the sync state requires it.
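As a concrete illustration, here is a minimal sketch (assuming Caffe is built and its headers and libraries are on the path; the shape values are arbitrary) that constructs a blob and inspects these members through the public accessors:

#include <iostream>
#include "caffe/blob.hpp"

int main() {
  caffe::Blob<float> blob(1, 2, 3, 4);       // num=1, channels=2, height=3, width=4
  std::cout << blob.shape_string() << "\n";  // prints "1 2 3 4 (24)"
  std::cout << blob.count() << "\n";         // 24 = 1 * 2 * 3 * 4
  std::cout << blob.num_axes() << "\n";      // 4
  return 0;
}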
*2. Main functions*
template <typename Dtype>
class Blob {
public:
Blob()
: data_(), diff_(), count_(0), capacity_(0) {}
/// @brief Deprecated; use <code>Blob(const vector<int>& shape)</code>.
explicit Blob(const int num, const int channels, const int height,
const int width);
explicit Blob(const vector<int>& shape);
/// @brief Deprecated; use <code>Reshape(const vector<int>& shape)</code>.
void Reshape(const int num, const int channels, const int height,
const int width);
As a basic data structure, Blob has constructors that allocate a block of memory to store the data. Reshape() is called from a layer's Reshape or Forward to adjust the dimensions of a top blob; it changes the blob's size, but memory is reallocated only when the existing allocation is too small, and excess memory is never freed.
Like in Python, a blob's axis index can be negative, counting back from the last axis. The four basic dimensions num, channels, height, and width can be read directly as shape(0), shape(1), shape(2), and shape(3).
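For example (a short sketch; the blob shape is chosen arbitrarily):

caffe::Blob<float> blob(2, 3, 4, 5);  // (N, C, H, W) = (2, 3, 4, 5)
int n = blob.shape(0);   // 2, same as blob.num()
int w = blob.shape(-1);  // 5, same as blob.shape(3) or blob.width()
int h = blob.shape(-2);  // 4, same as blob.shape(2) or blob.height()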
Computing the offset:
inline int offset(const int n, const int c = 0, const int h = 0, const int w = 0)
inline int offset(const vector<int>& indices)
Two calling styles are supported: pass n, c, h, w directly, or put the indices in a vector. The offset is computed from the corresponding n, c, h, w; the returned value is ((n * channels() + c) * height() + h) * width() + w.
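Continuing with the hypothetical 2×3×4×5 blob from the previous snippet, here is a quick worked instance of the formula:

// offset(1, 2, 3, 4) = ((1 * 3 + 2) * 4 + 3) * 5 + 4
//                    = (5 * 4 + 3) * 5 + 4 = 23 * 5 + 4 = 119
int off = blob.offset(1, 2, 3, 4);  // 119, the last of the 120 elements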
CopyFrom() copies data from another blob; the copy_diff switch selects what is copied (false copies data, true copies diff), and the reshape flag controls whether this blob is reshaped to match the source.
inline Dtype data_at(const int n, const int c, const int h, const int w)
inline Dtype diff_at(const int n, const int c, const int h, const int w)
inline Dtype data_at(const vector<int>& index)
inline Dtype diff_at(const vector<int>& index)
inline const shared_ptr<SyncedMemory>& data()
inline const shared_ptr<SyncedMemory>& diff()
These functions read a single value at a given position: the offset from the start of the data is computed from the position, and the value is then fetched through the cpu_data()/cpu_diff() pointer.
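For instance, still with the same hypothetical blob (a fresh blob's buffers are zero-initialized, so these reads return 0):

float v = blob.data_at(1, 2, 3, 4);  // reads cpu_data()[blob.offset(1, 2, 3, 4)]
std::vector<int> idx(4);
idx[0] = 1; idx[1] = 2; idx[2] = 3; idx[3] = 4;
float g = blob.diff_at(idx);         // same position, but in the gradient buffer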
const Dtype* cpu_data() const;
void set_cpu_data(Dtype* data);
const int* gpu_shape() const;
const Dtype* gpu_data() const;
const Dtype* cpu_diff() const;
const Dtype* gpu_diff() const;
Dtype* mutable_cpu_data();
Dtype* mutable_gpu_data();
Dtype* mutable_cpu_diff();
Dtype* mutable_gpu_diff();
data holds the values that flow forward through the network, while diff holds the gradients produced during back-propagation.
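The const/mutable split looks like this in use (a minimal sketch; writing through mutable_cpu_data marks the CPU copy as the freshest one, so a later gpu_data() call would trigger a CPU-to-GPU copy on a CUDA build):

caffe::Blob<float> blob(1, 1, 2, 2);
float* out = blob.mutable_cpu_data();  // writable pointer to the CPU buffer
for (int i = 0; i < blob.count(); ++i) {
  out[i] = static_cast<float>(i);      // fill with 0, 1, 2, 3
}
const float* in = blob.cpu_data();     // read-only view of the same memory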
void FromProto(const BlobProto& proto, bool reshape = true);
void ToProto(BlobProto* proto, bool write_diff = false) const;
These two functions serialize the blob to, and restore it from, a BlobProto. Proto refers to Google's Protocol Buffers, a language-neutral, platform-neutral, extensible format for serializing structured data; Caffe stores all of its data in this format.
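A round trip might look like the following sketch (BlobProto is the message type generated from caffe.proto; error handling omitted):

caffe::Blob<float> src(1, 2, 3, 4);
caffe::BlobProto proto;
src.ToProto(&proto, false);  // serialize the data; pass true to also write the diff

caffe::Blob<float> dst;
dst.FromProto(proto, true);  // deserialize, reshaping dst to match the stored shape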
To summarize the functions in blob.hpp:
Reshape() changes the size of a blob;
ReshapeLike() reallocates data and diff to the same shape as another blob;
num_axes() returns the number of axes of the blob;
count() returns the total number of elements, num * channels * height * width;
offset() returns the linear offset of position (n, c, h, w) within the blob's data;
CopyFrom() copies from a source blob, with copy_diff selecting between data and diff;
FromProto() reads data in from a proto, i.e. deserialization;
ToProto() saves the blob's data into a proto;
ShareData() / ShareDiff() point this blob's data/diff at another blob's storage (sharing rather than copying).
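Putting a few of these together (a sketch with arbitrary shapes):

caffe::Blob<float> a(1, 2, 3, 4);
caffe::Blob<float> b;
b.ReshapeLike(a);             // give b the same shape as a
b.CopyFrom(a, false, false);  // copy a's data (not diff); shapes already match
caffe::Blob<float> c;
c.ReshapeLike(a);             // ShareData requires matching element counts
c.ShareData(a);               // c's data_ now points at a's SyncedMemory, no copy

With that overview in place, here is the full blob.hpp, annotated: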
#ifndef CAFFE_BLOB_HPP_
#define CAFFE_BLOB_HPP_
#include <algorithm>
#include <string>
#include <vector>
#include "caffe/common.hpp" //单例化caffe类,并且封装了boost和cuda随机数生成的函数,提供了统一接口
#include "caffe/proto/caffe.pb.h"
#include "caffe/syncedmem.hpp"
/*
syncedmem.hpp mainly handles memory allocation and release. class SyncedMemory defines the functions for managing memory and synchronizing between CPU and GPU.
Blob uses SyncedMemory to decide automatically when to copy data, which improves efficiency; normally a copy happens only after the CPU or GPU side has modified the data.
*/
const int kMaxBlobAxes = 32; // upper bound on the number of axes; defined in the header so every file that includes it sees the same constant
namespace caffe {
/**
* @brief A wrapper around SyncedMemory holders serving as the basic
* computational unit through which Layer%s, Net%s, and Solver%s
* interact.
*
* TODO(dox): more thorough description.
*/
template <typename Dtype> // class template; Dtype is the element type (e.g. float or double)
class Blob {
public:
Blob() // default constructor: member-initializer list + empty body
: data_(), diff_(), count_(0), capacity_(0) {}
// A constructor declared explicit cannot be used by the compiler for implicit conversions.
/// @brief Deprecated; use <code>Blob(const vector<int>& shape)</code>.
explicit Blob(const int num, const int channels, const int height,
const int width); // initialize by giving the four dimensions (N, C, H, W)
// A const value parameter cannot be changed inside the function (of limited use, since it is a copy).
// A const reference parameter is read-only inside the function.
explicit Blob(const vector<int>& shape); // alternatively, pass the dimensions in a vector<int>
/// @brief Deprecated; use <code>Reshape(const vector<int>& shape)</code>.
void Reshape(const int num, const int channels, const int height,
const int width);
/**
* @brief Change the dimensions of the blob, allocating new memory if
* necessary.
*
* This function can be called both to create an initial allocation
* of memory, and to adjust the dimensions of a top blob during Layer::Reshape
* or Layer::Forward. When changing the size of blob, memory will only be
* reallocated if sufficient memory does not already exist, and excess memory
* will never be freed.
*
* Note that reshaping an input blob and immediately calling Net::Backward is
*
* an error; either Net::Forward or Net::Reshape need to be called to
* propagate the new input shape to higher layers.
*/
void Reshape(const vector<int>& shape);
void Reshape(const BlobShape& shape);
void ReshapeLike(const Blob& other);
// Inline functions save call overhead: the compiler need not jump to another address to execute the call or save/restore the caller's state.
// Any member function that does not modify data members should be declared const.
// Print the blob's shape.
inline string shape_string() const {
ostringstream stream;
for (int i = 0; i < shape_.size(); ++i) {
stream << shape_[i] << " ";
}
stream << "(" << count_ << ")";
return stream.str();
}
inline const vector<int>& shape() const { return shape_; }
/**
* @brief Returns the dimension of the index-th axis (or the negative index-th
* axis from the end, if index is negative).
*
* @param index the axis index, which may be negative as it will be
* "canonicalized" using CanonicalAxisIndex.
* Dies on out of range index.
*/
inline int shape(int index) const { // size of the index-th axis; for (N, C, H, W), shape(0) returns N and shape(-1) returns W
return shape_[CanonicalAxisIndex(index)];
}
inline int num_axes() const { return shape_.size(); } // number of axes; 4 for an (N, C, H, W) blob
inline int count() const { return count_; } // total number of elements; N*C*H*W for an (N, C, H, W) blob
/**
* @brief Compute the volume of a slice; i.e., the product of dimensions
* among a range of axes.
*
* @param start_axis The first axis to include in the slice.
*
* @param end_axis The first axis to exclude from the slice.
*/
// For (N, C, H, W), count(0, 3) returns N*C*H.
inline int count(int start_axis, int end_axis) const {
CHECK_LE(start_axis, end_axis);
CHECK_GE(start_axis, 0);
CHECK_GE(end_axis, 0);
CHECK_LE(start_axis, num_axes());
CHECK_LE(end_axis, num_axes());
int count = 1;
for (int i = start_axis; i < end_axis; ++i) {
count *= shape(i);
}
return count;
}
/**
* @brief Compute the volume of a slice spanning from a particular first
* axis to the final axis.
*
* @param start_axis The first axis to include in the slice.
*/
// For (N, C, H, W), count(1) returns C*H*W.
inline int count(int start_axis) const {
return count(start_axis, num_axes());
}
/**
* @brief Returns the 'canonical' version of a (usually) user-specified axis,
* allowing for negative indexing (e.g., -1 for the last axis).
*
* @param axis_index the axis index.
* If 0 <= index < num_axes(), return index.
* If -num_axes <= index <= -1, return (num_axes() - (-index)),
* e.g., the last axis index (num_axes() - 1) if index == -1,
* the second to last if index == -2, etc.
* Dies on out of range index.
*/
inline int CanonicalAxisIndex(int axis_index) const {
CHECK_GE(axis_index, -num_axes())
<< "axis " << axis_index << " out of range for " << num_axes()
<< "-D Blob with shape " << shape_string();
CHECK_LT(axis_index, num_axes())
<< "axis " << axis_index << " out of range for " << num_axes()
<< "-D Blob with shape " << shape_string();
if (axis_index < 0) {
return axis_index + num_axes();
}
return axis_index;
}
/// @brief Deprecated legacy shape accessor num: use shape(0) instead.
inline int num() const { return LegacyShape(0); }
/// @brief Deprecated legacy shape accessor channels: use shape(1) instead.
inline int channels() const { return LegacyShape(1); }
/// @brief Deprecated legacy shape accessor height: use shape(2) instead.
inline int height() const { return LegacyShape(2); }
/// @brief Deprecated legacy shape accessor width: use shape(3) instead.
inline int width() const { return LegacyShape(3); }
inline int LegacyShape(int index) const {
CHECK_LE(num_axes(), 4)
<< "Cannot use legacy accessors on Blobs with > 4 axes.";
CHECK_LT(index, 4);
CHECK_GE(index, -4);
if (index >= num_axes() || index < -num_axes()) {
// Axis is out of range, but still in [0, 3] (or [-4, -1] for reverse
// indexing) -- this special case simulates the one-padding used to fill
// extraneous axes of legacy blobs.
return 1;
}
return shape(index);
}
inline int offset(const int n, const int c = 0, const int h = 0,
const int w = 0) const { // linear offset of (n, c, h, w): ((n * C + c) * H + h) * W + w
CHECK_GE(n, 0);
CHECK_LE(n, num());
CHECK_GE(channels(), 0);
CHECK_LE(c, channels());
CHECK_GE(height(), 0);
CHECK_LE(h, height());
CHECK_GE(width(), 0);
CHECK_LE(w, width());
return ((n * channels() + c) * height() + h) * width() + w;
}
inline int offset(const vector<int>& indices) const {
CHECK_LE(indices.size(), num_axes());
int offset = 0;
for (int i = 0; i < num_axes(); ++i) {
offset *= shape(i);
if (indices.size() > i) {
CHECK_GE(indices[i], );
CHECK_LT(indices[i], shape(i));
offset += indices[i];
}
}
return offset;
}
/**
* @brief Copy from a source Blob.
*
* @param source the Blob to copy from
* @param copy_diff: if false, copy the data; if true, copy the diff
* @param reshape: if false, require this Blob to be pre-shaped to the shape
* of other (and die otherwise); if true, Reshape this Blob to other's
* shape if necessary
*/
void CopyFrom(const Blob<Dtype>& source, bool copy_diff = false, // copy from source; copy_diff selects whether data or diff is copied
bool reshape = false);
inline Dtype data_at(const int n, const int c, const int h,
const int w) const {
return cpu_data()[offset(n, c, h, w)];
}
inline Dtype diff_at(const int n, const int c, const int h,
const int w) const {
return cpu_diff()[offset(n, c, h, w)];
}
inline Dtype data_at(const vector<int>& index) const {
return cpu_data()[offset(index)];
}
inline Dtype diff_at(const vector<int>& index) const {
return cpu_diff()[offset(index)];
}
inline const shared_ptr<SyncedMemory>& data() const {
CHECK(data_);
return data_;
}
inline const shared_ptr<SyncedMemory>& diff() const {
CHECK(diff_);
return diff_;
}
/*
// Assuming the data starts out on the CPU and we have a blob:
const Dtype* foo;
Dtype* bar;
foo = blob.gpu_data(); // data copied CPU -> GPU
foo = blob.cpu_data(); // no copy; both sides hold up-to-date contents
bar = blob.mutable_gpu_data(); // no copy
// ... some operations ...
bar = blob.mutable_gpu_data(); // still on the GPU, no copy
foo = blob.cpu_data(); // data copied GPU -> CPU, since the GPU side modified it
foo = blob.gpu_data(); // no copy; both sides hold up-to-date contents
bar = blob.mutable_cpu_data(); // still no copy
bar = blob.mutable_gpu_data(); // data copied CPU -> GPU
bar = blob.mutable_cpu_data(); // data copied GPU -> CPU
*/
const Dtype* cpu_data() const; // const accessors: read-only access to the data
void set_cpu_data(Dtype* data);
const int* gpu_shape() const;
const Dtype* gpu_data() const;
const Dtype* cpu_diff() const;
const Dtype* gpu_diff() const;
Dtype* mutable_cpu_data(); // mutable accessors allow writing (access to diff_ works the same way)
Dtype* mutable_gpu_data();
Dtype* mutable_cpu_diff();
Dtype* mutable_gpu_diff();
void Update(); // parameter update: data_ = data_ - diff_ (used by the solver)
void FromProto(const BlobProto& proto, bool reshape = true); // read data in from a proto, i.e. deserialization
void ToProto(BlobProto* proto, bool write_diff = false) const; // save blob data into a proto
/// @brief Compute the sum of absolute values (L1 norm) of the data.
Dtype asum_data() const;
/// @brief Compute the sum of absolute values (L1 norm) of the diff.
Dtype asum_diff() const;
/// @brief Compute the sum of squares (L2 norm squared) of the data.
Dtype sumsq_data() const;
/// @brief Compute the sum of squares (L2 norm squared) of the diff.
Dtype sumsq_diff() const;
/// @brief Scale the blob data by a constant factor.
void scale_data(Dtype scale_factor);
/// @brief Scale the blob diff by a constant factor.
void scale_diff(Dtype scale_factor);
/**
* @brief Set the data_ shared_ptr to point to the SyncedMemory holding the
* data_ of Blob other -- useful in Layer%s which simply perform a copy
* in their Forward pass.
*
* This deallocates the SyncedMemory holding this Blob's data_, as
* shared_ptr calls its destructor when reset with the "=" operator.
*/
void ShareData(const Blob& other); // point data_ at other's data (sharing, not copying)
/**
* @brief Set the diff_ shared_ptr to point to the SyncedMemory holding the
* diff_ of Blob other -- useful in Layer%s which simply perform a copy
* in their Forward pass.
*
* This deallocates the SyncedMemory holding this Blob's diff_, as
* shared_ptr calls its destructor when reset with the "=" operator.
*/
void ShareDiff(const Blob& other); // point diff_ at other's diff (sharing, not copying)
bool ShapeEquals(const BlobProto& other);
protected:
shared_ptr<SyncedMemory> data_; // holds the forward-pass data
shared_ptr<SyncedMemory> diff_; // holds the backward-pass gradients
shared_ptr<SyncedMemory> shape_data_; // legacy storage of the shape
vector<int> shape_; // the blob's dimensions
int count_; // number of elements stored (product of all entries of shape_)
int capacity_; // allocated capacity; Reshape reallocates only when count_ exceeds it
DISABLE_COPY_AND_ASSIGN(Blob);
}; // class Blob
} // namespace caffe
#endif // CAFFE_BLOB_HPP_