OpenCV4 Tutorial 142: Real-Time Object Detection with YOLOv3

Index: Series Index

The Darknet Interface

This article uses OpenCV to capture the video stream; everything else is done by the darknet library.

darknet on GitHub: darknet

darknet homepage: darknet

Note: because some people kept extending darknet into application areas without limit, the original author has stopped updating it for the sake of love and peace; everything is now maintained by others.

The darknet source tree contains a Makefile for building; adjust the parameters in it to enable OpenMP, CUDA, and so on.
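For reference, the switches at the top of the Makefile in the pjreddie repository look roughly like this (set a flag to 1 and rebuild to enable the feature):

GPU=0      # 1: build with CUDA
CUDNN=0    # 1: build with cuDNN (requires GPU=1)
OPENCV=0   # 1: build darknet's own OpenCV support
OPENMP=0   # 1: enable OpenMP CPU multithreading
DEBUG=0    # 1: debug build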

The build produces:

darknet.h
libdarknet.a
libdarknet.so

That is, the header plus a static library and a shared library.

First, create a plain C++ Qt project (CMake works just as well; I simply prefer Qt):

TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle
CONFIG -= qt

unix: {
    INCLUDEPATH += /usr/include/opencv4
    LIBS += `pkgconf --libs opencv4` \
        -L$$PWD/libs/ \
        -ldarknet
}

SOURCES += \
    main.cpp

The Qt project file is mainly there to link against the shared library.
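Since CMake works just as well, an equivalent CMakeLists.txt might look like the sketch below (assuming libdarknet.so sits in a libs/ subdirectory next to the sources, matching the .pro above):

cmake_minimum_required(VERSION 3.13)
project(yolo_darknet CXX)
set(CMAKE_CXX_STANDARD 11)

find_package(OpenCV 4 REQUIRED)

add_executable(yolo_darknet main.cpp)
target_include_directories(yolo_darknet PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_directories(yolo_darknet PRIVATE ${CMAKE_SOURCE_DIR}/libs)
target_link_libraries(yolo_darknet ${OpenCV_LIBS} darknet)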

The development workflow is:

(Figure: development workflow diagram)

Getting the Labels

The class in a prediction result is just a number; we need to decode it back to a name using the label mapping.

// Read one class name per line from the label file.
std::vector<std::string> getClassesName(const char *path) {
    std::vector<std::string> names;
    if (!path) {
        return names;
    }
    std::ifstream readIn(path);
    std::string str;
    while (std::getline(readIn, str)) {
        names.push_back(str);
    }
    return names;
}

The labels are read into a vector, so values can be accessed array-style (e.g. names[0] == "person").
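For example, with the COCO label file (assumed here to be named coco.names):

std::vector<std::string> names = getClassesName("coco.names");
// names[0] == "person", names[1] == "bicycle", ...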

Object Initialization

The program mainly uses two objects:

network *net = load_network(cfgfile, weightfile, 0); // load the model definition and weights
set_batch_network(net, 1);                           // batch size 1: one frame at a time

cv::VideoCapture cap(0);                             // open the default camera

net manages the network and its weights; cap pulls frames from the camera.
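Before entering the capture loop it is worth checking both objects (a minimal sketch; note that darknet itself may simply exit on a bad config path):

if (!net) {
    std::cerr << "Failed to load the network" << std::endl;
    return -1;
}
if (!cap.isOpened()) {
    std::cerr << "Cannot open camera 0" << std::endl;
    return -1;
}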

Image Format Conversion

Once a frame is grabbed its type is cv::Mat, while the network consumes image, darknet's own image type, so a conversion is required. Of course, the processed image has to be turned back into a Mat for display; otherwise there would be little point in using OpenCV at all.

// Convert an OpenCV BGR Mat (interleaved uint8) into a darknet image
// (planar float in [0,1], RGB channel order).
image mat2image(const cv::Mat &mat) {
    cv::Mat dst;
    cv::cvtColor(mat, dst, cv::COLOR_BGR2RGB);  // camera frames are BGR

    int w = mat.cols;
    int h = mat.rows;
    int c = mat.channels();
    image im = make_image(w, h, c);
    unsigned char *imageData = (unsigned char *)dst.data;
    int step = dst.step;
    for (int y = 0; y < h; ++y) {
        for (int k = 0; k < c; ++k) {
            for (int x = 0; x < w; ++x) {
                // interleaved HWC uint8 -> planar CHW float
                im.data[k*w*h + y*w + x] = imageData[y*step + x*c + k] / 255.0f;
            }
        }
    }
    return im;
}

// Convert a darknet image back into an OpenCV Mat.
cv::Mat image2Mat(image im) {
    image copy = copy_image(im);
    constrain_image(copy);             // clamp pixel values to [0,1]
    if (im.c == 3) rgbgr_image(copy);  // swap R and B planes: RGB -> BGR

    cv::Mat m;
    switch (im.c) {
    case 3:
        m = cv::Mat(im.h, im.w, CV_8UC3);  // note: cv::Mat takes (rows, cols)
        break;
    case 4:
        m = cv::Mat(im.h, im.w, CV_8UC4);
        break;
    }

    int step = m.step;
    for (int y = 0; y < im.h; ++y) {
        for (int x = 0; x < im.w; ++x) {
            for (int c = 0; c < im.c; ++c) {
                // read from the channel-swapped copy
                float val = copy.data[c*im.h*im.w + y*im.w + x];
                m.data[y*step + x*im.c + c] = (unsigned char)(val * 255);
            }
        }
    }

    free_image(copy);
    // m is already in BGR order, ready for cv::imshow
    return m;
}

If you have studied OpenCV carefully, this should not be hard to follow.
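A quick round trip is an easy way to sanity-check the two functions against each other (a sketch; test.jpg is a placeholder):

cv::Mat frame = cv::imread("test.jpg");
image im = mat2image(frame);
cv::Mat back = image2Mat(im);
// back should be visually identical to frame (up to rounding)
cv::imshow("roundtrip", back);
cv::waitKey(0);
free_image(im);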

Getting the Results

Grab the raw frame, convert the Mat into an image, then run the prediction:

image in = mat2image(frame);                      // convert the format
image in_s = letterbox_image(in, net->w, net->h); // letterbox to the network input size
layer l = net->layers[net->n-1];                  // the prediction (output) layer

int nboxes = 0;                // number of candidate detections
float *X = in_s.data;          // image data to run prediction on
double time = what_time_is_it_now();
network_predict(net, X);       // run the forward pass
// The following is just for human-readable output
std::ostringstream str;
str << "Prediction spent " << what_time_is_it_now() - time << " seconds";
// Draw the text with OpenCV
cv::putText(frame, str.str(), cv::Point(0, 20), 1, 1, cv::Scalar(255, 0, 255), 1, 8, false);

// Fetch all candidate detections
detection *dets = get_network_boxes(net, in.w, in.h, thresh, 0.5, 0, 1, &nboxes);
// Non-maximum suppression to merge overlapping boxes
if (nms) do_nms_sort(dets, nboxes, l.classes, nms);

Processing the Results

Suppose one detection pass returns 10 candidate results; each of them has to be processed in turn.

for (int i = 0; i < nboxes; ++i) {        // handle each candidate detection
    for (int j = 0; j < l.classes; ++j) {
        // For each detection, test every known class label
        if (dets[i].prob[j] > thresh) {
            // The predicted probability exceeds our threshold
            box b = dets[i].bbox;
            // darknet boxes are normalized center/size; convert to pixel corners
            int left  = (b.x - b.w/2.) * in.w;
            int right = (b.x + b.w/2.) * in.w;
            int top   = (b.y - b.h/2.) * in.h;
            int bot   = (b.y + b.h/2.) * in.h;

            // Clamp to the image borders
            if (left < 0) left = 0;
            if (right > in.w-1) right = in.w-1;
            if (top < 0) top = 0;
            if (bot > in.h-1) bot = in.h-1;

            // Draw the rectangle from the computed left/right/top/bot
            cv::rectangle(frame, cv::Rect(left, top, right-left, bot-top), cv::Scalar(255, 255, 0), 2, cv::LINE_8, 0);
            // Compose the label text
            std::ostringstream text;
            text << names[j] << ": " << dets[i].prob[j]*100 << "%";
            // Draw the text onto the frame
            cv::putText(frame, text.str(), cv::Point(left, top), 1, 1, cv::Scalar(255, 255, 0), 2, 8, false);
        }
    }
}

The bounding-box math is taken from the official darknet code.
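Putting the darknet pieces together, the whole capture-predict-draw cycle can be sketched as below. This is a minimal sketch rather than the full program: the file names, the thresh/nms values, and the ESC-to-quit handling are assumptions, and the box-drawing loop from the previous snippet is elided.

int main() {
    // Assumed paths and thresholds; adjust to your own files.
    float thresh = 0.5f, nms = 0.45f;
    std::vector<std::string> names = getClassesName("coco.names");
    network *net = load_network((char *)"yolov3-tiny.cfg",
                                (char *)"yolov3-tiny.weights", 0);
    set_batch_network(net, 1);

    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame)) {
        image in = mat2image(frame);
        image in_s = letterbox_image(in, net->w, net->h);
        layer l = net->layers[net->n - 1];

        network_predict(net, in_s.data);
        int nboxes = 0;
        detection *dets = get_network_boxes(net, in.w, in.h, thresh, 0.5, 0, 1, &nboxes);
        if (nms) do_nms_sort(dets, nboxes, l.classes, nms);

        // ... draw the boxes and labels exactly as in the snippet above ...

        free_detections(dets, nboxes);  // release per-frame darknet buffers
        free_image(in);
        free_image(in_s);

        cv::imshow("yolov3", frame);
        if (cv::waitKey(1) == 27) break;  // ESC quits
    }
    return 0;
}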

The final result:

(Figure: real-time detection result)

Weights / labels / complete code download:

Note: only the tiny weights are provided, for testing. If you need more, download weights and config files from darknet, or train your own.
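For reference, the pretrained tiny weights and the matching config published for darknet can be fetched directly (the standard darknet download locations):

wget https://pjreddie.com/media/files/yolov3-tiny.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-tiny.cfg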

The OpenCV Interface

Test code:

// This code is written at BigVision LLC. It is based on the OpenCV project. It is subject to the license terms in the LICENSE file found in this distribution and at http://opencv.org/license.html

// Usage example: ./object_detection_yolo.out --video=run.mp4
//                ./object_detection_yolo.out --image=bird.jpg
#include <fstream>
#include <sstream>
#include <iostream>
#include <vector>

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

const char* keys =
    "{help h usage ? | | Usage examples: \n\t\t./object_detection_yolo.out --image=dog.jpg \n\t\t./object_detection_yolo.out --video=run_sm.mp4}"
    "{image i  |<none>| input image }"
    "{video v  |<none>| input video }"
    "{device d |0     | camera device number, used when neither --image nor --video is given }"
    ;
using namespace cv;
using namespace dnn;
using namespace std;

// Initialize the parameters
float confThreshold = 0.5; // Confidence threshold
float nmsThreshold = 0.4;  // Non-maximum suppression threshold
int inpWidth = 416;        // Width of network's input image
int inpHeight = 416;       // Height of network's input image
vector<string> classes;

// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(Mat& frame, const vector<Mat>& out);

// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame);

// Get the names of the output layers
vector<String> getOutputsNames(const Net& net);

int main(int argc, char** argv)
{
    CommandLineParser parser(argc, argv, keys);
    parser.about("Use this script to run object detection using YOLO3 in OpenCV.");
    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }
    // Load names of classes
    string classesFile = "coco.names";
    ifstream ifs(classesFile.c_str());
    string line;
    while (getline(ifs, line)) classes.push_back(line);

    // Give the configuration and weight files for the model
    String modelConfiguration = "yolov3.cfg";
    String modelWeights = "yolov3.weights";

    // Load the network
    Net net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);

    // Open a video file or an image file or a camera stream.
    string str, outputFile;
    VideoCapture cap;
    VideoWriter video;
    Mat frame, blob;

    try {
        outputFile = "yolo_out_cpp.avi";
        if (parser.has("image"))
        {
            // Open the image file
            str = parser.get<String>("image");
            ifstream ifile(str);
            if (!ifile) throw("error");
            cap.open(str);
            str.replace(str.end()-4, str.end(), "_yolo_out_cpp.jpg");
            outputFile = str;
        }
        else if (parser.has("video"))
        {
            // Open the video file
            str = parser.get<String>("video");
            ifstream ifile(str);
            if (!ifile) throw("error");
            cap.open(str);
            str.replace(str.end()-4, str.end(), "_yolo_out_cpp.avi");
            outputFile = str;
        }
        // Open the webcam
        else cap.open(parser.get<int>("device"));
    }
    catch(...) {
        cout << "Could not open the input image/video stream" << endl;
        return 0;
    }

    // Get the video writer initialized to save the output video
    if (!parser.has("image")) {
        video.open(outputFile, VideoWriter::fourcc('M','J','P','G'), 28, Size(cap.get(CAP_PROP_FRAME_WIDTH), cap.get(CAP_PROP_FRAME_HEIGHT)));
    }

    // Create a window
    static const string kWinName = "Deep learning object detection in OpenCV";
    namedWindow(kWinName, WINDOW_NORMAL);

    // Process frames.
    while (waitKey(1) < 0)
    {
        // get frame from the video
        cap >> frame;

        // Stop the program if reached end of video
        if (frame.empty()) {
            cout << "Done processing !!!" << endl;
            cout << "Output file is stored as " << outputFile << endl;
            waitKey(3000);
            break;
        }
        // Create a 4D blob from a frame.
        blobFromImage(frame, blob, 1/255.0, cv::Size(inpWidth, inpHeight), Scalar(0,0,0), true, false);

        // Sets the input to the network
        net.setInput(blob);

        // Runs the forward pass to get output of the output layers
        vector<Mat> outs;
        net.forward(outs, getOutputsNames(net));

        // Remove the bounding boxes with low confidence
        postprocess(frame, outs);

        // Put efficiency information. The function getPerfProfile returns the overall time for inference(t) and the timings for each of the layers(in layersTimes)
        vector<double> layersTimes;
        double freq = getTickFrequency() / 1000;
        double t = net.getPerfProfile(layersTimes) / freq;
        string label = format("Inference time for a frame : %.2f ms", t);
        putText(frame, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255));

        // Write the frame with the detection boxes
        Mat detectedFrame;
        frame.convertTo(detectedFrame, CV_8U);
        if (parser.has("image")) imwrite(outputFile, detectedFrame);
        else video.write(detectedFrame);

        imshow(kWinName, frame);
    }

    cap.release();
    if (!parser.has("image")) video.release();

    return 0;
}

// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(Mat& frame, const vector<Mat>& outs)
{
    vector<int> classIds;
    vector<float> confidences;
    vector<Rect> boxes;

    for (size_t i = 0; i < outs.size(); ++i)
    {
        // Scan through all the bounding boxes output from the network and keep only the
        // ones with high confidence scores. Assign the box's class label as the class
        // with the highest score for the box.
        float* data = (float*)outs[i].data;
        for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
        {
            Mat scores = outs[i].row(j).colRange(5, outs[i].cols);
            Point classIdPoint;
            double confidence;
            // Get the value and location of the maximum score
            minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
            if (confidence > confThreshold)
            {
                int centerX = (int)(data[0] * frame.cols);
                int centerY = (int)(data[1] * frame.rows);
                int width = (int)(data[2] * frame.cols);
                int height = (int)(data[3] * frame.rows);
                int left = centerX - width / 2;
                int top = centerY - height / 2;

                classIds.push_back(classIdPoint.x);
                confidences.push_back((float)confidence);
                boxes.push_back(Rect(left, top, width, height));
            }
        }
    }

    // Perform non maximum suppression to eliminate redundant overlapping boxes with
    // lower confidences
    vector<int> indices;
    NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);
    for (size_t i = 0; i < indices.size(); ++i)
    {
        int idx = indices[i];
        Rect box = boxes[idx];
        drawPred(classIds[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
    }
}

// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    // Draw a rectangle displaying the bounding box
    rectangle(frame, Point(left, top), Point(right, bottom), Scalar(255, 178, 50), 3);

    // Get the label for the class name and its confidence
    string label = format("%.2f", conf);
    if (!classes.empty())
    {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ":" + label;
    }

    // Display the label at the top of the bounding box
    int baseLine;
    Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    top = max(top, labelSize.height);
    rectangle(frame, Point(left, top - round(1.5*labelSize.height)), Point(left + round(1.5*labelSize.width), top + baseLine), Scalar(255, 255, 255), FILLED);
    putText(frame, label, Point(left, top), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0,0,0), 1);
}

// Get the names of the output layers
vector<String> getOutputsNames(const Net& net)
{
    static vector<String> names;
    if (names.empty())
    {
        // Get the indices of the output layers, i.e. the layers with unconnected outputs
        vector<int> outLayers = net.getUnconnectedOutLayers();

        // get the names of all the layers in the network
        vector<String> layersNames = net.getLayerNames();

        // Get the names of the output layers in names
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
            names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}
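To build this OpenCV-only sample with the same Qt setup used earlier, a project file along these lines should work (a sketch; darknet is not needed here):

TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle
CONFIG -= qt

unix: {
    INCLUDEPATH += /usr/include/opencv4
    LIBS += `pkgconf --libs opencv4`
}

SOURCES += \
    main.cpp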

Demo video: OpenCV/YOLOv3 real-time video object detection

