
OpenCV video tracking: detecting and tracking moving objects

Object Tracking

This article introduces the Camshift and Meanshift video-analysis algorithms in cv2.

Goal:

Learn the Meanshift and Camshift algorithms to locate and track a target object in video.

The Meanshift algorithm:

The idea behind meanshift is simple. Suppose you have a set of points, for example the points produced by histogram back projection, and a small window, perhaps a circular one. The task is to move the window to the region where the point density is highest.

As in the figure below:

    

[Figure 1: a circular meanshift window moving toward the densest region of a point set]

The initial window is the blue circle labeled C1, and its original center is marked by the blue rectangle labeled C1_o.

However, the centroid of the points that fall inside this window (the small blue circle in the figure) does not coincide with the window's own center. So move the blue window so that its center lands on that centroid, then recompute the centroid of the points the relocated window encloses, and move again; in general the two still will not match. Repeat this process until the window center and the centroid roughly coincide.
In the end the circular window settles over the region of maximum pixel density: the green circle in the figure, labeled C2.
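
To make the iteration concrete, here is a minimal NumPy sketch of meanshift on a 2D point set (my illustration, assuming a flat circular kernel; this is not OpenCV's implementation):

import numpy as np

# One meanshift run: the window center repeatedly jumps to the centroid
# of the points it currently covers, until the two roughly coincide.
def meanshift_2d(points, center, r=1.0, eps=1e-3, max_iter=100):
    for _ in range(max_iter):
        inside = points[np.linalg.norm(points - center, axis=1) < r]
        if len(inside) == 0:
            break
        new_center = inside.mean(axis=0)  # centroid of the covered points
        if np.linalg.norm(new_center - center) < eps:
            break  # center and centroid coincide: converged
        center = new_center
    return center

# Toy data: a dense cluster at (3, 3) that the window should settle on.
pts = np.random.randn(500, 2) * 0.3 + np.array([3.0, 3.0])
print(meanshift_2d(pts, center=np.array([2.0, 2.0]), r=1.5))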

Meanshift is not limited to two-dimensional image problems; it works just as well on higher-dimensional data. Choosing a different kernel function changes how points are weighted in the shift vector, and the procedure provably converges to a fixed location.

Besides video tracking, meanshift is widely applied in clustering, smoothing, and many other data-driven and unsupervised-learning settings.

In two dimensions meanshift operates on a discrete set of points, but an image is a matrix of values. So how do we use meanshift to follow a moving object in a video?

The rough pipeline is as follows:

1. Select a target region in the image with a rectangular or circular window.
2. Compute the histogram of the selected region.
3. Compute the same histogram representation for the next frame b.
4. Find the region of frame b whose distribution best matches the target histogram, and let meanshift move the window toward it (the samples here use histogram back projection).
5. Repeat steps 3 and 4 (a minimal sketch of steps 2-4 follows below).
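
Here is a minimal sketch of steps 2-4 in isolation, assuming two placeholder image files, target.jpg (the selected object) and scene.jpg (standing in for the next frame):

import cv2

roi = cv2.imread('target.jpg')    # placeholder: the selected target region
frame = cv2.imread('scene.jpg')   # placeholder: the next frame

hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Step 2: hue histogram of the target region.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Steps 3-4: back projection turns each pixel into the likelihood that it
# belongs to the target's hue distribution; meanshift climbs this map.
dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
cv2.imshow('back projection', dst)
cv2.waitKey(0)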

Meanshift in OpenCV:
To use meanshift in OpenCV, first set up the target and compute its histogram, so that the histogram can be back-projected onto each new frame. We also need to provide an initial window position. The histogram is computed over the H (hue) channel of the HSV model, and cv2.inRange() is used to discard low-light values that would otherwise distort it.

import cv2
import numpy as np

# initial window position and size (row, height, col, width)
r,h,c,w = 0,100,0,100
track_window = (c,r,w,h)

cap = cv2.VideoCapture(0)

ret, frame= cap.read()

# region of interest to track
roi = frame[r:r+h, c:c+w]
# HSV image of the ROI (note: convert the ROI itself, not the whole frame)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# keep only pixels whose HSV values lie between (0,60,32) and (180,255,255)
mask = cv2.inRange(hsv_roi, np.array((0., 60.,32.)), np.array((180.,255.,255.)))
# hue histogram; arguments: images (may be several), channels, mask, histSize, ranges
roi_hist = cv2.calcHist([hsv_roi],[0],mask,[180],[0,180])
# normalize to [0, 255]
cv2.normalize(roi_hist,roi_hist,0,255,cv2.NORM_MINMAX)

# termination criteria: at most 10 iterations, or a shift smaller than 1 pixel
term_crit = ( cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1 )

while(1):
    ret, frame = cap.read()
    if ret == True:
        # HSV image of the current frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # back-project the ROI histogram onto this frame
        dst = cv2.calcBackProject([hsv],[0],roi_hist,[0,180],1)

        # run meanShift on dst; it returns the relocated target window
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)
        # Draw it on image
        x,y,w,h = track_window
        img2 = cv2.rectangle(frame, (x,y), (x+w,y+h), 255,2)
        cv2.imshow('img2',img2)


    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

The CamShift algorithm:

In a video or live camera feed, when the tracked object comes toward the camera it grows larger because of perspective, so a window of the size chosen earlier no longer fits it.

OpenCV Labs' answer is the CAMShift algorithm: it first locates the target with meanshift, then adapts the window size, additionally computes the orientation of the best-fitting ellipse around the target, and keeps tracking with the adjusted window.

It is called exactly like meanShift, but returns a rotated rectangle instead of a plain window.
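
Concretely, the returned value unpacks as ((cx, cy), (w, h), angle). The fragment below is a sketch of how it would slot into the tracking loop of the meanshift example above, replacing the cv2.meanShift call (it reuses dst, track_window and term_crit from that example):

        rot_rect, track_window = cv2.CamShift(dst, track_window, term_crit)
        (cx, cy), (w, h), angle = rot_rect  # center, adapted size, rotation in degrees
        pts = np.int0(cv2.boxPoints(rot_rect))  # the four corners, for drawing
        cv2.polylines(frame, [pts], True, (0, 255, 0), 2)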

Camshift, the Continuously Adaptive MeanShift algorithm, improves on MeanShift by adjusting the search-window size in real time as the target's scale changes; within each frame it still runs MeanShift to find the optimum. There is little published discussion of exactly how the automatic window resizing works; my understanding is that it is driven by the zeroth moment computed in the MeanShift step.
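
To make the zeroth-moment idea concrete, here is a hedged sketch (my illustration, not OpenCV's internal code) of deriving a window side length from the mass of an 8-bit back projection inside the current window; Bradski's original CAMShift paper uses s = 2*sqrt(M00/256):

import cv2
import numpy as np

def adapted_window_size(dst, track_window):
    x, y, w, h = track_window
    window = dst[y:y + h, x:x + w]
    m00 = cv2.moments(window)['m00']        # zeroth moment: total probability mass
    return int(2 * np.sqrt(m00 / 256.0))    # heuristic side length of the new window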

Code:

1. Python version: select the region to track with the mouse

import cv2
import numpy as np

xs, ys, ws, hs = 0, 0, 0, 0  # selection rectangle (x, y, width, height)
xo, yo = 0, 0  # mouse-down origin (x, y)
selectObject = False  # True while the left button is held down
trackObject = 0  # 0: idle, -1: selection just finished, 1: tracking


def onMouse(event, x, y, flags, params):
    global xs, ys, ws, hs, selectObject, xo, yo, trackObject
    if selectObject == True:
        xs = min(x, xo)
        ys = min(y, yo)
        ws = abs(x - xo)
        hs = abs(y - yo)
    if event == cv2.EVENT_LBUTTONDOWN:
        xo, yo = x, y
        xs, ys, ws, hs = x, y, 0, 0
        selectObject = True
    elif event == cv2.EVENT_LBUTTONUP:
        selectObject = False
        trackObject = -1


cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.namedWindow('imshow')
cv2.setMouseCallback('imshow', onMouse)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while (True):
    ret, frame = cap.read()
    if trackObject != 0:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array((0., 30., 10.)), np.array((180., 256., 255.)))
        if trackObject == -1:
            track_window = (xs, ys, ws, hs)
            maskroi = mask[ys:ys + hs, xs:xs + ws]
            hsv_roi = hsv[ys:ys + hs, xs:xs + ws]
            roi_hist = cv2.calcHist([hsv_roi], [0], maskroi, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
            trackObject = 1
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        dst &= mask
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        pts = cv2.boxPoints(ret)
        pts = np.int0(pts)
        img2 = cv2.polylines(frame, [pts], True, 255, 2)

    if selectObject == True and ws > 0 and hs > 0:
        cv2.imshow('imshow1', frame[ys:ys + hs, xs:xs + ws])
        cv2.bitwise_not(frame[ys:ys + hs, xs:xs + ws], frame[ys:ys + hs, xs:xs + ws])
    cv2.imshow('imshow', frame)
    if cv2.waitKey(10) == 27:
        break
cv2.destroyAllWindows()

The corresponding C++ version:

//---------------------------------[ Headers and namespaces ]-----------------------------
//		Description: headers and namespaces used by this program
//-------------------------------------------------------------------------------------------------
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
#include <windows.h>
 
using namespace cv;
using namespace std;
 
 
 
//-----------------------------------[ Global variables ]-----------------------------------------
//		Description: global variable declarations
//-------------------------------------------------------------------------------------------------
Mat image;
bool backprojMode = false;
bool selectObject = false;
int trackObject = 0;
bool showHist = true;
Point origin;
Rect selection;
int vmin = 10, vmax = 256, smin = 30;
 
 
//--------------------------------[ onMouse() callback ]------------------------------------
//		Description: mouse event callback
//-------------------------------------------------------------------------------------------------
static void onMouse( int event, int x, int y, int, void* )
{
	if( selectObject )
	{
		selection.x = MIN(x, origin.x);
		selection.y = MIN(y, origin.y);
		selection.width = std::abs(x - origin.x);
		selection.height = std::abs(y - origin.y);
 
		selection &= Rect(0, 0, image.cols, image.rows);
	}
 
	switch( event )
	{
	//OpenCV2 version of this line:
	//case CV_EVENT_LBUTTONDOWN:
	//OpenCV3 version:
	case EVENT_LBUTTONDOWN:
		origin = Point(x,y);
		selection = Rect(x,y,0,0);
		selectObject = true;
		break;
	//OpenCV2 version of this line:
	//case CV_EVENT_LBUTTONUP:
	//OpenCV3 version:
	case EVENT_LBUTTONUP:
		selectObject = false;
		if( selection.width > 0 && selection.height > 0 )
			trackObject = -1;
		break;
	}
}
 
//--------------------------------[ ShowHelpText() function ]--------------------------------------
//		Description: print help information
//-------------------------------------------------------------------------------------------------
static void ShowHelpText()
{
	cout <<"\n\n\t\t\tThank you for buying the book 《OpenCV3编程入门》!\n"
		<<"\n\n\t\t\tThis is sample program No.8 of the book's OpenCV3 edition\n"
		<<	"\n\n\t\t\t   Current OpenCV version: " << CV_VERSION 
		<<"\n\n  ----------------------------------------------------------------------------" ;
 
	cout << "\n\n\tThis demo shows meanshift-based tracking\n"
		"\tSelect a colored object with the mouse and it will be tracked\n";
 
	cout << "\n\n\tControls: \n"
		"\t\tselect an object with the mouse to initialize tracking\n"
		"\t\tESC - quit the program\n"
		"\t\tc - stop tracking\n"
		"\t\tb - toggle the back-projection view\n"
		"\t\th - show/hide the object histogram\n"
		"\t\tp - pause the video\n";
}
 
const char* keys =
{
	"{1|  | 0 | camera number}"
};
 
 
//-----------------------------------[ main() function ]--------------------------------------------
//		Description: entry point of the console application; the program starts here
//-------------------------------------------------------------------------------------------------
int main( int argc, const char** argv )
{
	ShowHelpText();
 
	VideoCapture cap;
	Rect trackWindow;
	int hsize = 16;
	float hranges[] = {0,180};
	const float* phranges = hranges;
 
	cap.open(0);
	//cap.open("H:\\opencv\\ai.avi");
 
	if( !cap.isOpened() )
	{
		cout << "Cannot initialize the camera\n";
		return -1;
	}
 
	namedWindow( "Histogram", 0 );// color histogram window
	namedWindow( "CamShift Demo", 0 );// tracking window
	setMouseCallback( "CamShift Demo", onMouse, 0 );// hook up mouse events
	createTrackbar( "Vmin", "CamShift Demo", &vmin, 256, 0 );// color-space threshold settings
	createTrackbar( "Vmax", "CamShift Demo", &vmax, 256, 0 );
	createTrackbar( "Smin", "CamShift Demo", &smin, 256, 0 );
 
	Mat frame, hsv, hue, mask, hist, histimg = Mat::zeros(200, 320, CV_8UC3), backproj;
	bool paused = false;// pause flag
	LARGE_INTEGER  _start, _stop;
	double   start, stop;
	for(;;)
	{
		QueryPerformanceCounter(&_start);
		start = (double)_start.QuadPart;          // counter value at frame start
		if( !paused )
		{
			cap >> frame;
			if( frame.empty() )
				break;
		}
		QueryPerformanceCounter(&_stop);    // counter value at frame end
		stop = (double)_stop.QuadPart;
		cout << (stop - start) * 10 / 25332 << endl;// crude per-frame timing; the 25332 scale factor is machine-specific
		frame.copyTo(image);
 
		if( !paused )// if not paused (I personally wouldn't have bothered with a pause switch here)
		{
			cvtColor(image, hsv, COLOR_BGR2HSV);// convert the image to the HSV color space
 
			if( trackObject )// tracking is skipped only while trackObject == 0
			{
				int _vmin = vmin, _vmax = vmax;// color-space thresholds
 
				inRange(hsv, Scalar(0, smin, MIN(_vmin,_vmax)),
					Scalar(180, 256, MAX(_vmin, _vmax)), mask);
				int ch[] = {0, 0};
				hue.create(hsv.size(), hsv.depth());// buffer for the hue channel
				mixChannels(&hsv, 1, &hue, 1, ch, 1);// copy channel 0 (hue) of hsv into hue
 
				if( trackObject < 0 )// a region has just been selected with the mouse; (re)build the target histogram
				{
					Mat roi(hue, selection), maskroi(mask, selection);
					calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges);
					//OpenCV3 version of this line:
					normalize(hist, hist, 0, 255, NORM_MINMAX);
					//OpenCV2 version:
					//normalize(hist, hist, 0, 255, CV_MINMAX);
 
					trackWindow = selection;
					trackObject = 1;
					histimg = Scalar::all(0);
					int binW = histimg.cols / hsize;
					Mat buf(1, hsize, CV_8UC3);
					for( int i = 0; i < hsize; i++ )
						buf.at<Vec3b>(i) = Vec3b(saturate_cast<uchar>(i*180./hsize), 255, 255);
 
					//OpenCV3 version of this line:
					cvtColor(buf, buf, COLOR_HSV2BGR);
					//OpenCV2 version:
					//cvtColor(buf, buf, CV_HSV2BGR);
 
					for( int i = 0; i < hsize; i++ )
					{
						int val = saturate_cast<int>(hist.at<float>(i)*histimg.rows/255);
						rectangle( histimg, Point(i*binW,histimg.rows),
							Point((i+1)*binW,histimg.rows - val),
							Scalar(buf.at<Vec3b>(i)), -1, 8 );
					}
				}
				calcBackProject(&hue, 1, 0, hist, backproj, &phranges);
				cv::imshow("backproj", backproj);
				backproj &= mask;
				RotatedRect trackBox = CamShift(backproj, trackWindow,
 
				//OpenCV3 version of this line:
				TermCriteria( TermCriteria::EPS | TermCriteria::COUNT, 10, 1 ));
				//OpenCV2 version:
				//TermCriteria( CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1 ));
 
				if( trackWindow.area() <= 1 )
				{
					int cols = backproj.cols, rows = backproj.rows, r = (MIN(cols, rows) + 5)/6;
					trackWindow = Rect(trackWindow.x - r, trackWindow.y - r,
						trackWindow.x + r, trackWindow.y + r) &
						Rect(0, 0, cols, rows);
				}
 
				if( backprojMode )
					cvtColor( backproj, image, COLOR_GRAY2BGR );
 
				//OpenCV3 version of this line:
				ellipse( image, trackBox, Scalar(0,0,255), 3, LINE_AA );
				//OpenCV2 version:
				//ellipse( image, trackBox, Scalar(0,0,255), 3, CV_AA );
 
			}
		}
		else if( trackObject < 0 )// i.e. once a region has been selected, the pause key is overridden
			paused = false;
 
		if( selectObject && selection.width > 0 && selection.height > 0 )
		{
			Mat roi(image, selection);
			bitwise_not(roi, roi);
		}
 
		cv::imshow( "CamShift Demo", image );
		cv::imshow( "Histogram", histimg );
		char c = (char)waitKey(90);
		if( c == 27 )
			break;
		switch(c)
		{
		case 'b':
			backprojMode = !backprojMode;
			break;
		case 'c':
			trackObject = 0;
			histimg = Scalar::all(0);
			break;
		case 'h':
			showHist = !showHist;
			if( !showHist )
				destroyWindow( "Histogram" );
			else
				namedWindow( "Histogram", 1 );
			break;
		case 'p':
			paused = !paused;
			break;
		case 'k':
		{
			imwrite("pic.jpg", image);
			break;
		}
		default:
			;
		}
	}
 
	return 0;
}

 

Result:

[Figure 2 and Figure 3: CamShift tracking results]

 

Alternatively, predefine the tracking window directly:


import cv2
import numpy as np

# initial window position and size (row, height, col, width)
r,h,c,w = 0,100,0,100
track_window = (c,r,w,h)

cap = cv2.VideoCapture(0)

ret, frame= cap.read()

# region of interest to track
roi = frame[r:r+h, c:c+w]
# HSV image of the ROI (note: convert the ROI itself, not the whole frame)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# keep only pixels whose HSV values lie between (0,60,32) and (180,255,255)
mask = cv2.inRange(hsv_roi, np.array((0., 60.,32.)), np.array((180.,255.,255.)))
# hue histogram; arguments: images (may be several), channels, mask, histSize, ranges
roi_hist = cv2.calcHist([hsv_roi],[0],mask,[180],[0,180])
# normalize to [0, 255]
cv2.normalize(roi_hist,roi_hist,0,255,cv2.NORM_MINMAX)

# termination criteria: at most 10 iterations, or a shift smaller than 1 pixel
term_crit = ( cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1 )

while(1):
    ret, frame = cap.read()
    if ret == True:
        # HSV image of the current frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # back-project the ROI histogram onto this frame
        dst = cv2.calcBackProject([hsv],[0],roi_hist,[0,180],1)

        # run CamShift on dst; it returns a rotated rectangle plus the new window
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        # Draw it on image
        pts = cv2.boxPoints(ret)
        pts = np.int0(pts)
        img2 = cv2.polylines(frame,[pts],True, 255,2)
        cv2.imshow('img2',img2)


    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

 

Background subtraction:

Background subtraction in video

The video used here, traffic.flv, comes from the original author's GitHub repository:
https://github.com/techfort/pycv/tree/master/chapter8/surveillance_demo

OpenCV provides several background subtractors (Background Subtractor); here we use the two most common, created by the small sketch below:
K-Nearest Neighbours (KNN)
Mixture of Gaussians (MOG2)
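
Both are created through factory functions. One detail worth knowing: with detectShadows=True the foreground mask marks shadow pixels in gray (value 127 by default) rather than white, which is why the motion-detection example later in this article thresholds at 244:

import cv2

knn = cv2.createBackgroundSubtractorKNN(detectShadows=True)
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
print(knn.getShadowValue(), mog2.getShadowValue())  # 127 127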

The KNN background subtractor:

# -*- coding:utf-8 -*-

import cv2

# Step 1. construct the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. create a background subtractor
# createBackgroundSubtractorKNN() accepts a detectShadows flag:
# detectShadows=True detects shadows (marked gray in the mask); False does not
knn = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True :
    ret, frame = cap.read() # read a frame of the video
    fgmask = knn.apply(frame) # foreground mask from the subtractor
    cv2.imshow('frame', fgmask) # show the segmentation result
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The result looks like this:

[Figure 4: KNN foreground mask on traffic.flv]

A small example of the MOG2 background subtractor:

# -*- coding:utf-8 -*-

import cv2

# Step 1. construct the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. create a background subtractor
# createBackgroundSubtractorMOG2() accepts a detectShadows flag:
# detectShadows=True detects shadows (marked gray in the mask); False does not
mog = cv2.createBackgroundSubtractorMOG2()

while True :
    ret, frame = cap.read() # read a frame of the video
    fgmask = mog.apply(frame) # foreground mask from the subtractor
    cv2.imshow('frame', fgmask) # show the segmentation result
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

[Figure 5: MOG2 foreground mask on traffic.flv]

A small motion detection and tracking example:

 

# -*- coding:utf-8 -*-

import cv2

# Step 1. initialize the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. use the KNN background subtractor
knn= cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True :
    ret, frame = cap.read()
    fgmask = knn.apply(frame) # foreground mask

    # threshold: keep only near-white pixels (244-255), dropping the gray shadow pixels
    th = cv2.threshold(fgmask.copy(), 244, 255, cv2.THRESH_BINARY)[1]

    # dilate once to make the foreground blobs cleaner
    dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3)), iterations=2)

    # find the contours of the foreground blobs (OpenCV 3.x signature; see the note after this example)
    image, contours, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # draw a bounding box around every sufficiently large contour
    for c in contours:
        if cv2.contourArea(c) > 1600:
            (x,y,w,h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x,y), (x+w, y+h), (0,255,0), 2)

    cv2.imshow('detection', frame)
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
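
A portability note: the example above uses the OpenCV 3.x cv2.findContours signature, which returns (image, contours, hierarchy); OpenCV 4.x returns only (contours, hierarchy). Taking the second-to-last element works under both versions:

ret = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = ret[-2]  # the contours list under both the 3.x and 4.x APIs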

 

 

[Figure 6: bounding boxes drawn around detected moving objects]

 

Surveillance Demo: Tracking Pedestrians in Camera Feed

#! /usr/bin/python
# 目标跟踪
"""Surveillance Demo: Tracking Pedestrians in Camera Feed
The application opens a video (could be a camera or a video file)
and tracks pedestrians in the video.
"""
__author__ = "joe minichino"
__copyright__ = "property of mankind."
__license__ = "MIT"
__version__ = "0.0.1"
__maintainer__ = "Joe Minichino"
__email__ = "joe.minichino@gmail.com"
__status__ = "Development"

import cv2
import numpy as np
import os.path as path
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--algorithm",
                    help="m (or nothing) for meanShift and c for camshift")
args = vars(parser.parse_args())


def center(points):
    """calculates centroid of a given matrix"""
    x = (points[0][0] + points[1][0] + points[2][0] + points[3][0]) / 4
    y = (points[0][1] + points[1][1] + points[2][1] + points[3][1]) / 4
    return np.array([np.float32(x), np.float32(y)], np.float32)


font = cv2.FONT_HERSHEY_SIMPLEX


class Pedestrian():
    """Pedestrian class
    each pedestrian is composed of a ROI, an ID and a Kalman filter
    so we create a Pedestrian class to hold the object state
    """

    def __init__(self, id, frame, track_window):
        """init the pedestrian object with track window coordinates"""
        # set up the roi
        self.id = int(id)
        x, y, w, h = track_window
        self.track_window = track_window
        self.roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([self.roi], [0], None, [16], [0, 180])
        self.roi_hist = cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

        # set up the kalman
        self.kalman = cv2.KalmanFilter(4, 2)
        self.kalman.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
        self.kalman.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        self.kalman.processNoiseCov = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
                                               np.float32) * 0.03
        self.measurement = np.zeros((2, 1), np.float32)  # placeholder for (x, y) measurements
        self.prediction = np.zeros((2, 1), np.float32)
        self.term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        self.center = None
        self.update(frame)

    def __del__(self):
        print("Pedestrian %d destroyed" % self.id)

    def update(self, frame):
        # print "updating %d " % self.id
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_project = cv2.calcBackProject([hsv], [0], self.roi_hist, [0, 180], 1)

        if args.get("algorithm") == "c":
            ret, self.track_window = cv2.CamShift(back_project, self.track_window, self.term_crit)
            pts = cv2.boxPoints(ret)
            pts = np.int0(pts)
            self.center = center(pts)
            cv2.polylines(frame, [pts], True, 255, 1)

        if not args.get("algorithm") or args.get("algorithm") == "m":
            ret, self.track_window = cv2.meanShift(back_project, self.track_window, self.term_crit)
            x, y, w, h = self.track_window
            self.center = center([[x, y], [x + w, y], [x, y + h], [x + w, y + h]])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)

        self.kalman.correct(self.center)
        prediction = self.kalman.predict()
        cv2.circle(frame, (int(prediction[0]), int(prediction[1])), 4, (255, 0, 0), -1)
        # fake shadow
        cv2.putText(frame, "ID: %d -> %s" % (self.id, self.center), (11, (self.id + 1) * 25 + 1),
                    font, 0.6,
                    (0, 0, 0),
                    1,
                    cv2.LINE_AA)
        # actual info
        cv2.putText(frame, "ID: %d -> %s" % (self.id, self.center), (10, (self.id + 1) * 25),
                    font, 0.6,
                    (0, 255, 0),
                    1,
                    cv2.LINE_AA)


def main():
    #camera = cv2.VideoCapture(path.join(path.dirname(__file__), "traffic.flv"))
    camera = cv2.VideoCapture(path.join(path.dirname(__file__), "768x576.avi"))
    # camera = cv2.VideoCapture(path.join(path.dirname(__file__), "..", "movie.mpg"))
    # camera = cv2.VideoCapture(0)
    history = 20
    # KNN background subtractor
    bs = cv2.createBackgroundSubtractorKNN()

    # MOG subtractor
    # bs = cv2.bgsegm.createBackgroundSubtractorMOG(history = history)
    # bs.setHistory(history)

    # GMG
    # bs = cv2.bgsegm.createBackgroundSubtractorGMG(initializationFrames = history)

    cv2.namedWindow("surveillance")
    pedestrians = {}
    firstFrame = True
    frames = 0
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('output.avi', fourcc, 20.0, (768, 576))  # frame size must match the input video
    while True:
        print(" -------------------- FRAME %d --------------------" % frames)
        grabbed, frame = camera.read()
        if (grabbed is False):
            print("failed to grab frame.")
            break

        fgmask = bs.apply(frame)

        # this is just to let the background subtractor build a bit of history
        if frames < history:
            frames += 1
            continue

        th = cv2.threshold(fgmask.copy(), 127, 255, cv2.THRESH_BINARY)[1]
        th = cv2.erode(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)), iterations=2)
        dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (8, 3)), iterations=2)
        image, contours, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        counter = 0
        for c in contours:
            if cv2.contourArea(c) > 500:
                (x, y, w, h) = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
                # only create pedestrians in the first frame, then just follow the ones you have
                if firstFrame is True:
                    pedestrians[counter] = Pedestrian(counter, frame, (x, y, w, h))
                counter += 1

        for i, p in pedestrians.items():
            p.update(frame)

        firstFrame = False
        frames += 1

        cv2.imshow("surveillance", frame)
        out.write(frame)
        if cv2.waitKey(110) & 0xff == 27:
            break
    out.release()
    camera.release()


if __name__ == "__main__":
    main()
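
The Pedestrian class above pairs each tracker with a constant-velocity Kalman filter. As a minimal standalone sketch of the same configuration, fed with a fake track of (x, y) centers:

import cv2
import numpy as np

# State (x, y, vx, vy), measurement (x, y): a constant-velocity model.
kalman = cv2.KalmanFilter(4, 2)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],   # x' = x + vx
                                    [0, 1, 0, 1],   # y' = y + vy
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 0.03

for cx, cy in [(10., 10.), (12., 11.), (14., 12.)]:  # fake measurements
    kalman.correct(np.array([[cx], [cy]], np.float32))
    print(kalman.predict().ravel())  # predicted position and velocity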

