
CN108111818A - Moving target active perception method and apparatus based on multiple-camera collaboration - Google Patents


Info

Publication number
CN108111818A
Authority
CN
China
Prior art keywords
camera
target
slave
candidate
picture
Prior art date
Legal status
Granted
Application number
CN201711425735.9A
Other languages
Chinese (zh)
Other versions
CN108111818B (en)
Inventor
胡海苗
田荣朋
胡子昊
李波
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201711425735.9A
Publication of CN108111818A
Application granted
Publication of CN108111818B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/144 Movement detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/144 Movement detection
    • H04N 5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides an active perception method and apparatus for moving targets based on multi-camera collaboration. The method includes: calibrating the master camera against each slave camera from their pictures and establishing a position mapping relationship; monitoring moving targets in the master camera picture in real time to obtain a candidate target set; selecting a candidate target by importance and selecting a slave camera for tracking and shooting according to the position mapping relationship; computing the lens azimuth and zoom factor, steering the slave camera onto the candidate target region, and acquiring a high-quality image of that candidate target; and analyzing the candidate target's category from the high-quality image, confirming targets of a predetermined type as targets of interest, and placing them in the set of targets of interest. Using the positional correspondence between master and slave cameras, the invention mobilizes slave cameras to acquire high-quality images of candidate targets seen in the master camera picture, extracts target image features, confirms target categories, and thereby achieves active target confirmation.

Description

Active Perception Method and Apparatus for Moving Targets Based on Multi-Camera Collaboration

Technical Field

The invention relates to an imaging method and apparatus for multi-camera surveillance systems, and in particular to an active target perception method and apparatus based on multi-camera collaboration, belonging to the field of video surveillance.

Background

Today, video surveillance equipment of all kinds is widely deployed in production and living environments. An important task of video surveillance is to discover and record targets in the scene and, further, to discover and record key information about a target's identity to support subsequent identification. Faces, license plates, vehicle inspection stickers, and the like captured by a camera can all be used to determine a target's identity; they are the parts of the image that carry the target's uniquely identifying information. The more such uniquely identifying information a camera captures, the more it helps with identifying the target.

Existing video surveillance equipment identifies and discovers targets in a scene by collecting and analyzing video images of the monitored scene. However, surveillance scenarios vary widely, and targets appear in many positions and poses; targets that are far from the camera, or turned sideways to it, are imaged at poor quality, so image-feature-based recognition performs poorly on them. Moreover, existing surveillance equipment uses a passive imaging strategy: the device films the scene from a fixed position and cannot actively adjust camera position or imaging parameters to improve image quality. Passive imaging equipment cannot actively acquire high-quality target images or the target's uniquely identifying information, so it cannot effectively identify targets, leading to false alarms and missed detections in practice. An active perception device with an active imaging capability is therefore needed to solve the problem of active target confirmation.

One class of checkpoint (bayonet) camera systems avoids the impact of target pose on imaging by installing monitoring equipment in areas that constrain the pose in which targets pass, solving the problem of acquiring uniquely identifying target information within such restricted scenes. By mounting cameras at entrances, exits, and similar locations, supplemented by flash and infrared illumination, such systems can acquire images of high quality. The target images they capture have high resolution and good quality, key target information is easy to extract, and recognition accuracy is high. However, because of their scene requirements, such systems can only be deployed at a few locations such as toll gates and building entrances and exits; their application scenarios are limited.

Through camera collaboration, the present invention mobilizes slave cameras to track and capture the targets awaiting confirmation that are discovered in the master camera, and confirms each target's category from the captured high-quality images. This meets the need for active target confirmation while overcoming the limited applicability of checkpoint cameras.

The present invention designs an active perception method for moving targets based on multi-camera collaboration. The method detects moving regions in the master camera picture to obtain a candidate target set, uses the camera linkage relationship to mobilize slave cameras to acquire a high-quality image of each candidate target, analyzes each high-quality image with a classifier to extract image features and determine the target category, and, based on the classification result, confirms targets of a predetermined type as targets of interest.

Summary of the Invention

The problem solved by the present invention is: for a candidate target detected by the master camera, mobilize nearby slave cameras to acquire high-quality images of the candidate target, extract image features, analyze the target category, and confirm whether the candidate target is an interested object.

The cameras used in the present invention fall into two classes: panoramic cameras that are fixed relative to the monitored scene, and PTZ cameras with pan, tilt, and zoom capability.

The invention discloses an active perception apparatus for moving targets, comprising one fixed panoramic camera, multiple PTZ cameras, and the moving-target active perception apparatus itself. The panoramic surveillance camera serves as the master camera and acquires the panoramic surveillance video; the PTZ cameras serve as slave cameras that track and shoot targets to acquire high-quality target images. The active perception apparatus extracts targets awaiting confirmation from the master camera picture, mobilizes slave cameras to capture high-quality target images, analyzes target categories, and confirms those awaiting-confirmation targets that belong to a predetermined type.

The invention discloses an autonomous perception imaging method for moving targets based on multi-camera collaboration, characterized by the following steps:

(1) From the master camera picture and each slave camera picture, automatically calibrate the master camera against the slave camera by feature extraction and feature matching, establishing a position mapping relationship.

(2) According to a configured detection threshold, detect the multiple moving regions in the master camera's field of view in real time, obtaining a set of candidate targets.

(3) Select the candidate target of highest importance from the candidate set according to an importance evaluation function, and select a slave camera for tracking and shooting according to the position mapping relationship.

(4) From the candidate target's position and the master-slave position mapping relationship, have the slave camera compute its lens azimuth and zoom factor, steer onto the candidate target region, and acquire a high-quality image of the candidate target.

(5) Extract features from the target's high-quality image and analyze and confirm its category; according to the classification result, confirm targets of a predetermined type as targets of interest and place them in the set of targets of interest, and confirm targets not of a predetermined type as non-interest targets, which are not placed in the set.

(6) Check whether all candidate targets have been confirmed; if so, exit, otherwise return to step 3.

In the above active perception method for moving targets based on multi-camera collaboration, step 1 proceeds as follows:

1.1 Select any slave camera that has not yet been calibrated.

1.2 Manually set the slave camera's focal length to its minimum and adjust the slave camera's lens direction until the slave and master cameras have a maximally overlapping field of view.

1.3 Extract Speeded-Up Robust Features (SURF) from the master camera picture and the slave camera picture, respectively.

1.4 Match the SURF feature points using the k-Nearest Neighbor algorithm and brute-force search, obtaining the matching result GoodMatches.

1.5 From GoodMatches, compute the affine matrix between the master camera picture and the slave camera picture by least squares, establishing the position mapping relationship and completing the master-slave calibration.

1.6 Check whether all slave cameras have been calibrated; if not, return to step 1.1, otherwise exit.

In the above method, the feature matching in step 1 uses the K nearest neighbor algorithm and brute-force search to match the SURF feature points of the master and slave camera pictures. For each SURF feature point in the slave camera picture, the K nearest neighbor (KNN) algorithm searches the master camera's SURF feature point set for the 3 feature points with the smallest Euclidean distance; the results are recorded in the set Matches. The Euclidean distances of all feature point pairs in Matches are then computed, the smallest distance is denoted d, and all pairs in Matches whose distance is smaller than min(2d, minDist) form the set GoodMatches, which is the set of matched feature point pairs. Here minDist is a preset threshold that can be adjusted to the situation at hand, but the number of pairs in GoodMatches should be no fewer than 15.

In the above method, in step 1 the position mapping relationship between the master and slave cameras comprises two parts: the correspondence between master-camera picture coordinates and slave cameras, and the coordinate transformation between the master camera picture and the slave camera picture.

The correspondence between master-camera picture coordinates and a slave camera is represented by the convex hull surrounding that slave camera's matched feature points in the master camera picture.

From the matched feature point pairs in GoodMatches, the convex hull enclosing all of the feature points in the master camera picture is computed; in step 3, candidate targets falling inside this hull are assigned to that slave camera.

The coordinate transformation between the master and slave camera pictures is represented by an affine transformation.

From the corresponding image coordinates of the point pairs in GoodMatches, the affine transformation from the master camera picture to the slave camera picture is computed by least squares.

In the above method, in step 2 the candidate targets in the master camera picture are detected with the frame difference method,

candidate targets in the master camera picture are tracked with the continuously adaptive mean shift (CamShift) algorithm,

and the result of real-time candidate detection has the following form:

[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];

where:

ObjectID is the number of the candidate target,

Time is the time at which the candidate target appeared,

PosX_Left, PosY_Left, PosX_Right, PosY_Right are the time series of the coordinates of the top-left and bottom-right corners of the bounding box, respectively.

In the above method, in step 3 the importance of a target is described by the following formula:

E = E_leave + α × E_wait

where E_leave is an evaluation function describing the time until the target leaves the picture: the shorter that time, the larger its value. E_wait is an evaluation function describing how long the target has waited in the target queue: the longer it has gone uncaptured, the larger its value. α is a tunable parameter; the larger α is, the more weight is given to the order in which targets entered.

In the above method, in step 3 the time until a target leaves the picture is estimated by a function of the picture size, the target's position, and its estimated velocity: with w, h the width and height of the master camera picture, (x, y) the target's current position, [x0, y0] its position when it entered the picture, and the velocity estimated from the displacement since entry, the function computes the time for the target to travel in a straight line at constant speed, along its current direction of motion, from its current position to the boundary of the master camera picture.

In the above method, in step 4 the coordinate transformation between the master and slave camera pictures is used to convert the candidate target's coordinates in the master camera into relative coordinates on the slave camera's initial-position picture, and these relative coordinates are then converted into angular coordinates of the slave camera's lens direction according to the fisheye spherical projection rule. The slave camera's focal length is estimated as follows: if the required maximum of the target's width and height is l* pixels, the slave camera's focal length when the position mapping matrix was established is f, and the candidate target's width and height in the slave camera picture are w, h, then the adjusted focal length is f' = f × l* / max(w, h).

Brief Description of the Drawings

Fig. 1 is a flow chart of the active perception method for moving targets based on multi-camera collaboration according to an embodiment of the present invention.

Fig. 2 is a system configuration diagram of the active perception apparatus for moving targets based on multi-camera collaboration according to an embodiment of the present invention.

Fig. 3 is a flow chart of the master-slave camera calibration in the active perception method for moving targets based on multi-camera collaboration according to an embodiment of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the drawings and specific embodiments.

The system configuration of the active perception apparatus for moving targets based on multi-camera collaboration according to an embodiment of the present invention is shown in Fig. 2. The apparatus implementing the method comprises: at least two cameras, realizing the master-slave camera working mode; and one moving-target active perception apparatus. The master-slave working mode means that, for a target discovered in the master camera, slave cameras are mobilized for active perception to acquire high-quality video images.

The cameras used fall into two classes: panoramic cameras whose monitored scene is fixed, and PTZ cameras with pan, tilt, and zoom capability. In one embodiment, the master and slave cameras comprise one panoramic camera with a fixed field of view and multiple PTZ cameras; the panoramic camera serving as master has a wide field of view whose picture covers at least most of the monitored scene. In one embodiment the master camera is a fixed bullet camera; in another embodiment it is a PTZ camera with a fixed lens direction.

In one embodiment, the scene covered by a slave camera's picture overlaps the scene covered by the master camera but not the scenes covered by other slave cameras. In another embodiment, the scene covered by one slave camera partially overlaps the scenes covered by other slave cameras.

The active perception apparatus collects video images from the master camera and the multiple slave cameras and processes the collected images.

In one embodiment the apparatus runs on a personal computer (PC), an embedded processing box, or a board.

In one embodiment, the hardware hosting the apparatus is integrated into the master camera itself; in another embodiment, the apparatus runs on a computer connected to the master and slave cameras over a network.

As shown in Fig. 2, the active perception apparatus according to an embodiment comprises an image acquisition unit, a candidate target detection unit, a target selection unit, a position mapping unit, and a target tracking and confirmation unit.

The image acquisition unit collects the images of the master and slave cameras. It forwards images from the master camera to the candidate target detection unit for candidate detection, and forwards images from the slave cameras to the target tracking and confirmation unit for feature extraction and for analyzing and confirming the target category.

The candidate target detection unit receives the master camera's images from the image acquisition unit, monitors moving regions in them in real time according to the configured detection threshold, obtains the set of candidate targets, and passes this set to the target selection unit. In one embodiment, the candidate target detection unit extracts moving regions from the master camera picture with the frame difference method and tracks candidates with the continuously adaptive mean shift algorithm.

The target selection unit receives the candidate set from the detection unit, selects the candidate of highest importance at the current moment according to the target importance evaluation function, and sends the selected candidate to the position mapping unit.

The position mapping unit accepts the selected candidate, chooses a slave camera according to the master-slave coordinate mapping relationship, and sends the candidate's coordinates in the master camera picture to the chosen slave camera. The chosen slave camera receives the target's coordinates, computes the lens angle and focal length adjustments, aims at the candidate target, shoots it, and transmits high-quality target images to the target tracking and confirmation unit. In addition, during start-up of the apparatus, the position mapping unit drives the master and slave cameras through the automatic calibration procedure and records the position mapping relationships.

The target tracking and confirmation unit accepts the high-quality target images from the chosen slave camera, extracts image features from them, classifies these features with a classifier to obtain the target category, confirms targets of a predetermined type as targets of interest and places them in the set of targets of interest, and confirms targets not of a predetermined type as non-interest targets, which are not placed in the set.
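The unit structure above amounts to a small processing pipeline. The sketch below illustrates that data flow only; every class and method name in it is hypothetical rather than part of the disclosed apparatus.

```python
# Hypothetical wiring of the five units; all names here are illustrative.
class ActivePerceptionPipeline:
    def __init__(self, acquisition, detector, selector, mapper, confirmer):
        self.acquisition = acquisition    # image acquisition unit
        self.detector = detector          # candidate target detection unit
        self.selector = selector          # target selection unit
        self.mapper = mapper              # position mapping unit
        self.confirmer = confirmer        # target tracking/confirmation unit

    def step(self):
        master_frame = self.acquisition.read_master()
        candidates = self.detector.update(master_frame)
        target = self.selector.pick(candidates)   # highest importance
        if target is None:
            return
        slave = self.mapper.dispatch(target)       # steer a slave camera
        frames = self.acquisition.read_slave(slave)
        if self.confirmer.confirm(frames):         # predetermined type?
            self.selector.mark_interested(target)
```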

Fig. 1 shows the active perception method for moving targets based on multi-camera collaboration according to an embodiment of the present invention, which comprises 5 steps:

establishing the position mapping relationship based on image feature matching;

obtaining the candidate target set based on moving target detection;

selecting the candidate of highest importance from the candidate set according to the importance evaluation function, and selecting a slave camera for tracking and confirmation according to the master-slave position mapping relationship;

having the slave camera compute its lens orientation and focal length adjustments from the master-slave position mapping relationship, steer its lens onto the candidate target, and acquire high-quality images;

extracting target features from the high-quality target images and analyzing and confirming the target category.

These 5 steps of one embodiment are described in turn below.

(1) Establishing the position mapping relationship based on image feature matching

As shown in Fig. 3, the method calibrates the master camera against each slave camera by feature extraction and feature matching, establishing the position mapping relationship. Calibration here means establishing the coordinate mapping between the same object's positions in the master camera picture and in a slave camera picture.

The position mapping relationship between master and slave cameras comprises two parts: the correspondence between master-camera picture coordinates and slave cameras, and the coordinate transformation between the master and slave camera pictures.

The invention uses an affine transformation to describe the coordinate mapping between the master and slave camera pictures. Speeded-Up Robust Features (SURF) points are extracted from the master and slave pictures, and the positional correspondence of similar feature points in the two pictures is used to calibrate the master and slave cameras and establish the position mapping relationship. In one embodiment, the mapping between master-picture coordinates and slave-camera target coordinates is represented as an affine transformation matrix.

An affine transformation is the composition of a translation and a linear map. In image processing, affine transformations describe image translation, rotation, scaling, and reflection (mirroring). An affine transformation M can be written as

x' = a1·x + a2·y + tx,  y' = a3·x + a4·y + ty    (1)

The coordinate correspondence between the same scene in the master and slave camera pictures can be described by such an affine transformation. Given several known matched point pairs between the master and slave pictures, substituting them into equation (1) and solving for the parameters a1 through a4, tx, and ty by least squares yields the affine transformation between the two images, i.e., the position mapping matrix of the present invention.

In one embodiment, the matched point pairs are obtained by extracting SURF feature points from the initial-position images of the master and slave cameras and matching them. Matching uses brute-force search over similar SURF points in the two pictures. First, SURF points are extracted from both pictures. For each SURF point in the slave camera picture, the K nearest neighbor (KNN) algorithm searches the master picture's SURF point set for the 3 points with the smallest Euclidean distance, and the results are recorded in the set Matches. The Euclidean distances of all point pairs in Matches are computed, the smallest is denoted d, and all pairs in Matches with distance smaller than min(2d, minDist) form the set GoodMatches, the output set of matched feature point pairs. Here minDist is a preset threshold that can be adjusted to the situation; in one embodiment it is set to 1000.
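To make this step concrete, the following is a minimal sketch of the matching-and-fitting procedure using OpenCV's contrib SURF implementation and a plain least-squares affine fit; the function name, the availability of opencv-contrib, and the default minDist value are assumptions, not the patented implementation.

```python
# Minimal master/slave calibration sketch, assuming opencv-contrib with SURF.
import cv2
import numpy as np

def calibrate_pair(master_img, slave_img, min_dist=1000.0):
    surf = cv2.xfeatures2d.SURF_create()
    kp_m, des_m = surf.detectAndCompute(master_img, None)
    kp_s, des_s = surf.detectAndCompute(slave_img, None)

    # Brute-force KNN: the 3 nearest master features per slave feature.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for knn in bf.knnMatch(des_s, des_m, k=3) for m in knn]

    # Keep pairs closer than min(2*d, minDist), d = smallest observed distance.
    d = min(m.distance for m in matches)
    good = [m for m in matches if m.distance < min(2 * d, min_dist)]
    assert len(good) >= 15, "calibration needs at least 15 matched pairs"

    # Least-squares affine fit: [x', y'] = M @ [x, y, 1].
    src = np.array([kp_m[m.trainIdx].pt for m in good])   # master frame
    dst = np.array([kp_s[m.queryIdx].pt for m in good])   # slave frame
    A = np.hstack([src, np.ones((len(src), 1))])
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)           # shape (3, 2)
    return X.T                                            # 2x3 affine matrix M
```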

In one specific embodiment, the position mapping unit uses this positional correspondence to select the slave camera that will track and shoot a target awaiting confirmation. The correspondence is computed from the matched feature point pairs in GoodMatches: the position mapping unit computes the convex hull containing the master-picture feature points of GoodMatches, and in subsequent steps assigns candidate targets that fall inside this hull to the corresponding slave camera for tracking and shooting.
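A sketch of that hull construction and membership test follows; the helper names are hypothetical, and the OpenCV calls assume the standard convexHull and pointPolygonTest semantics.

```python
# Illustrative hull construction and membership test for camera assignment.
import cv2
import numpy as np

def build_hull(master_pts):
    # Convex hull of this slave camera's matched points in the master picture.
    return cv2.convexHull(np.asarray(master_pts, dtype=np.float32))

def falls_in_hull(hull, center_xy):
    # pointPolygonTest returns >= 0 when the point is inside or on the hull.
    return cv2.pointPolygonTest(hull, tuple(map(float, center_xy)), False) >= 0
```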

(2) Candidate target extraction and the candidate target set

Moving targets are the objects of greatest interest in video surveillance, so candidate extraction focuses on the moving targets in the scene. Moving regions in the scene are called target-potential regions; they may contain passing candidate targets. In the present invention, targets obtained through moving-region detection are called candidate targets, and together they form the candidate target set. The set records, for each candidate, the time since it entered the picture and the time series of its bounding box coordinates.

In the present invention, the candidate target detection unit obtains candidates in the master camera picture with the frame difference method, tracks them with the continuously adaptive mean shift (CamShift) algorithm, and records each candidate's position sequence into the candidate target set.
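A minimal sketch of the frame-difference detector follows, assuming OpenCV; the threshold and minimum-area values are illustrative, and in the full system each returned box would seed a CamShift tracker (cv2.CamShift) as described above.

```python
# Frame-difference moving-region detector; thresholds are illustrative.
import cv2

def detect_moving_regions(prev_gray, cur_gray, thresh=25, min_area=200):
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]    # (x, y, w, h) boxes
```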

In one embodiment, the candidate target set includes, but is not limited to:

[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];

where ObjectID is the candidate target's number, Time is the time at which it appeared, and PosX_Left, PosY_Left, PosX_Right, PosY_Right are the coordinates of the top-left and bottom-right corners of its bounding box, respectively.

(3) Candidate selection based on the target importance evaluation function

Target importance is evaluated comprehensively from a candidate's position, direction of motion, speed, and the length of time it has gone unperceived since entering the monitored scene. In one embodiment the principle is: the faster a target moves, the closer it is to the edge of the picture while moving toward that edge, and the longer it has gone unperceived since entering the scene, the higher its importance. The target selection unit orders candidates by importance and picks the most important one to perceive.

According to one embodiment, the importance evaluation function has the form:

E = E_leave + α × E_wait

where E_leave is an evaluation function describing the time until the target leaves the picture: the shorter that time, the larger its value. E_wait is an evaluation function describing how long the target has waited in the target queue: the longer it has gone uncaptured, the larger its value. α is a tunable parameter; the larger α is, the more weight is given to the order in which targets entered.

According to one embodiment, the time until a target leaves the picture is estimated by a function of the picture size, the target's position, and its estimated velocity: with w, h the width and height of the master camera image, (x, y) the target's current position, [x0, y0] its position when it entered the picture, and the velocity estimated from the displacement since entry, the function computes the time for the target to travel in a straight line at constant speed, along its current direction of motion, to the boundary of the monitored picture.
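The source renders the E_leave formula as an image that is not reproduced in the text, so the sketch below is one plausible reading of the description: the velocity is estimated from the displacement since entry, the leave time is the constant-velocity time to the nearest picture boundary, and E_leave grows as that time shrinks. The track attributes and the reciprocal form of E_leave are assumptions.

```python
# Importance score E = E_leave + alpha * E_wait under assumed functional forms.
import math
import time

def time_to_leave(x, y, vx, vy, w, h):
    # Time for a constant-velocity target to reach the nearest frame boundary.
    tx = (w - x) / vx if vx > 0 else x / -vx if vx < 0 else math.inf
    ty = (h - y) / vy if vy > 0 else y / -vy if vy < 0 else math.inf
    return min(tx, ty)

def importance(track, w, h, alpha=1.0, now=None):
    now = now if now is not None else time.time()
    (x, y), (x0, y0) = track.position, track.entry_position
    dt = max(now - track.entry_time, 1e-6)
    vx, vy = (x - x0) / dt, (y - y0) / dt        # velocity estimate
    e_leave = 1.0 / (1.0 + time_to_leave(x, y, vx, vy, w, h))
    e_wait = now - track.entry_time              # seconds not yet captured
    return e_leave + alpha * e_wait
```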

For the selected candidate of highest importance, the position mapping unit chooses a slave camera according to the position mapping relationship and sends the target coordinate sequence recorded in the candidate target set to that slave camera.

(4) Computing slave camera control parameters from the master-slave position mapping relationship

According to one embodiment, the position mapping matrix M produced by the initialization step is used for coordinate conversion between cameras. The candidate target's center point (x, y) in the master camera is transformed through the master-slave position mapping matrix M into coordinates relative to the center of the slave camera's initial picture:

(x', y') = M · (x, y, 1)^T

The relative coordinates (x', y') are two-dimensional pixel coordinates in the slave camera picture; the camera cannot be steered from them directly, so they must be converted into the slave camera's azimuth coordinates. The slave camera converts the relative coordinates on its initial-position picture into angular coordinates of its lens direction according to the fisheye spherical projection rule.

According to one embodiment, if the target's coordinates relative to the center of the slave camera's initial picture are (x, y), the slave camera picture's width and height are (w, h), and the horizontal and vertical fields of view are θw and θh, then the lens angle adjustments take the form of angular offsets proportional to the pixel offsets from the picture center: Δpan = (x / w) × θw and Δtilt = (y / h) × θh.

In one embodiment, the bounding box size is converted between the master and slave cameras by coordinate-transforming the target's top-left and bottom-right vertices; the size of the box spanned by the transformed vertices is the target's estimated size.

In one embodiment, the change in target size caused by adjusting the focal length (field of view) is computed from the inverse proportionality between the field of view and the focal length. A method and/or apparatus according to an embodiment can capture target snapshots of a fixed size at runtime; given a required snapshot size, the slave camera focal length can be estimated. If the required maximum of the target's width and height is l* pixels, the slave camera's focal length when the position mapping relationship was established is f, and the candidate target's estimated size in the slave camera picture is (w, h), then the adjusted focal length is f' = f × l* / max(w, h).

The slave camera adjusts its direction and focal length by the amounts computed above, aims at the target, and then tracks and shoots it continuously for a period of time, acquiring high-quality target images.
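Pulling the pieces of step (4) together, the sketch below maps a master-frame bounding box through the affine matrix M, derives pan/tilt offsets under the proportional angle model assumed above, and rescales the focal length; the helper and its fov_wh and l_star parameters are illustrative, not the patented control law.

```python
# Slave-camera control sketch: pan/tilt offsets and adjusted focal length.
def slave_control(M, box, frame_wh, fov_wh, f_calib, l_star):
    (x1, y1), (x2, y2) = box                      # master-frame bounding box
    def warp(px, py):                             # apply the 2x3 affine M
        return (M[0, 0] * px + M[0, 1] * py + M[0, 2],
                M[1, 0] * px + M[1, 1] * py + M[1, 2])
    (u1, v1), (u2, v2) = warp(x1, y1), warp(x2, y2)
    cx, cy = (u1 + u2) / 2, (v1 + v2) / 2         # target center, slave frame
    w_img, h_img = frame_wh
    fov_w, fov_h = fov_wh
    d_pan = (cx - w_img / 2) / w_img * fov_w      # angular offsets from center
    d_tilt = (cy - h_img / 2) / h_img * fov_h
    # Zoom: scale focal length so the target's larger side reaches l_star px.
    w_t, h_t = abs(u2 - u1), abs(v2 - v1)
    f_new = f_calib * l_star / max(w_t, h_t)
    return d_pan, d_tilt, f_new
```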

(5) Extracting target features from high-quality images and analyzing and confirming the target category

In one embodiment, the target tracking and confirmation unit receives the high-quality target images shot by the slave camera, extracts target features, analyzes the target category with a classifier, and updates the candidate's category according to the classification result. In one embodiment, targets whose type belongs to the predetermined set are confirmed as interested objects and placed in the set of targets of interest; targets not of a predetermined type are confirmed as non-interest targets and are not placed in the set.

In one embodiment, the extracted features are those that can identify the target type, chiefly faces, torsos, and limbs, or vehicle body shape, wheels, and license plate regions. The candidate target's category is confirmed through such distinguishing features: the classifier analyzes the target features in the picture and outputs the target's classification.

In one embodiment, the slave camera classifies every captured high-quality frame, the per-frame results are aggregated, and the most likely classification is taken as the target's class. Because the camera's rotation speed is limited, the first few frames of video may be strongly blurred or may miss the target, degrading the classification result; in one embodiment, low-quality images are therefore discarded during classification to avoid this adverse effect.
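As an illustration of that per-frame voting with a quality gate, the sketch below drops blurred frames with a variance-of-Laplacian sharpness check before voting; the classify callable and the sharpness threshold are assumptions.

```python
# Per-frame classification with majority voting and a simple sharpness gate.
from collections import Counter
import cv2

def confirm_category(frames, classify, min_sharpness=100.0):
    votes = Counter()
    for img in frames:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() < min_sharpness:
            continue                      # discard blurred early frames
        votes[classify(img)] += 1
    return votes.most_common(1)[0][0] if votes else None
```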

The above discloses only specific embodiments of the present invention. Without departing from the scope of the claims, those skilled in the art can make various corresponding changes and modifications based on the basic technical concept provided by the invention.

Claims (13)

1. A moving target active perception method based on multi-camera cooperation is characterized by comprising the following steps:
A) automatically calibrating the master camera and the slave camera by means of feature extraction and feature matching according to the pictures of the master camera and the slave camera, establishing a position mapping relation,
B) detecting a plurality of motion areas in the field of view of the main camera in real time according to the setting of a detection threshold value to obtain a set of candidate targets,
C) selecting the candidate target with the highest importance in the candidate target set according to the importance evaluation function, selecting the corresponding slave camera for tracking shooting according to the position mapping relation,
D) according to the mapping relation between the candidate target position and the position between the master camera and the slave camera, the slave camera calculates the lens azimuth angle and the zoom multiple, adjusts the slave camera to be aligned with the candidate target area, acquires the high-quality image of the candidate target,
E) extracting the characteristics of high-quality images of the targets, analyzing and confirming the target types, confirming the targets belonging to the preset setting types as attention targets according to the target classification results, putting the attention targets into an attention target set, confirming the targets not belonging to the preset setting types as non-attention targets, and not putting them into the attention target set.
2. The active perception method for moving objects based on multi-camera collaboration as claimed in claim 1, wherein the step a) includes:
A1) any slave camera that is not calibrated is selected,
A2) adjusting the focal length of the slave camera to the minimum value, adjusting the lens direction of the slave camera until the slave camera and the master camera have the maximized overlapped visual field,
A3) the accelerated robust features of the master and slave camera views are extracted separately,
A4) matching the accelerated robust feature points by using a K-Nearest Neighbor algorithm and a brute force search algorithm to obtain a matching result GoodMatches,
A5) calculating an affine matrix between the pictures of the master camera and the slave camera by using a least square method according to the matching result GoodMatches, to finish the calibration of the master camera and the slave camera,
A6) judging whether all the slave cameras are registered; if not, returning to the step A1), otherwise exiting. And the moving target active perception method based on multi-camera cooperation further comprises the following steps:
F) and C), confirming whether the confirmation of all the candidate targets is finished, if so, exiting, otherwise, returning to the step C).
3. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
the characteristic matching operation in the step A) uses a K nearest neighbor algorithm and a brute force search algorithm to match accelerated robust characteristic points in the pictures of the master camera and the slave camera,
for each accelerated robust feature point of the slave camera picture, searching the accelerated robust feature point set of the master camera for the 3 feature points with the smallest Euclidean distance by using a K nearest neighbor algorithm, and recording the results into a set Matches,
calculating Euclidean distances of all accelerated robust feature point pairs in a set of Matches, taking the minimum distance as d, and taking all point pairs with the distances smaller than min (2d, minDist) in the set of Matches to form a set of GoodMatches, wherein the set of GoodMatches is a matching feature point pair set, and minDist is a preset threshold value and can be adjusted according to actual conditions, but the number of the point pairs in the set of GoodMatches is not less than 15.
4. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
in the step A), the position mapping relationship between the master camera and the slave camera includes two parts: the correspondence between master camera picture coordinates and slave cameras, and the coordinate transformation relation between the pictures of the master camera and the slave camera,
the correspondence between master camera picture coordinates and a slave camera is represented by a convex hull surrounding the matching feature points in the master camera view,
from the pairs of matching feature points in the set GoodMatches, a convex hull is computed in the master camera view that can enclose all the feature points, and in step C), a candidate target falling in the convex hull is assigned to the slave camera.
The master camera and slave camera view coordinate conversion relationship is represented by affine transformation,
and (3) according to the corresponding relation of the image coordinate positions of the point pairs in the set GoodMatches, calculating affine transformation from the picture of the master camera to the picture of the slave camera by using a least square method.
5. The active perception method of moving objects based on multi-camera collaboration as claimed in claim 1, wherein:
in the step B), a candidate target in the main camera picture is detected by using a frame difference method,
a candidate target in the main camera view is tracked using a continuous adaptive mean shift algorithm,
and the results of the real-time detection of the candidate targets have the following form:
[ObjectID,Time,PosX_Left,PosY_Left,PosX_Right,PosY_Right];
wherein:
ObjectID indicates the number of candidate objects,
time denotes the Time of occurrence of the candidate object,
PosX _ Left, PosY _ Left, PosX _ Right, PosY _ Right represent the time series of coordinates of the upper Left and lower Right corners of the bounding box, respectively.
6. The active perception method for moving objects based on multi-camera coordination according to claim 1, characterized in that in the step C),
the importance of the target is characterized using the following formula:
E = E_leave + α × E_wait
wherein,
E_leave is an evaluation function describing the time for the target to leave the picture; the shorter the time for the target to leave the picture, the larger the value of the function,
E_wait is an evaluation function describing the waiting time of the target in the target queue; the longer the uncaptured time, the larger the value of the function,
α is an adjustable parameter; the larger α is, the more attention is paid to the target entering sequence,
the time that the target leaves the frame is characterized by a function of the picture size, the target position, and the estimated target velocity, wherein,
w and h are the width and height of the main camera image,
(x, y) is the current position of the target,
[x0, y0] is the position of the target when it enters the picture,
and the target's motion velocity is estimated from its displacement since entering the picture;
the time characterized by this function represents the time for the target to move in a straight line at a uniform velocity, in its current motion direction in the main camera picture, to the boundary of the picture.
7. The active perception method for moving objects based on multi-camera cooperation according to claim 1, wherein in the step D), the slave camera calculates the angular coordinates of its lens direction and its focal length using the position mapping relationship between the master camera and the slave camera generated in the step B);
the slave camera lens direction is calculated as follows:
the coordinates of the candidate target in the master camera picture are converted into relative coordinates in the slave camera's initial-position picture according to the coordinate mapping relationship, and these relative coordinates are then converted into the angular coordinates of the slave camera lens direction according to the fisheye spherical projection rule;
the slave camera focal length is calculated as follows:
if the maximum length and width of the given target is l × l pixels, the slave camera's focal length when the position mapping relationship was established is f, and the width and height of the candidate target in the slave camera picture are w and h, then the adjusted focal length can be derived accordingly.
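The adjusted-focal-length formula is likewise an image in the source. Under the usual pinhole assumption that the apparent size of a target scales linearly with focal length, one form consistent with the quantities named in the claim (an assumption, not the patent's verbatim formula) is:

```latex
\[
  f' = f \cdot \frac{l}{\max(w, h)}
\]
```

That is, the focal length is scaled so that the larger of the target's current width and height is magnified to the specified l pixels.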
8. An active target perception apparatus, comprising:
an image acquisition unit, configured to acquire the video images of the master camera and the slave camera;
a candidate target detection unit, configured to extract candidate targets from the video image of the master camera to form a candidate target set;
a target selection unit, configured to select the candidate target with the highest importance at the current moment from the candidate target set;
a position mapping unit, configured to establish the position mapping relationship between the master camera and the slave camera, select the slave camera that is to shoot the selected candidate target, and send the position information of the candidate target to the slave camera;
and a target tracking and confirmation unit, configured to analyze the target category from the high-quality target image shot by the slave camera, confirm a target belonging to a preset type as an attention target, and add it to the attention target set.
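The claim enumerates functional units rather than an algorithm; a structural sketch of one sensing cycle, with each unit modeled as an injected Python callable, may clarify the data flow. Every name, signature, and the preset type set below are illustrative assumptions, not the patent's API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class ActivePerceptionDevice:
    acquire: Callable[[], Any]          # image acquisition unit
    detect: Callable[[Any], List[Any]]  # candidate target detection unit
    select: Callable[[List[Any]], Any]  # target selection unit (max E)
    dispatch: Callable[[Any], Any]      # position mapping unit: points the
                                        # slave camera, returns its close-up
    classify: Callable[[Any], str]      # target tracking/confirmation unit
    preset_types: frozenset = frozenset({"person", "vehicle"})
    attention_targets: list = field(default_factory=list)

    def step(self):
        """Run one sensing cycle over the master camera picture."""
        frame = self.acquire()
        candidates = self.detect(frame)
        if not candidates:
            return
        target = self.select(candidates)           # highest-importance candidate
        closeup = self.dispatch(target)            # slave camera close-up shot
        if self.classify(closeup) in self.preset_types:
            self.attention_targets.append(target)  # confirmed attention target
```

A concrete device would bind these callables to the frame-difference detector, the importance ranking, and the master-to-slave mapping sketched above.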
9. The active target perception apparatus of claim 8, wherein the position mapping unit matches the accelerated robust feature points in the master and slave camera pictures using a K-nearest-neighbor algorithm and a brute-force search algorithm:
for each accelerated robust feature point of the slave camera picture, the 3 feature points with the shortest Euclidean distance are searched in the accelerated robust feature point set of the master camera using the K-nearest-neighbor algorithm, and the results are recorded into the set Matches;
the Euclidean distance of every accelerated robust feature point pair in the set Matches is calculated and the minimum distance taken as d; all point pairs in Matches whose distance is smaller than min(2d, minDist) form the set GoodMatches, which is the set of matching feature point pairs; minDist is a preset threshold that can be adjusted according to actual conditions, provided that the number of point pairs in GoodMatches is not less than 15.
10. The active target perception apparatus of claim 8, wherein the position mapping relationship between the master camera and the slave camera in the position mapping unit includes two parts: the picture coordinate correspondence between the master camera and the slave camera, and the coordinate transformation relationship between the pictures of the master camera and the slave camera;
the picture coordinate correspondence is represented by a convex hull surrounding the matching feature points in the master camera picture:
from the pairs of matching feature points in the set GoodMatches, a convex hull enclosing all the feature points is computed in the master camera picture, and candidate targets falling inside the convex hull are assigned to the slave camera;
the coordinate transformation relationship is represented by an affine transformation:
according to the correspondence of the picture coordinate positions of the point pairs in the set GoodMatches, the affine transformation from the master camera picture to the slave camera picture is computed using the least squares method.
11. The active target perception apparatus of claim 8, wherein the candidate target detection unit detects candidate targets in the main camera picture using a frame difference method,
a candidate target in the main camera picture is tracked using the continuously adaptive mean shift (CamShift) algorithm,
and the real-time detection result of a candidate target has the following form:
[ObjectID, Time, PosX_Left, PosY_Left, PosX_Right, PosY_Right];
wherein:
ObjectID denotes the number of the candidate target,
Time denotes the time of appearance of the candidate target,
PosX_Left, PosY_Left, PosX_Right, PosY_Right denote the time series of the coordinates of the upper-left and lower-right corners of the bounding box, respectively.
12. The active target perception apparatus of claim 8, wherein the target selection unit characterizes the importance of a target using the following formula:
E = E_leave + α × E_wait
wherein:
E_leave is an evaluation function describing the time for the target to leave the picture; the shorter the time for the target to leave the picture, the larger its value;
E_wait is an evaluation function describing the waiting time of the target in the target queue; the longer the target has gone uncaptured, the larger its value;
α is an adjustable parameter; the larger α is, the more weight is given to the order in which targets entered the picture.
the time for the target to leave the picture is characterized by a function in which:
w and h are the width and height of the main camera picture,
(x, y) is the current position of the target,
(x0, y0) is the position of the target when it entered the picture,
v is an estimate of the velocity of movement of the target;
the time characterized by this function is the time for the target, moving in a straight line at uniform velocity in its current direction of motion, to reach the boundary of the main camera picture.
13. The active target perception apparatus of claim 8 or 11, wherein, in the target tracking and confirmation unit, the slave camera calculates the angular coordinates of its lens direction and its focal length using the position mapping relationship between the master camera and the slave camera in the position mapping unit;
the slave camera lens direction is calculated as follows:
the coordinates of the candidate target in the master camera picture are converted into relative coordinates in the slave camera's initial-position picture according to the coordinate mapping relationship, and these relative coordinates are then converted into the angular coordinates of the slave camera lens direction according to the fisheye spherical projection rule;
the slave camera focal length is calculated as follows:
if the maximum length and width of the given target is l × l pixels, the slave camera's focal length when the position mapping relationship was established is f, and the width and height of the candidate target in the slave camera picture are w and h, then the adjusted focal length can be derived accordingly.
CN201711425735.9A, filed 2017-12-25: Method and device for active perception of moving target based on multi-camera collaboration (Active; granted as CN108111818B)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201711425735.9A | 2017-12-25 | 2017-12-25 | Method and device for active perception of moving target based on multi-camera collaboration
Publications (2)

Publication Number | Publication Date
CN108111818A | 2018-06-01
CN108111818B | 2019-05-03

Family ID: 62213191



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
TR01 | Transfer of patent right

Effective date of registration: 2021-04-27
Address after: No. 18 Chuanghui Street, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province
Patentee after: BUAA HANGZHOU INNOVATION INSTITUTE
Address before: No. 37 Xueyuan Road, Haidian District, 100191
Patentee before: BEIHANG University