CN102201121A - System and method for detecting article in video scene - Google Patents

Info

Publication number
CN102201121A
CN102201121A (application CN2010101294945A / CN201010129494A)
Authority
CN
China
Prior art keywords
background model
pixel
scene
video scene
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010101294945A
Other languages
Chinese (zh)
Inventor
陈建霖
杨智程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongfujin Precision Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Original Assignee
Hongfujin Precision Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongfujin Precision Industry Shenzhen Co Ltd, Hon Hai Precision Industry Co Ltd filed Critical Hongfujin Precision Industry Shenzhen Co Ltd
Priority to CN2010101294945A
Publication of CN102201121A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a system and a method for detecting objects in a video scene. The method comprises the following steps: (a) setting up an empty background model and receiving the first of N frames of a color video scene; (b) comparing each pixel of the second frame, taken as the current scene, with the corresponding pixel in the background model holding the first frame, and judging whether the pixel-value difference and the brightness difference are less than or equal to preset thresholds; (c) if both differences are less than or equal to the preset thresholds, determining that the pixel is a background pixel and adding it to the background model to generate a new background model, all background pixels together forming a background object; or (d) if the differences are not both less than or equal to the preset thresholds, determining that the pixel is a foreground pixel, all foreground pixels together forming a foreground object; and (e) taking each of the remaining frames in turn as the current scene, taking the background model obtained from detecting all frames preceding the current scene as the current background model, and performing steps (b) through (d) to detect the foreground and background objects in each frame. The invention allows a dynamic background to be updated in real time.

Description

Object Detection System and Method in a Video Scene

Technical Field

The present invention relates to detection systems and methods, and in particular to a system and method for detecting objects in a video scene.

Background Art

At present, existing techniques can detect foreground objects in the video scenes captured from a monitored area. However, these techniques have no standard test specification or process, and no method yet overcomes problems such as brightness changes in the monitored area, interference from periodic objects, and image shake.

Summary of the Invention

In view of the above, it is necessary to provide an object detection method for video scenes that detects foreground objects by building a background model and applying a foreground detection algorithm, tolerates changes in lighting, and updates a dynamic background, so as to achieve effective detection and monitoring.

In view of the above, it is also necessary to provide an object detection system for video scenes that detects foreground objects by building a background model and applying a foreground detection algorithm, tolerates changes in lighting, and updates a dynamic background, so as to achieve effective detection and monitoring.

A method for detecting objects in a video scene comprises the following steps: (a) setting up an empty background model and receiving the first of N frames of a color video scene; (b) taking the background model holding the first frame as the existing background model and the second frame as the current scene; (c) comparing each pixel of the current scene with the corresponding pixel in the existing background model to determine the pixel-value difference and brightness difference between them; (d) when both differences are less than or equal to preset thresholds, determining that the pixel is a background pixel and adding it to the existing background model to generate a new background model, all background pixels together forming a background object; or (e) when both differences are greater than the preset thresholds, determining that the pixel is a foreground pixel, all foreground pixels together forming a foreground object; and (f) taking each of the third through Nth frames in turn as the current scene, using the background model obtained from detecting all frames preceding the current scene as the existing background model, and performing steps (c) through (e) to detect the foreground and background objects in each frame.
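As a rough illustration of steps (c) through (e), the following sketch classifies each pixel of the current scene against the existing background model. The threshold values and the BT.601 luminance weights are assumptions; the patent leaves both the thresholds and the exact brightness measure implementation-defined.

```python
import numpy as np

PIXEL_THRESHOLD = 30   # assumed max per-channel colour difference for "background"
LUMA_THRESHOLD = 25    # assumed max brightness difference for "background"

def luminance(frame):
    """Per-pixel brightness from RGB (ITU-R BT.601 weights)."""
    return frame.astype(float) @ np.array([0.299, 0.587, 0.114])

def classify_pixels(current, background):
    """Return a boolean mask that is True where a pixel is foreground."""
    color_diff = np.abs(current.astype(int) - background.astype(int)).max(axis=-1)
    luma_diff = np.abs(luminance(current) - luminance(background))
    # Background: both differences within their thresholds (step (d));
    # otherwise foreground (step (e)).
    is_background = (color_diff <= PIXEL_THRESHOLD) & (luma_diff <= LUMA_THRESHOLD)
    return ~is_background

# Two 2x2 RGB frames; one pixel changes drastically, as a foreground object would.
bg = np.full((2, 2, 3), 100, dtype=np.uint8)
cur = bg.copy()
cur[0, 0] = [255, 0, 0]
mask = classify_pixels(cur, bg)   # True only at the changed pixel
```

Because the background test requires both differences to stay within their thresholds, a pixel whose colour changes strongly is still classed as foreground even if its brightness happens to stay close to the model.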

An object detection system in a video scene runs in an electronic device and comprises: a model building unit for setting up an empty background model and receiving the first of N frames of a color video scene; a pixel separation unit for taking the background model holding the first frame as the existing background model and the second frame as the current scene, comparing each pixel of the current scene with the corresponding pixel in the existing background model to determine the pixel-value difference and brightness difference between them, determining that a pixel is a background pixel when both differences are less than or equal to preset thresholds, all background pixels together forming a background object, or determining that a pixel is a foreground pixel when both differences are greater than the preset thresholds, all foreground pixels together forming a foreground object; and a storage unit for adding the background pixels to the existing background model to generate a new background model. The pixel separation unit further takes each of the third through Nth frames in turn as the current scene, takes the background model obtained from detecting all frames preceding the current scene as the existing background model, and continues comparing the current scene with the corresponding pixels of the existing background model until the foreground and background objects in every frame have been detected.

Compared with the prior art, the described object detection system and method build the background model from color pixels; such a model discriminates better than the models commonly built from gray-scale pixels. In addition, the color-pixel background model and the foreground detection algorithm treat objects or scenery that remain in the video scene for a period of time as background and exclude interference from sudden brightness changes and periodic objects while detecting foreground objects, so as to achieve effective detection and monitoring.

Description of the Drawings

FIG. 1 is a functional unit diagram of a preferred embodiment of the object detection system in a video scene according to the present invention.

FIG. 2 is a flowchart of a preferred embodiment of the object detection method in a video scene according to the present invention.

FIG. 3 and FIG. 4 are schematic diagrams of foreground objects detected by the present invention and of changes in the background model.

Description of Main Reference Numerals

Electronic device: 1
Object detection system in a video scene: 10
Storage device: 20
Processor: 30
Display device: 40
Model building unit: 100
Pixel separation unit: 102
Storage unit: 104
Temporary background model monitoring unit: 106
Background model updating unit: 108

Detailed Description

FIG. 1 is a functional unit diagram of a preferred embodiment of the object detection system 10 in a video scene according to the present invention. The object detection system 10 is installed and runs in an electronic device 1, which further includes a storage device 20, a processor 30, and a display device 40. The electronic device 1 may be a monitoring device, a computer, or any other suitable device with data processing capability.

The storage device 20 stores the computerized program code of the object detection system 10 as well as the color video scenes captured by the monitoring device. The storage device 20 may be memory built into the electronic device 1 or external memory connected to it.

The processor 30 executes the computerized program code of the object detection system 10. By building an empty background model, it effectively detects foreground objects in the color video scenes captured by the monitoring device and automatically treats objects or scenery that remain in the monitored area for a period of time as background, improving robustness against sudden brightness changes and interference from periodic objects. Here, a foreground object is an object or scene that appears in the monitored area, such as a person or a vehicle; detecting foreground objects makes it possible to promptly identify people or things appearing in the monitored area.

The display device 40 displays the color video scenes captured by the monitoring device.

The object detection system 10 includes a model building unit 100, a pixel separation unit 102, a storage unit 104, a temporary background model monitoring unit 106, and a background model updating unit 108, whose functions are described in detail with reference to FIG. 2 through FIG. 4.

It should be noted that foreground object detection in this embodiment consists of three parts. The first part is the training and building of the background model: N frames of a color video scene are received, and a background model is built from their color pixels. The second part is foreground detection: for frames after the Nth, the background model built in the first part is used to distinguish foreground from background. The third part is background model updating, which uses a dual-layer background model mechanism consisting of the background model used in the second part plus an additional temporary background model. A preset time interval determines whether this dual-layer model needs to be updated; updating it allows the system to tolerate lighting changes in the monitored area and to update a dynamic background automatically. The detailed process is shown in FIG. 2.
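The three-part pipeline above can be sketched as follows: the first frame seeds the model, subsequent frames are split into foreground and background, and a temporary second-layer model promotes static foreground to background. The threshold, the blending factor, and the stability interval are illustrative assumptions; the patent only requires a preset time interval.

```python
import numpy as np

STABLE_FRAMES = 3   # assumed: frames the temporary model must survive unchanged

def detect_sequence(frames, threshold=30):
    background = frames[0].astype(float)          # part 1: model seeding
    temp_model, stable_count = None, 0
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background).max(axis=-1)
        fg = diff > threshold                     # part 2: foreground test
        masks.append(fg)
        # Background pixels refine the current model (simple running average).
        background[~fg] = 0.5 * background[~fg] + 0.5 * frame[~fg]
        # Part 3: foreground goes into a temporary model; if it stays
        # unchanged for STABLE_FRAMES frames, it replaces the model.
        if fg.any():
            if temp_model is not None and np.array_equal(temp_model, frame):
                stable_count += 1
                if stable_count >= STABLE_FRAMES:
                    background = frame.astype(float)
                    temp_model, stable_count = None, 0
            else:
                temp_model, stable_count = frame.copy(), 0
    return masks, background

# A static object appears in frame 2 and never moves: after the stability
# interval it is absorbed into the background and stops being foreground.
frames = [np.full((1, 1, 3), 50, np.uint8)] + [np.full((1, 1, 3), 200, np.uint8)] * 5
masks, final_model = detect_sequence(frames)
```

In this run the object is reported as foreground for the first few frames and then, once the temporary model has held steady long enough, it merges into the background and later frames produce an empty foreground mask.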

FIG. 2 is a flowchart of a preferred embodiment of the object detection method in a video scene according to the present invention. The flow is illustrated using the foreground object detection of two of the N frames as an example; foreground object detection in the other frames follows the same method.

In step S300, the model building unit 100 sets up an empty background model and receives the first of the N frames of the color video scene; that is, the empty background model stores the first frame. In this embodiment, foreground detection for the second through Nth frames and for frames after the Nth does not require setting up a new empty background model.

In step S302, one of the N frames is taken in turn as the current scene, and the background model generated by detecting the preceding frames is taken as the existing background model.

In step S304, the pixel separation unit 102 compares each pixel of the current scene with the pixels of the existing background model to determine the pixel-value difference and brightness difference between corresponding pixels. In this embodiment, for the second frame the existing background model is the first frame stored in the empty background model; after the second frame has been processed, the third frame is processed using the background model generated from detecting the first and second frames, and so on until all frames have been processed. For example, as shown in FIG. 3, the Nth frame uses the background model A0 obtained from detecting frames 1 through N-1 as the existing background model, and the (N+1)th frame uses background model A.

In step S306, the pixel separation unit 102 judges whether both the pixel-value difference and the brightness difference determined above are less than or equal to preset thresholds.

If both the pixel-value difference and the brightness difference between the pixel and the corresponding pixel of the existing background model are less than or equal to the preset thresholds, then in step S308 the pixel separation unit 102 determines that the pixel is a background pixel, the storage unit 104 adds the pixel to the existing background model to generate a new background model, and the flow proceeds to step S318. In this embodiment, the object composed of all background pixels is called a background object. For example, suppose no external object (such as a person or a vehicle) enters the monitored area and only the lighting changes slightly; if the changed lighting does not cause the pixels of the current scene to differ much from the existing background model, the pixel separation unit 102 still determines that they are background pixels, and the storage unit 104 adds them to the existing background model to generate a new background model.
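This light-tolerance behaviour can be shown concretely: a small, uniform brightness shift stays within the threshold, so every pixel is still judged background and the model simply absorbs the new values. The threshold value is an illustrative assumption, as is the simple replacement used as the merge step.

```python
import numpy as np

PIXEL_THRESHOLD = 30  # assumed threshold; the patent does not fix a value

background = np.full((2, 2, 3), 120, dtype=np.uint8)
dimmer = background.astype(int) - 10        # slight, uniform lighting drop
diff = np.abs(dimmer - background.astype(int)).max(axis=-1)
still_background = diff <= PIXEL_THRESHOLD  # every pixel passes the test
# Background pixels are merged into the model, giving the new background
# model of step S308 (here the merge simply takes the new values).
new_model = np.where(still_background[..., None], dimmer, background)
```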

Conversely, if both the pixel-value difference and the brightness difference between the pixel and the corresponding pixel of the existing background model are greater than the preset thresholds, then in step S310 the pixel separation unit 102 determines that the pixel is a foreground pixel; in this embodiment, the object composed of all foreground pixels is called a foreground object. As shown in FIG. 3 and FIG. 4, if the background model composed from the first through (N-1)th color video frames is A0, consisting of the trees and road that remain in the monitored area, and a vehicle enters the monitored area in the Nth frame, the detection of step S306 determines that the pixels forming the vehicle constitute a foreground object.

In step S312, the storage unit 104 temporarily stores the pixels of the foreground object from step S310 together with the existing background model, obtaining a temporary background model B.

In step S314, the temporary background model monitoring unit 106 monitors in real time whether the pixel values and brightness values of the pixels in the temporary background model B change within a preset time interval. If they change within that interval, with the changed temporary background model denoted B`, the monitoring unit 106 repeats step S314 to judge whether B` changes within the preset time interval. Conversely, if the pixel values and brightness values of the pixels in the temporary background model B (or B`) do not change within the preset time interval, the flow proceeds to step S316.

In step S316, the background model updating unit 108 updates the existing background model with the temporary background model B or B`, generating a new background model. For example, as shown in FIG. 4, the updating unit 108 updates the existing background model with temporary background model B to obtain a new background model (background model A). For frames after the Nth, such as the (N+1)th frame in FIG. 3, after the pixel separation unit 102 detects a foreground object and the foreground object is temporarily stored in temporary background model B`, if the monitoring unit 106 finds that B` does not change within the preset time interval, the updating unit 108 updates background model A with B` to obtain background model A`, and so on: the background model is continually updated. This real-time updating avoids interference from image shake, lighting changes, and periodic objects, detects foreground objects in the video scene more accurately, and so achieves effective monitoring of the monitored area. In addition, the method automatically treats objects that remain in the monitored area for a period of time as background.
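The monitoring decision of steps S314 through S316 can be sketched as a small function: successive snapshots of the temporary background model are compared, and once a snapshot survives the preset interval unchanged it replaces the existing model. Counting the interval in frames rather than seconds is an assumption for illustration.

```python
import numpy as np

def monitor_and_update(existing, temp_snapshots, interval=2):
    """Return the background model after watching the temporary model."""
    stable, reference = 0, temp_snapshots[0]
    for snap in temp_snapshots[1:]:
        if np.array_equal(snap, reference):
            stable += 1
            if stable >= interval:       # B (or B`) held steady long enough
                return snap.copy()       # step S316: becomes the new model
        else:
            stable, reference = 0, snap  # model changed: now watching B`
    return existing                      # never stabilised: keep the old model

old = np.zeros((1, 1, 3))
candidate = np.ones((1, 1, 3))
steady = monitor_and_update(old, [candidate] * 3)               # stabilises
jittery = monitor_and_update(old, [candidate, old, candidate])  # keeps changing
```

A stable candidate replaces the model; a candidate that keeps changing resets the watch each time and the old model is retained, which is what keeps periodic objects out of the background.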

In step S318, the pixel separation unit 102 checks the received color video scenes to judge whether any frame has not yet been detected, that is, whether the pixels of any frame have not yet been separated into foreground and background objects. If not, the flow ends. If so, the flow returns to step S304 with the undetected frame as the current scene and the background model generated from detecting the preceding frames as the existing background model, and steps S304 through S316 are performed in turn.

Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions may be modified or equivalently replaced without departing from their spirit and scope.

Claims (6)

1. A method for detecting objects in a video scene, characterized in that the method comprises the steps of:
(a) setting up an empty background model and receiving the first of N frames of a color video scene;
(b) taking the background model holding the first frame as the existing background model and the second frame as the current scene;
(c) comparing each pixel of the current scene with the corresponding pixel in the existing background model to determine the pixel-value difference and brightness difference between them;
(d) when both the determined pixel-value difference and brightness difference are less than or equal to preset thresholds, judging the pixel to be a background pixel and adding it to the existing background model to generate a new background model, the object composed of all background pixels being a background object; or
(e) when both the determined pixel-value difference and brightness difference are greater than the preset thresholds, judging the pixel to be a foreground pixel, the object composed of all foreground pixels being a foreground object; and
(f) taking each of the third through Nth frames in turn as the current scene, taking the background model obtained from detecting all frames preceding the current scene as the existing background model, and performing steps (c) through (e) to detect the foreground and background objects in each frame.
2. The method for detecting objects in a video scene as claimed in claim 1, characterized in that, between step (d) and step (e), the method further comprises the steps of:
(d1) temporarily storing the foreground pixels and the existing background model to obtain a temporary background model B;
(d2) monitoring whether the pixel value and brightness value of each pixel in the temporary background model B change within a preset time interval;
(d3) if the pixel value and brightness value of each pixel in the temporary background model B do not change within the preset time interval, updating the existing background model with the temporary background model B to generate a new background model; or
(d4) if the pixel value and brightness value of the pixels in the temporary background model B change within the preset time interval, the changed temporary background model being B`, returning to step (d2) to monitor whether B` changes within a preset time interval.
3. An object detection system in a video scene, running in an electronic device, characterized in that the system comprises:
a model building unit for setting up an empty background model and receiving the first of N frames of a color video scene;
a pixel separation unit for taking the background model holding the first frame as the existing background model and the second frame as the current scene, comparing each pixel of the current scene with the corresponding pixel in the existing background model to determine the pixel-value difference and brightness difference between them, judging a pixel to be a background pixel when both differences are less than or equal to preset thresholds, the object composed of all background pixels being a background object, or judging a pixel to be a foreground pixel when both differences are greater than the preset thresholds, the object composed of all foreground pixels being a foreground object; and
a storage unit for adding the background pixels to the existing background model to generate a new background model;
wherein the pixel separation unit further takes each of the third through Nth frames in turn as the current scene, takes the background model obtained from detecting all frames preceding the current scene as the existing background model, and continues comparing the current scene with the corresponding pixels of the existing background model to detect the foreground and background objects in each frame.
4. The object detection system in a video scene as claimed in claim 3, characterized in that the storage unit is further used for temporarily storing the foreground pixels and the existing background model to obtain a temporary background model B.
5. The object detection system in a video scene as claimed in claim 4, characterized in that the system further comprises:
a temporary background model monitoring unit for monitoring in real time whether the pixel values and brightness values of the pixels in the temporary background model B change within a preset time interval; and
a background model updating unit for updating the existing background model with the temporary background model B to generate a new background model when the monitoring shows that the pixel value and brightness value of each pixel in the temporary background model B do not change within the preset time interval.
6. The object detection system in a video scene as claimed in claim 5, characterized in that the temporary background model monitoring unit is further used for monitoring, when the monitoring shows that the pixel values and brightness values of the pixels in the temporary background model B have changed within the preset time interval and the changed temporary background model is B`, whether B` changes within the preset time interval.
CN2010101294945A 2010-03-23 2010-03-23 System and method for detecting article in video scene Pending CN102201121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101294945A CN102201121A (en) 2010-03-23 2010-03-23 System and method for detecting article in video scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101294945A CN102201121A (en) 2010-03-23 2010-03-23 System and method for detecting article in video scene

Publications (1)

Publication Number Publication Date
CN102201121A true CN102201121A (en) 2011-09-28

Family

ID=44661771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101294945A Pending CN102201121A (en) 2010-03-23 2010-03-23 System and method for detecting article in video scene

Country Status (1)

Country Link
CN (1) CN102201121A (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136742A (en) * 2011-11-28 2013-06-05 财团法人工业技术研究院 Foreground detection device and method
CN103414855A (en) * 2013-08-23 2013-11-27 北京奇艺世纪科技有限公司 Video processing method and system
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9153028B2 (en) 2012-01-17 2015-10-06 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US9945660B2 (en) 2012-01-17 2018-04-17 Leap Motion, Inc. Systems and methods of locating a control object appendage in three dimensional (3D) space
CN108924423A (en) * 2018-07-18 2018-11-30 曾文斌 Method for eliminating interfering objects in photos taken from a fixed camera position
CN110018529A (en) * 2019-02-22 2019-07-16 南方科技大学 Rainfall measurement method, rainfall measurement device, computer equipment and storage medium
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
CN111260695A (en) * 2020-01-17 2020-06-09 桂林理工大学 Debris identification algorithm, system, server and medium
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
CN111510668A (en) * 2019-01-30 2020-08-07 原盛科技股份有限公司 Motion detection method for motion sensor
US10739862B2 (en) 2013-01-15 2020-08-11 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
EP3699878A1 (en) * 2019-02-20 2020-08-26 Toshiba TEC Kabushiki Kaisha Article information reading apparatus
US10769799B2 (en) 2018-08-24 2020-09-08 Ford Global Technologies, Llc Foreground detection
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11460851B2 (en) 2019-05-24 2022-10-04 Ford Global Technologies, Llc Eccentricity image fusion
US11521494B2 (en) 2019-06-11 2022-12-06 Ford Global Technologies, Llc Vehicle eccentricity mapping
US11662741B2 (en) 2019-06-28 2023-05-30 Ford Global Technologies, Llc Vehicle visual odometry
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11783707B2 (en) 2018-10-09 2023-10-10 Ford Global Technologies, Llc Vehicle path planning
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12046047B2 (en) 2021-12-07 2024-07-23 Ford Global Technologies, Llc Object detection
US12154238B2 (en) 2014-05-20 2024-11-26 Ultrahaptics IP Two Limited Wearable augmented reality devices with object detection and tracking
US12260023B2 (en) 2012-01-17 2025-03-25 Ultrahaptics IP Two Limited Systems and methods for machine control
US12299207B2 (en) 2015-01-16 2025-05-13 Ultrahaptics IP Two Limited Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
US12314478B2 (en) 2014-05-14 2025-05-27 Ultrahaptics IP Two Limited Systems and methods of tracking moving hands and recognizing gestural interactions
US12482298B2 (en) 2014-03-13 2025-11-25 Ultrahaptics IP Two Limited Biometric aware object detection and tracking

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110903A (en) * 2007-08-31 2008-01-23 湖北科创高新网络视频股份有限公司 Method and system for video data real-time de-noising
CN101281596A (en) * 2007-04-05 2008-10-08 三菱电机株式会社 Method for detecting legacy objects in a scene
CN101510304A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281596A (en) * 2007-04-05 2008-10-08 三菱电机株式会社 Method for detecting legacy objects in a scene
CN101110903A (en) * 2007-08-31 2008-01-23 湖北科创高新网络视频股份有限公司 Method and system for video data real-time de-noising
CN101510304A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LAURO SNIDARO ET AL.: "Video Security for Ambient Intelligence", 《IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS--PART A: SYSTEMS AND HUMANS》, vol. 35, no. 1, 31 January 2005 (2005-01-31), pages 134 - 136, XP011123558, DOI: doi:10.1109/TSMCA.2004.838478 *
WU ZHONGSHAN ET AL.: "A Practical Background Extraction and Update Algorithm", 《Journal of Xiamen University (Natural Science)》, vol. 47, no. 3, 31 May 2008 (2008-05-31), pages 349 - 350 *

Cited By (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136742A (en) * 2011-11-28 2013-06-05 财团法人工业技术研究院 Foreground detection device and method
US9697643B2 (en) 2012-01-17 2017-07-04 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US9778752B2 (en) 2012-01-17 2017-10-03 Leap Motion, Inc. Systems and methods for machine control
US9153028B2 (en) 2012-01-17 2015-10-06 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US12260023B2 (en) 2012-01-17 2025-03-25 Ultrahaptics IP Two Limited Systems and methods for machine control
US9436998B2 (en) 2012-01-17 2016-09-06 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US10699155B2 (en) 2012-01-17 2020-06-30 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9495613B2 (en) 2012-01-17 2016-11-15 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging using formed difference images
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US9626591B2 (en) 2012-01-17 2017-04-18 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9652668B2 (en) 2012-01-17 2017-05-16 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9672441B2 (en) 2012-01-17 2017-06-06 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US11782516B2 (en) 2012-01-17 2023-10-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US10565784B2 (en) 2012-01-17 2020-02-18 Ultrahaptics IP Two Limited Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space
US11994377B2 (en) 2012-01-17 2024-05-28 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US11308711B2 (en) 2012-01-17 2022-04-19 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9934580B2 (en) 2012-01-17 2018-04-03 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9945660B2 (en) 2012-01-17 2018-04-17 Leap Motion, Inc. Systems and methods of locating a control object appendage in three dimensional (3D) space
US9767345B2 (en) 2012-01-17 2017-09-19 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US10767982B2 (en) 2012-01-17 2020-09-08 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US9741136B2 (en) 2012-01-17 2017-08-22 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10366308B2 (en) 2012-01-17 2019-07-30 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US10410411B2 (en) 2012-01-17 2019-09-10 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US12086327B2 (en) 2012-01-17 2024-09-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US10097754B2 (en) 2013-01-08 2018-10-09 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US12204695B2 (en) 2013-01-15 2025-01-21 Ultrahaptics IP Two Limited Dynamic, free-space user interactions for machine control
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US11353962B2 (en) 2013-01-15 2022-06-07 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US10739862B2 (en) 2013-01-15 2020-08-11 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11874970B2 (en) 2013-01-15 2024-01-16 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US12405673B2 (en) 2013-01-15 2025-09-02 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11693115B2 (en) 2013-03-15 2023-07-04 Ultrahaptics IP Two Limited Determining positional information of an object in space
US12306301B2 (en) 2013-03-15 2025-05-20 Ultrahaptics IP Two Limited Determining positional information of an object in space
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US12333081B2 (en) 2013-04-26 2025-06-17 Ultrahaptics IP Two Limited Interacting with a machine using gestures in first and second user-specific virtual planes
US11099653B2 (en) 2013-04-26 2021-08-24 Ultrahaptics IP Two Limited Machine responsiveness to dynamic user movements and gestures
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US10452151B2 (en) 2013-04-26 2019-10-22 Ultrahaptics IP Two Limited Non-tactile interface systems and methods
CN103414855B (en) * 2013-08-23 2017-06-20 北京奇艺世纪科技有限公司 Video processing method and system
CN103414855A (en) * 2013-08-23 2013-11-27 北京奇艺世纪科技有限公司 Video processing method and system
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11776208B2 (en) 2013-08-29 2023-10-03 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12086935B2 (en) 2013-08-29 2024-09-10 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12236528B2 (en) 2013-08-29 2025-02-25 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US11282273B2 (en) 2013-08-29 2022-03-22 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11461966B1 (en) 2013-08-29 2022-10-04 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US12242312B2 (en) 2013-10-03 2025-03-04 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US12265761B2 (en) 2013-10-31 2025-04-01 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US12482298B2 (en) 2014-03-13 2025-11-25 Ultrahaptics IP Two Limited Biometric aware object detection and tracking
US12314478B2 (en) 2014-05-14 2025-05-27 Ultrahaptics IP Two Limited Systems and methods of tracking moving hands and recognizing gestural interactions
US12154238B2 (en) 2014-05-20 2024-11-26 Ultrahaptics IP Two Limited Wearable augmented reality devices with object detection and tracking
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US12095969B2 (en) 2014-08-08 2024-09-17 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US12299207B2 (en) 2015-01-16 2025-05-13 Ultrahaptics IP Two Limited Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN108924423A (en) * 2018-07-18 2018-11-30 曾文斌 Method for eliminating interfering objects in photos taken from a fixed camera position
US10769799B2 (en) 2018-08-24 2020-09-08 Ford Global Technologies, Llc Foreground detection
US11783707B2 (en) 2018-10-09 2023-10-10 Ford Global Technologies, Llc Vehicle path planning
CN111510668A (en) * 2019-01-30 2020-08-07 原盛科技股份有限公司 Motion detection method for motion sensor
CN113992887B (en) * 2019-01-30 2024-05-17 原相科技股份有限公司 Motion detection method using motion sensor
CN111510668B (en) * 2019-01-30 2021-10-19 原相科技股份有限公司 Motion detection method for motion sensor
CN113992887A (en) * 2019-01-30 2022-01-28 原相科技股份有限公司 Motion detection method for motion sensor
US11336869B2 (en) 2019-01-30 2022-05-17 Pixart Imaging Inc. Motion detection methods and motion sensors capable of more accurately detecting true motion event
EP3699878A1 (en) * 2019-02-20 2020-08-26 Toshiba TEC Kabushiki Kaisha Article information reading apparatus
CN111599118A (en) * 2019-02-20 2020-08-28 东芝泰格有限公司 Article information reading apparatus, article information reading control method, readable storage medium, and electronic device
CN110018529A (en) * 2019-02-22 2019-07-16 南方科技大学 Rainfall measurement method, rainfall measurement device, computer equipment and storage medium
US11460851B2 (en) 2019-05-24 2022-10-04 Ford Global Technologies, Llc Eccentricity image fusion
US11521494B2 (en) 2019-06-11 2022-12-06 Ford Global Technologies, Llc Vehicle eccentricity mapping
US11662741B2 (en) 2019-06-28 2023-05-30 Ford Global Technologies, Llc Vehicle visual odometry
CN111260695A (en) * 2020-01-17 2020-06-09 桂林理工大学 Debris identification algorithm, system, server and medium
US12046047B2 (en) 2021-12-07 2024-07-23 Ford Global Technologies, Llc Object detection

Similar Documents

Publication Publication Date Title
CN102201121A (en) System and method for detecting article in video scene
US11145039B2 (en) Dynamic tone mapping method, mobile terminal, and computer readable storage medium
TW201133358A (en) System and method for detecting objects in a video image
JP4668978B2 (en) Flame detection method and apparatus
CN106851263B (en) Video quality diagnosis method and system based on timing self-learning module
CN101930610A (en) Moving Object Detection Method Using Adaptive Background Model
CN110572636B (en) Camera contamination detection method and device, storage medium and electronic equipment
CN105828065B (en) A kind of video pictures overexposure detection method and device
CN108088654A (en) Projector quality determining method and its electronic equipment
CN109584175B (en) Image processing method and device
JP2004157979A (en) Image motion detection apparatus and computer program
CN105809710B (en) System and method for detecting moving objects
US7982774B2 (en) Image processing apparatus and image processing method
CN113596344A (en) Shooting processing method and device, electronic equipment and readable storage medium
CN111127358A (en) Image processing method, device and storage medium
CN110210401B (en) Intelligent target detection method under weak light
CN120198858B (en) A batch image data processing method and system for intelligent manufacturing production line
CN115035443A (en) Method, system and device for detecting fallen garbage based on picture shooting
CN112449115B (en) Shooting method and device and electronic equipment
CN114495414B (en) Smoke detection system and smoke detection method
CN103810691B (en) Video-based automatic teller machine monitoring scene detection method and apparatus
CN116012785B (en) Fire level determining method, device, equipment and medium
CN114663843B (en) Road fog detection method, device, electronic device and storage medium
KR20140143918A (en) Method and Apparatus for Detecting Foregroud Image with Separating Foregroud and Background in Image
CN110853001B (en) Transformer substation foreign matter interference prevention image recognition method, system and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110928