CN102201121A - System and method for detecting article in video scene - Google Patents
- Publication number
- CN102201121A CN102201121A CN2010101294945A CN201010129494A CN102201121A CN 102201121 A CN102201121 A CN 102201121A CN 2010101294945 A CN2010101294945 A CN 2010101294945A CN 201010129494 A CN201010129494 A CN 201010129494A CN 102201121 A CN102201121 A CN 102201121A
- Authority
- CN
- China
- Prior art keywords
- background model
- pixel
- scene
- video scene
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to detection systems and methods, and more particularly to a system and method for detecting objects in a video scene.
Background Art
At present, the prior art can detect foreground objects in video scenes captured from a monitored area. However, the prior art lacks a standard test specification or procedure and offers no way to overcome the following problems: brightness changes in the monitored area, interference from periodic objects, image shaking, and the like.
Summary of the Invention
In view of the above, it is necessary to provide a method for detecting objects in a video scene which detects foreground objects by building a background model and applying a foreground detection algorithm, tolerates changes in lighting, and updates the dynamic background, so as to achieve effective detection and monitoring.
In view of the above, it is also necessary to provide an object detection system for a video scene which detects foreground objects by building a background model and applying a foreground detection algorithm, tolerates changes in lighting, and updates the dynamic background, so as to achieve effective detection and monitoring.
A method for detecting objects in a video scene comprises the following steps: (a) setting up an empty background model and receiving the first of N color video scenes; (b) taking the background model that stores the first video scene as the existing background model, and the second video scene as the current scene; (c) comparing each pixel in the current scene with the corresponding pixel in the existing background model to determine the pixel value difference and the brightness difference between them; (d) when both the determined pixel value difference and brightness difference are less than or equal to a preset threshold, determining that the pixel is a background pixel and adding it to the existing background model to generate a new background model, the object composed of all background pixels being a background object; or (e) when both the determined pixel value difference and brightness difference are greater than the preset threshold, determining that the pixel is a foreground pixel, the object composed of all foreground pixels being a foreground object; and (f) taking each of the third through Nth video scenes in turn as the current scene, taking the background model obtained from detecting all scenes preceding the current scene as the existing background model, and performing steps (c) through (e) to detect the foreground and background objects in each video scene.
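Steps (a) through (f) above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation: the threshold values, the per-channel maximum as the "pixel value difference," and the Rec. 601 luma as the "brightness" are all assumptions, since the patent does not specify how these quantities are computed.

```python
import numpy as np

PIXEL_THRESH = 30.0  # hypothetical pixel-value threshold (steps (d)/(e))
LUMA_THRESH = 20.0   # hypothetical brightness threshold

def brightness(frame):
    # Rec. 601 luma, assumed here as one plausible brightness measure
    return 0.299 * frame[..., 0] + 0.587 * frame[..., 1] + 0.114 * frame[..., 2]

def detect(frames):
    """Steps (a)-(f): return a foreground mask for each scene after the first."""
    background = frames[0].astype(np.float64)  # (a)-(b): model stores scene 1
    masks = []
    for scene in frames[1:]:                   # (b)/(f): each later scene in turn
        scene = scene.astype(np.float64)
        pix_diff = np.abs(scene - background).max(axis=-1)             # (c)
        lum_diff = np.abs(brightness(scene) - brightness(background))  # (c)
        fg = (pix_diff > PIXEL_THRESH) & (lum_diff > LUMA_THRESH)      # (e)
        background[~fg] = scene[~fg]  # (d): absorb background pixels into the model
        masks.append(fg)
    return masks
```

Because background pixels are folded back into the model at each scene, the model accumulated after scene k serves as the "existing background model" for scene k+1, mirroring step (f).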
An object detection system for a video scene runs in an electronic device and comprises: a model building unit for setting up an empty background model and receiving the first of N color video scenes; a pixel separation unit for taking the background model that stores the first video scene as the existing background model and the second video scene as the current scene, comparing each pixel in the current scene with the corresponding pixel in the existing background model to determine the pixel value difference and the brightness difference between them, determining that a pixel is a background pixel when both differences are less than or equal to a preset threshold, the object composed of all background pixels being a background object, or determining that the pixel is a foreground pixel when both differences are greater than the preset threshold, the object composed of all foreground pixels being a foreground object; and a storage unit for adding the background pixels to the existing background model to generate a new background model. The pixel separation unit is further used to take each of the third through Nth video scenes in turn as the current scene, take the background model obtained from detecting all scenes preceding the current scene as the existing background model, and continue comparing the current scene with the corresponding pixels in the existing background model until the foreground and background objects in each video scene have been detected.
Compared with the prior art, the object detection system and method build the background model from color pixels. A background model built from color pixels discriminates better than the models commonly built from gray-scale pixels. In addition, the background model built from color pixels, together with the foreground detection algorithm, can treat objects or scenery that remain in the video scene for a period of time as background, exclude interference from sudden brightness changes and periodic objects, and detect the foreground objects in the video scene, thereby achieving effective detection and monitoring.
Brief Description of the Drawings
FIG. 1 is a functional unit diagram of a preferred embodiment of the object detection system for a video scene of the present invention.
FIG. 2 is a flowchart of a preferred embodiment of the object detection method for a video scene of the present invention.
FIG. 3 and FIG. 4 are schematic diagrams of foreground objects detected by the present invention and of changes to the background model.
Description of Main Component Symbols
Detailed Description of the Embodiments
FIG. 1 is a functional unit diagram of a preferred embodiment of the object detection system 10 for a video scene of the present invention. The object detection system 10 is installed in and runs on an electronic device 1. The electronic device 1 further includes a storage device 20, a processor 30, and a display device 40. The electronic device 1 may be a monitoring device, a computer, or any other suitable apparatus with data processing capability.
The storage device 20 stores the computerized program code of the object detection system 10 and the color video scenes captured by the monitoring device. The storage device 20 may be memory built into the electronic device 1 or memory externally connected to it.
The processor 30 executes the computerized program code of the object detection system 10. By building an empty background model, it effectively detects the foreground objects in the color video scenes captured by the monitoring device, and automatically treats objects or scenery that remain in the monitored area for a period of time as background, so as to improve monitoring performance against sudden brightness changes and interference from periodic objects. A foreground object here is an object or scene element appearing in the monitored area, such as a person or a vehicle; detecting foreground objects makes it possible to promptly identify people or things that appear in the monitored area.
The display device 40 displays the color video scenes captured by the monitoring device.
The object detection system 10 includes a model building unit 100, a pixel separation unit 102, a storage unit 104, a temporary background model monitoring unit 106, and a background model updating unit 108, whose functions are described in detail with reference to FIG. 2 through FIG. 4.
It should be noted that foreground object detection in this embodiment consists of three parts. The first part is the training and establishment of the background model: N color video scenes are received, and a background model is built from their color pixels. The second part is foreground detection: for the video scenes after the Nth, the background model established in the first part is used to classify foreground and background. The third part is background model updating, which uses a dual-layer background model mechanism consisting of the background model used for classification in the second part and a newly added temporary background model; a preset time interval determines whether this dual-layer background model needs to be updated. Updating the dual-layer background model tolerates lighting changes in the monitored area and automatically updates the dynamic background. The specific process is described in FIG. 2.
FIG. 2 is a flowchart of a preferred embodiment of the object detection method for a video scene of the present invention. The flow is illustrated using the foreground object detection of only two of the N color video scenes as an example; foreground object detection in the other video scenes follows the same method.
In step S300, the model building unit 100 sets up an empty background model and receives the first of the N color video scenes; that is, the empty background model stores the first video scene. In this embodiment, foreground detection for the second through Nth video scenes and for the scenes after the Nth does not require setting up a new empty background model.
In step S302, each of the N video scenes is taken in turn as the current scene, and the background model generated from detecting the scene preceding it is taken as the existing background model.
In step S304, the pixel separation unit 102 compares each pixel in the current scene with the corresponding pixel in the existing background model to determine the pixel value difference and the brightness difference between them. In this embodiment, the second video scene uses the first video scene, stored in the empty background model, as the existing background model; after the second video scene has been processed, the third video scene is taken up, using the background model generated from detecting the first and second video scenes as the existing background model, and so on until all video scenes have been processed. For example, as shown in FIG. 3, the Nth video scene uses background model A0, obtained from detecting the 1st through (N-1)th video scenes, as the existing background model, and the (N+1)th video scene uses background model A.
In step S306, the pixel separation unit 102 determines whether the determined pixel value difference and brightness difference are both less than or equal to a preset threshold.
If both the pixel value difference and the brightness difference between the pixel and the corresponding pixel in the existing background model are less than or equal to the preset threshold, then in step S308 the pixel separation unit 102 determines that the pixel is a background pixel, and the storage unit 104 adds it to the existing background model, thereby generating a new background model; the flow then proceeds to step S318. The object composed of all background pixels is called a background object in this embodiment. For example, suppose no external object (such as a person or vehicle) enters the monitored area and only the lighting changes slightly; as long as the changed lighting does not cause the pixels of the current scene to differ too much from the existing background model, the pixel separation unit 102 continues to classify the pixels of the current scene as background pixels, and the storage unit 104 adds them to the existing background model to generate a new background model.
Conversely, if both the pixel value difference and the brightness difference between the pixel and the corresponding pixel in the existing background model are greater than the preset threshold, then in step S310 the pixel separation unit 102 determines that the pixel is a foreground pixel; the object composed of all foreground pixels is called a foreground object in this embodiment. As shown in FIG. 3 and FIG. 4, if the background model composed from the 1st through (N-1)th color video scenes is A0, consisting of the trees and road that remain in the monitored area, and a vehicle enters the monitored area in the Nth video scene, then the detection process of step S306 determines that the pixels making up the vehicle form a foreground object.
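The per-pixel test of steps S306 through S310 can be sketched as follows. The per-channel maximum as the pixel value difference, the Rec. 601 luma as the brightness measure, the threshold values, and the handling of the mixed case (one difference above the threshold, the other below) are all assumptions; the patent specifies only the two clear-cut cases.

```python
def classify_pixel(cur, bg, pix_thresh=30, luma_thresh=20):
    """Classify one pixel per steps S306-S310.

    cur, bg: (r, g, b) tuples; the thresholds are hypothetical values.
    """
    def luma(p):  # Rec. 601 luma, assumed as the brightness measure
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b

    pix_diff = max(abs(c - d) for c, d in zip(cur, bg))
    lum_diff = abs(luma(cur) - luma(bg))
    if pix_diff <= pix_thresh and lum_diff <= luma_thresh:
        return "background"  # step S308: absorbed into the model
    if pix_diff > pix_thresh and lum_diff > luma_thresh:
        return "foreground"  # step S310
    return "background"      # mixed case: treated as background here (an assumption)
```

Requiring both differences to exceed the threshold before declaring foreground is what lets small lighting drift pass as background, as described for step S308 above.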
In step S312, the storage unit 104 temporarily stores the pixels of the foreground object from step S310 together with the existing background model, obtaining a temporary background model B.
In step S314, the temporary background model monitoring unit 106 monitors in real time whether the pixel values and brightness values of the pixels in temporary background model B change within a preset time interval. If they change within the interval, let the changed temporary background model be B`; the temporary background model monitoring unit 106 then repeats step S314 to determine whether temporary background model B` changes within the preset time interval. Conversely, if the pixel values and brightness values of the pixels in temporary background model B (or B`) do not change within the preset time interval, the flow proceeds to step S316.
In step S316, the background model updating unit 108 updates the existing background model with temporary background model B or B`, thereby generating a new background model. For example, as shown in FIG. 4, the background model updating unit 108 updates the existing background model with temporary background model B to obtain a new background model (such as background model A). For the video scenes after the Nth, such as the (N+1)th video scene in FIG. 3, after the pixel separation unit 102 detects a foreground object and the foreground object is temporarily stored in temporary background model B`, if the temporary background model monitoring unit 106 finds that temporary background model B` does not change within the preset time interval, the background model updating unit 108 updates background model A with temporary background model B` to obtain background model A`, and so on; the background model is thus continuously updated. This real-time background updating avoids interference from image shaking, lighting changes, and periodic objects, detects foreground objects in the video scene more accurately, and thereby achieves effective monitoring of the monitored area. In addition, this method automatically treats objects that remain in the monitored area for a period of time as background.
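The dual-layer mechanism of steps S312 through S316 can be sketched as follows. Measuring stability by counting consecutive unchanged frame snapshots, and the length of that interval, are illustrative stand-ins for the patent's preset time interval.

```python
STABLE_FRAMES = 5  # hypothetical frame count standing in for the preset interval

class DualLayerBackground:
    """Steps S312-S316: promote a stable temporary model to the main model."""

    def __init__(self, initial_model):
        self.model = initial_model  # the existing background model (A)
        self.temp = None            # the temporary background model (B)
        self.stable_count = 0

    def observe(self, snapshot):
        """Record a snapshot as the temporary model and track its stability."""
        if self.temp is not None and snapshot == self.temp:
            self.stable_count += 1  # S314: unchanged since the last snapshot
        else:
            self.temp = snapshot    # S312: new temporary model B (or B`)
            self.stable_count = 0
        if self.stable_count >= STABLE_FRAMES:
            self.model = self.temp  # S316: update A with B, yielding A`
            self.stable_count = 0
```

A snapshot here is any hashable, comparable representation of the scene (for example, a tuple of pixel values); once the same snapshot has been seen for the whole interval, it replaces the main model, which is how a parked object eventually becomes background.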
In step S318, the pixel separation unit 102 checks the received color video scenes to determine whether any video scene has not yet been detected; that is, it determines whether any color video scene still has pixels not yet separated into foreground and background objects. If not, the flow ends. If so, the flow returns to step S304 with the undetected video scene as the current scene and the background model generated from detecting the video scenes preceding it as the existing background model, and steps S304 through S316 are performed in turn.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from their spirit and scope.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2010101294945A CN102201121A (en) | 2010-03-23 | 2010-03-23 | System and method for detecting article in video scene |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2010101294945A CN102201121A (en) | 2010-03-23 | 2010-03-23 | System and method for detecting article in video scene |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN102201121A true CN102201121A (en) | 2011-09-28 |
Family
ID=44661771
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN2010101294945A Pending CN102201121A (en) | 2010-03-23 | 2010-03-23 | System and method for detecting article in video scene |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102201121A (en) |
Cited By (36)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103136742A (en) * | 2011-11-28 | 2013-06-05 | 财团法人工业技术研究院 | Foreground detection device and method |
| CN103414855A (en) * | 2013-08-23 | 2013-11-27 | 北京奇艺世纪科技有限公司 | Video processing method and system |
| US8693731B2 (en) | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
| US9153028B2 (en) | 2012-01-17 | 2015-10-06 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
| US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
| US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
| US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
| US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
| US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
| US9945660B2 (en) | 2012-01-17 | 2018-04-17 | Leap Motion, Inc. | Systems and methods of locating a control object appendage in three dimensional (3D) space |
| CN108924423A (en) * | 2018-07-18 | 2018-11-30 | 曾文斌 | A method of eliminating interfering object in the picture photo of fixed camera position |
| CN110018529A (en) * | 2019-02-22 | 2019-07-16 | 南方科技大学 | Rainfall measurement method, rainfall measurement device, computer equipment and storage medium |
| US10585193B2 (en) | 2013-03-15 | 2020-03-10 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
| US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
| CN111260695A (en) * | 2020-01-17 | 2020-06-09 | 桂林理工大学 | A kind of debris identification algorithm, system, server and medium |
| US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| CN111510668A (en) * | 2019-01-30 | 2020-08-07 | 原盛科技股份有限公司 | Motion detection method for motion sensor |
| US10739862B2 (en) | 2013-01-15 | 2020-08-11 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
| EP3699878A1 (en) * | 2019-02-20 | 2020-08-26 | Toshiba TEC Kabushiki Kaisha | Article information reading apparatus |
| US10769799B2 (en) | 2018-08-24 | 2020-09-08 | Ford Global Technologies, Llc | Foreground detection |
| US10846942B1 (en) | 2013-08-29 | 2020-11-24 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US11460851B2 (en) | 2019-05-24 | 2022-10-04 | Ford Global Technologies, Llc | Eccentricity image fusion |
| US11521494B2 (en) | 2019-06-11 | 2022-12-06 | Ford Global Technologies, Llc | Vehicle eccentricity mapping |
| US11662741B2 (en) | 2019-06-28 | 2023-05-30 | Ford Global Technologies, Llc | Vehicle visual odometry |
| US11720180B2 (en) | 2012-01-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| US11740705B2 (en) | 2013-01-15 | 2023-08-29 | Ultrahaptics IP Two Limited | Method and system for controlling a machine according to a characteristic of a control object |
| US11778159B2 (en) | 2014-08-08 | 2023-10-03 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
| US11775033B2 (en) | 2013-10-03 | 2023-10-03 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
| US11783707B2 (en) | 2018-10-09 | 2023-10-10 | Ford Global Technologies, Llc | Vehicle path planning |
| US11868687B2 (en) | 2013-10-31 | 2024-01-09 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US12046047B2 (en) | 2021-12-07 | 2024-07-23 | Ford Global Technologies, Llc | Object detection |
| US12154238B2 (en) | 2014-05-20 | 2024-11-26 | Ultrahaptics IP Two Limited | Wearable augmented reality devices with object detection and tracking |
| US12260023B2 (en) | 2012-01-17 | 2025-03-25 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| US12299207B2 (en) | 2015-01-16 | 2025-05-13 | Ultrahaptics IP Two Limited | Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments |
| US12314478B2 (en) | 2014-05-14 | 2025-05-27 | Ultrahaptics IP Two Limited | Systems and methods of tracking moving hands and recognizing gestural interactions |
| US12482298B2 (en) | 2014-03-13 | 2025-11-25 | Ultrahaptics IP Two Limited | Biometric aware object detection and tracking |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101110903A (en) * | 2007-08-31 | 2008-01-23 | 湖北科创高新网络视频股份有限公司 | Method and system for video data real-time de-noising |
| CN101281596A (en) * | 2007-04-05 | 2008-10-08 | 三菱电机株式会社 | Method for detecting legacy objects in a scene |
| CN101510304A (en) * | 2009-03-30 | 2009-08-19 | 北京中星微电子有限公司 | Method, device and pick-up head for dividing and obtaining foreground image |
- 2010-03-23: CN2010101294945A patent/CN102201121A/en, active, Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101281596A (en) * | 2007-04-05 | 2008-10-08 | 三菱电机株式会社 | Method for detecting legacy objects in a scene |
| CN101110903A (en) * | 2007-08-31 | 2008-01-23 | 湖北科创高新网络视频股份有限公司 | Method and system for video data real-time de-noising |
| CN101510304A (en) * | 2009-03-30 | 2009-08-19 | 北京中星微电子有限公司 | Method, device and pick-up head for dividing and obtaining foreground image |
Non-Patent Citations (2)
| Title |
|---|
| LAURO SNIDARO ET AL.: "Video Security for Ambient Intelligence", 《IEEE TRANSACTIONS ON SYSTEMS,MAN,AND CYBERNETICS--PART A: SYSTEMS AND HUMANS》, vol. 35, no. 1, 31 January 2005 (2005-01-31), pages 134 - 136, XP011123558, DOI: doi:10.1109/TSMCA.2004.838478 * |
| 吴众山等: "一种实用的背景提取与更新算法", 《厦门大学学报(自然科学版)》, vol. 47, no. 3, 31 May 2008 (2008-05-31), pages 349 - 350 * |
Cited By (79)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103136742A (en) * | 2011-11-28 | 2013-06-05 | 财团法人工业技术研究院 | Foreground detection device and method |
| US9697643B2 (en) | 2012-01-17 | 2017-07-04 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
| US9778752B2 (en) | 2012-01-17 | 2017-10-03 | Leap Motion, Inc. | Systems and methods for machine control |
| US9153028B2 (en) | 2012-01-17 | 2015-10-06 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
| US12260023B2 (en) | 2012-01-17 | 2025-03-25 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| US9436998B2 (en) | 2012-01-17 | 2016-09-06 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
| US10699155B2 (en) | 2012-01-17 | 2020-06-30 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US9495613B2 (en) | 2012-01-17 | 2016-11-15 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging using formed difference images |
| US11720180B2 (en) | 2012-01-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| US9626591B2 (en) | 2012-01-17 | 2017-04-18 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
| US9652668B2 (en) | 2012-01-17 | 2017-05-16 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US9672441B2 (en) | 2012-01-17 | 2017-06-06 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
| US11782516B2 (en) | 2012-01-17 | 2023-10-10 | Ultrahaptics IP Two Limited | Differentiating a detected object from a background using a gaussian brightness falloff pattern |
| US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
| US8693731B2 (en) | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
| US10565784B2 (en) | 2012-01-17 | 2020-02-18 | Ultrahaptics IP Two Limited | Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space |
| US11994377B2 (en) | 2012-01-17 | 2024-05-28 | Ultrahaptics IP Two Limited | Systems and methods of locating a control object appendage in three dimensional (3D) space |
| US11308711B2 (en) | 2012-01-17 | 2022-04-19 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US9934580B2 (en) | 2012-01-17 | 2018-04-03 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US9945660B2 (en) | 2012-01-17 | 2018-04-17 | Leap Motion, Inc. | Systems and methods of locating a control object appendage in three dimensional (3D) space |
| US9767345B2 (en) | 2012-01-17 | 2017-09-19 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
| US10767982B2 (en) | 2012-01-17 | 2020-09-08 | Ultrahaptics IP Two Limited | Systems and methods of locating a control object appendage in three dimensional (3D) space |
| US9741136B2 (en) | 2012-01-17 | 2017-08-22 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
| US10366308B2 (en) | 2012-01-17 | 2019-07-30 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
| US10410411B2 (en) | 2012-01-17 | 2019-09-10 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
| US12086327B2 (en) | 2012-01-17 | 2024-09-10 | Ultrahaptics IP Two Limited | Differentiating a detected object from a background using a gaussian brightness falloff pattern |
| US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
| US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
| US10097754B2 (en) | 2013-01-08 | 2018-10-09 | Leap Motion, Inc. | Power consumption in motion-capture systems with audio and optical signals |
| US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
| US12204695B2 (en) | 2013-01-15 | 2025-01-21 | Ultrahaptics IP Two Limited | Dynamic, free-space user interactions for machine control |
| US11740705B2 (en) | 2013-01-15 | 2023-08-29 | Ultrahaptics IP Two Limited | Method and system for controlling a machine according to a characteristic of a control object |
| US11353962B2 (en) | 2013-01-15 | 2022-06-07 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
| US10739862B2 (en) | 2013-01-15 | 2020-08-11 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
| US11874970B2 (en) | 2013-01-15 | 2024-01-16 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
| US12405673B2 (en) | 2013-01-15 | 2025-09-02 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
| US11693115B2 (en) | 2013-03-15 | 2023-07-04 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
| US12306301B2 (en) | 2013-03-15 | 2025-05-20 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
| US10585193B2 (en) | 2013-03-15 | 2020-03-10 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
| US12333081B2 (en) | 2013-04-26 | 2025-06-17 | Ultrahaptics IP Two Limited | Interacting with a machine using gestures in first and second user-specific virtual planes |
| US11099653B2 (en) | 2013-04-26 | 2021-08-24 | Ultrahaptics IP Two Limited | Machine responsiveness to dynamic user movements and gestures |
| US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
| US10452151B2 (en) | 2013-04-26 | 2019-10-22 | Ultrahaptics IP Two Limited | Non-tactile interface systems and methods |
| CN103414855B (en) * | 2013-08-23 | 2017-06-20 | 北京奇艺世纪科技有限公司 | A kind of method for processing video frequency and system |
| CN103414855A (en) * | 2013-08-23 | 2013-11-27 | 北京奇艺世纪科技有限公司 | Video processing method and system |
| US10846942B1 (en) | 2013-08-29 | 2020-11-24 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US11776208B2 (en) | 2013-08-29 | 2023-10-03 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US12086935B2 (en) | 2013-08-29 | 2024-09-10 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US12236528B2 (en) | 2013-08-29 | 2025-02-25 | Ultrahaptics IP Two Limited | Determining spans and span lengths of a control object in a free space gesture control environment |
| US11282273B2 (en) | 2013-08-29 | 2022-03-22 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US11461966B1 (en) | 2013-08-29 | 2022-10-04 | Ultrahaptics IP Two Limited | Determining spans and span lengths of a control object in a free space gesture control environment |
| US12242312B2 (en) | 2013-10-03 | 2025-03-04 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
| US11775033B2 (en) | 2013-10-03 | 2023-10-03 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
| US12265761B2 (en) | 2013-10-31 | 2025-04-01 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US11868687B2 (en) | 2013-10-31 | 2024-01-09 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
| US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
| US12482298B2 (en) | 2014-03-13 | 2025-11-25 | Ultrahaptics IP Two Limited | Biometric aware object detection and tracking |
| US12314478B2 (en) | 2014-05-14 | 2025-05-27 | Ultrahaptics IP Two Limited | Systems and methods of tracking moving hands and recognizing gestural interactions |
| US12154238B2 (en) | 2014-05-20 | 2024-11-26 | Ultrahaptics IP Two Limited | Wearable augmented reality devices with object detection and tracking |
| US11778159B2 (en) | 2014-08-08 | 2023-10-03 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
| US12095969B2 (en) | 2014-08-08 | 2024-09-17 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
| US12299207B2 (en) | 2015-01-16 | 2025-05-13 | Ultrahaptics IP Two Limited | Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments |
| CN108924423A (en) * | 2018-07-18 | 2018-11-30 | 曾文斌 | A method of eliminating interfering object in the picture photo of fixed camera position |
| US10769799B2 (en) | 2018-08-24 | 2020-09-08 | Ford Global Technologies, Llc | Foreground detection |
| US11783707B2 (en) | 2018-10-09 | 2023-10-10 | Ford Global Technologies, Llc | Vehicle path planning |
| CN111510668A (en) * | 2019-01-30 | 2020-08-07 | 原盛科技股份有限公司 | Motion detection method for motion sensor |
| CN113992887B (en) * | 2019-01-30 | 2024-05-17 | 原相科技股份有限公司 | Motion detection method using motion sensor |
| CN111510668B (en) * | 2019-01-30 | 2021-10-19 | 原相科技股份有限公司 | Motion detection method for motion sensor |
| CN113992887A (en) * | 2019-01-30 | 2022-01-28 | 原相科技股份有限公司 | Motion detection method for motion sensor |
| US11336869B2 (en) | 2019-01-30 | 2022-05-17 | Pixart Imaging Inc. | Motion detection methods and motion sensors capable of more accurately detecting true motion event |
| EP3699878A1 (en) * | 2019-02-20 | 2020-08-26 | Toshiba TEC Kabushiki Kaisha | Article information reading apparatus |
| CN111599118A (en) * | 2019-02-20 | 2020-08-28 | 东芝泰格有限公司 | Article information reading apparatus, article information reading control method, readable storage medium, and electronic device |
| CN110018529A (en) * | 2019-02-22 | 2019-07-16 | 南方科技大学 | Rainfall measurement method, rainfall measurement device, computer equipment and storage medium |
| US11460851B2 (en) | 2019-05-24 | 2022-10-04 | Ford Global Technologies, Llc | Eccentricity image fusion |
| US11521494B2 (en) | 2019-06-11 | 2022-12-06 | Ford Global Technologies, Llc | Vehicle eccentricity mapping |
| US11662741B2 (en) | 2019-06-28 | 2023-05-30 | Ford Global Technologies, Llc | Vehicle visual odometry |
| CN111260695A (en) * | 2020-01-17 | 2020-06-09 | 桂林理工大学 | A kind of debris identification algorithm, system, server and medium |
| US12046047B2 (en) | 2021-12-07 | 2024-07-23 | Ford Global Technologies, Llc | Object detection |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN102201121A (en) | System and method for detecting article in video scene | |
| US11145039B2 (en) | Dynamic tone mapping method, mobile terminal, and computer readable storage medium | |
| TW201133358A (en) | System and method for detecting objects in a video image | |
| JP4668978B2 (en) | Flame detection method and apparatus | |
| CN106851263B (en) | Video quality diagnosis method and system based on timing self-learning module | |
| CN101930610A (en) | Moving Object Detection Method Using Adaptive Background Model | |
| CN110572636B (en) | Camera contamination detection method and device, storage medium and electronic equipment | |
| CN105828065B (en) | A kind of video pictures overexposure detection method and device | |
| CN108088654A (en) | Projector quality determining method and its electronic equipment | |
| CN109584175B (en) | Image processing method and device | |
| JP2004157979A (en) | Image motion detection apparatus and computer program | |
| CN105809710B (en) | System and method for detecting moving objects | |
| US7982774B2 (en) | Image processing apparatus and image processing method | |
| CN113596344A (en) | Shooting processing method and device, electronic equipment and readable storage medium | |
| CN111127358A (en) | Image processing method, device and storage medium | |
| CN110210401B (en) | Intelligent target detection method under weak light | |
| CN120198858B (en) | A batch image data processing method and system for intelligent manufacturing production line | |
| CN115035443A (en) | Method, system and device for detecting fallen garbage based on picture shooting | |
| CN112449115B (en) | Shooting method and device and electronic equipment | |
| CN114495414B (en) | Smoke detection system and smoke detection method | |
| CN103810691B (en) | Video-based automatic teller machine monitoring scene detection method and apparatus | |
| CN116012785B (en) | Fire level determining method, device, equipment and medium | |
| CN114663843B (en) | Road fog detection method, device, electronic device and storage medium | |
| KR20140143918A (en) | Method and Apparatus for Detecting Foregroud Image with Separating Foregroud and Background in Image | |
| CN110853001B (en) | Transformer substation foreign matter interference prevention image recognition method, system and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
| WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20110928 |