WO2012124852A1 - Stereo camera device capable of tracking path of object in monitored area, and monitoring system and method using same - Google Patents

Stereo camera device capable of tracking path of object in monitored area, and monitoring system and method using same

Info

Publication number
WO2012124852A1
Authority
WO
WIPO (PCT)
Prior art keywords
zone
monitoring
pixel
image
stereo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2011/002143
Other languages
French (fr)
Korean (ko)
Inventor
강인배
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ITXSECURITY CO Ltd
Original Assignee
ITXSECURITY CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ITXSECURITY CO Ltd
Publication of WO2012124852A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639 Details of the system layout
    • G08B13/19652 Systems using zones in a single scene defined for different treatment, e.g. outer zone gives pre-alarm, inner zone gives alarm
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Definitions

  • The present invention relates to a stereo camera apparatus that can recognize an object based on 3D depth map data obtained using two cameras, and more particularly to a stereo camera apparatus that can track the location of a recognized object.
  • Known methods for obtaining depth information (a depth map) from an image, that is, the distance to a subject in three-dimensional space, include using a stereo camera, laser scanning, and time of flight (TOF).
  • Among these, stereo matching using a stereo camera is a hardware implementation of the process by which humans perceive depth with two eyes: information about depth (or distance) in space is extracted by interpreting a pair of images obtained by photographing the same subject with two cameras.
  • To this end, the binocular disparity along the same epipolar line of the images obtained from the two cameras is calculated.
  • The binocular disparity encodes distance information, and the geometric quantity computed from it is the depth.
  • If the disparity is calculated in real time from the input images, three-dimensional distance information about the observed space can be measured.
  • Known stereo matching algorithms include, for example, Korean Patent No. 0517876, "Image matching method using multiple image lines," and Korean Patent No. 0601958, "Binocular disparity estimation method for three-dimensional object recognition."
  • The applicant has already invented an image recognition apparatus that uses such a stereo matching algorithm to distinguish and recognize objects in a space, in particular objects worth monitoring by a manager, and Korean Patent Application Nos. 10-2010-0039302 and 10-2010-0039366 are pending.
  • In a conventional surveillance system, movement information within the monitored area can be divided into sensing information about points and lines, acquired through detectors, and sensing information about a space.
  • Detection of points or lines means detecting the opening of a door or window, or detecting an object crossing an infrared line using an infrared sensor.
  • Detection of a space, by contrast, detects a particular change in the space, for example a temperature change sensed through infrared radiation; it merely detects whether some object moves in the space and cannot monitor detailed movement within that space.
  • An object of the present invention is to provide a stereo camera device that recognizes an object based on 3D depth map data acquired using two cameras while simultaneously monitoring and tracking the recognized object's position in space, and a surveillance system and method using the same.
  • To achieve this object, the stereo camera device of the present invention comprises a stereo camera having a first camera and a second camera that photograph the same surveillance area to generate a pair of stereo digital images; and an image processor that extracts an object moving within the surveillance area while calculating distance information for each pixel through image processing of the stereo digital images output by the stereo camera.
  • The image processor includes a surveillance zone-alarm unit that sets at least one surveillance zone, defined by a pixel region of the image and a preset distance range for each pixel of that region, and that generates a surveillance zone entry alert and outputs it to an external monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the surveillance zone.
  • The surveillance zone entry alert preferably includes an identifier of the surveillance zone in which the extracted object is located.
  • The image processor may include a distance information calculator that computes 3D depth map data from the stereo digital images; an object extractor that extracts the region of a moving object by comparing one of the stereo digital images with a reference background image; and an object recognizer that calculates the area or representative length of the extracted object and recognizes the object as a monitoring target when the calculated area or representative length falls within a preset range.
  • A surveillance system according to another embodiment includes the stereo camera device and a monitoring device that receives alerts from the stereo camera device and displays them to the administrator in the form of a digital map.
  • A monitoring method for the stereo camera device comprises: generating a pair of stereo digital images using two cameras photographing the same surveillance area; extracting an object moving within the surveillance area while calculating distance information for each pixel through image processing of the stereo digital images; setting at least one surveillance zone defined by a pixel region of the image and a preset distance range for each pixel of that region; and generating a surveillance zone entry alert and outputting it to an external monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the surveillance zone.
  • When the stereo camera device recognizes a moving object appearing in the surveillance area, it can provide the object's position in space, beyond a simple alert that an object was recognized.
  • Accordingly, an alarm can be raised when a moving object is detected in a specific surveillance area, and the object's position and movement path within that area can be tracked.
  • This allows a stereo camera to go beyond the function of a simple surveillance sensor and enable intelligent surveillance; depending on the device coupled to the stereo camera device of the present invention, stereoscopic information beyond a simple alarm can be provided to the manager.
  • Since the position information on the monitored space generated by the present invention is provided in units of individual surveillance zones, it can be displayed on a digital drawing even by a device with lighter system resources, such as a portable monitoring device.
  • FIG. 1 is a block diagram of a surveillance system including a stereo camera device according to an embodiment of the present invention.
  • FIG. 3 is a view showing the surveillance area (S) and the surveillance zones (L1, L2, L3) according to an embodiment of the present invention.
  • FIG. 4 is an example of an image photographing the surveillance area S of FIG. 3.
  • FIG. 6 is a view provided to explain a method of extracting the central axis of an object.
  • The surveillance system 100 of the present invention includes a stereo camera device 130 and a monitoring device 150 connected through a network 110; it monitors a moving subject in three-dimensional space, outputs alerts, and displays the subject's position on a digital drawing.
  • The surveillance system 100 can be used not only for security purposes but for any application that needs to track the location of a specific object entering a specific space.
  • The stereo camera device 130 is installed in a specific surveillance area for crime prevention or other purposes, and the monitoring device 150 is preferably located in a manager's area away from the surveillance area.
  • The network 110 may be an internal dedicated communication network or a commercial public network such as the Internet, a mobile communication network, or a public switched telephone network (PSTN), and may be wired or wireless.
  • The stereo camera device 130 and the monitoring device 150 must include interface means for connecting to the network 110.
  • The stereo camera device 130 includes a stereo camera 131 and an image processor 140, and can recognize an object moving in the surveillance area and determine whether the object is a specific monitoring target.
  • The stereo camera 131 includes a first camera 133, a second camera 135, and an image receiver 137.
  • The first camera 133 and the second camera 135 are a pair of cameras installed apart from each other so as to photograph the same surveillance area; together they are called a stereo camera.
  • The first camera 133 and the second camera 135 output analog (or digital) stereo video signals of the surveillance area to the image receiver 137.
  • The image receiver 137 converts the continuous frames of the video signals (or images) input from the first camera 133 and the second camera 135 into digital images and provides them to the image processor 140 in frame synchronization.
  • The image processor 140 extracts the region of an object moving in the photographed area (the surveillance area) from each pair of digital image frames output by the image receiver 137 and determines whether the object is of interest; this determination can be performed in real time on every frame of the video continuously input from the stereo camera device 130.
  • For this processing, the image processor 140 includes a distance information calculator 141, an object extractor 143, an object recognizer 145, an object tracker 147, and a surveillance zone-alarm unit 149, whose operations are described below with reference to FIGS. 2 to 4.
  • The first camera 133 and the second camera 135 are arranged to photograph a specific surveillance area.
  • When they generate analog video signals, the image receiver 137 converts the analog signals into digital image signals and provides them to the image processor 140 in frame synchronization (step S201).
  • The distance information calculator 141 computes 3D depth map data, containing the distance information of each pixel, from each pair of digital images received in real time from the image receiver 137.
  • The object extractor 143 and the object recognizer 145 extract the region of a moving object from at least one image of each pair of digital images input through the image receiver 137.
  • The object extractor 143 first extracts the moving object using known image processing techniques: a difference image is obtained by subtracting a reference background image from the newly input image.
  • The object extractor 143 and the object recognizer 145 then determine whether the extracted object is of a type the manager wants to monitor. For example, they determine whether the object is a person, a car, or an animal, or, if a person, whether the person is taller than a certain height. If the monitoring target is a person, the object extractor 143 and the object recognizer 145 specifically extract and recognize objects determined to be people.
  • The object extractor 143 detects the outline of the object from the difference image.
  • The object recognizer 145 calculates the area or the representative length of the object using the outline extracted by the object extractor 143 and the depth map data calculated by the distance information calculator 141.
  • The object recognizer 145 can determine whether the extracted object is an object of interest by checking whether the calculated area or representative length falls within the preset area or length range of the objects of interest.
  • The object tracker 147 tracks the movement of the recognized object of interest and provides its position information to the surveillance zone-alarm unit 149.
  • Many image processing techniques for such motion tracking are already known in the art, and any suitable one may be used.
  • Even without a dedicated tracking algorithm in the object tracker 147, the operation of the surveillance zone-alarm unit 149 described below is also possible by having the distance information calculator 141, the object extractor 143, and the object recognizer 145 repeat the operations described above on the stereo images input in real time from the image receiver 137.
  • The surveillance zone-alarm unit 149 generates an alert and outputs it to the monitoring device 150 as soon as an object of interest appears in the image.
  • This alert is an alarm in the plain sense, containing information about the surveillance area in which the stereo camera device 130 is installed and information indicating that a monitored object has appeared in that area.
  • FIG. 3 shows the surveillance area S in which the stereo camera device 130 of the present invention is installed and a plurality of surveillance zones L1, L2, and L3 existing within the area S.
  • FIG. 4 is an example of an image P photographing the surveillance area S of FIG. 3.
  • Assuming the monitored object moves through positions m1, m2, m3, and m4 within the area S, the surveillance zone-alarm unit 149 generates an alert when the monitored object appears at m1 in the image of FIG. 4.
  • When the surveillance zone-alarm unit 149 determines, based on the information provided by the object tracker 147, that the tracked object has entered a special surveillance zone, it generates a surveillance zone entry alert and outputs it to the monitoring device 150 together with the position information.
  • The surveillance area S is determined by the installation position of the stereo camera device 130 and the camera's angle of view, whereas the surveillance zones L1, L2, and L3 are set by the administrator within the area S.
  • Each surveillance zone L1, L2, L3 is specified by a pixel range indicating the corresponding zone in the image (L1-1, L2-2, L3-3) and by a distance range from the stereo camera device 130 to that zone.
  • For example, the first surveillance zone L1 is specified by the pixel range L1-1 shown in FIG. 4 and the distance range d1 to d2.
  • The pixel ranges of the second and third surveillance zones (L2-1 and L2-2) overlap somewhat, but their distance ranges differ, so the zones can be distinguished from each other.
  • The surveillance zone-alarm unit 149 can obtain the distance to the object by looking up the per-pixel distance information in the depth map data provided by the distance information calculator 141.
  • Accordingly, when the object tracked by the object tracker 147 falls within the L1-1 pixel range and its distance lies within the range d1 to d2, the surveillance zone-alarm unit 149 determines that the object is located at m1, generates a surveillance zone entry alert, and outputs it to the monitoring device 150.
  • The surveillance zone entry alert basically includes information about the surveillance zone the object has entered (e.g., the surveillance zone identifier) and the detection time.
  • The surveillance zone entry information may also include the image of at least one frame capturing the moment of detection.
  • Later, as the object moves m2 → m3 → m4 and enters the second surveillance zone L2, the third surveillance zone L3, and the first surveillance zone L1 in turn, the surveillance zone-alarm unit 149 likewise outputs the surveillance zone entry information to the monitoring device 150.
  • Object tracking and monitoring by the stereo camera device 130 of the present invention is performed in the manner described above.
  • The monitoring device 150 of the present invention can be connected to the stereo camera device 130 through the network 110 and receive various alerts from the stereo camera device 130.
  • The monitoring device 150 may be a general computer, or a device carried by an individual such as a mobile phone, a PDA, a smartphone, or another dedicated terminal.
  • The monitoring device 150 includes a display unit, so that in the case of a surveillance zone entry alert the administrator can visually confirm which surveillance zone within the surveillance area the object has entered.
  • The monitoring device 150 can visually present the movement of objects in the surveillance area to the manager using the alerts provided by the stereo camera device 130.
  • For example, with a digital map like FIG. 3, showing the surveillance area S and the surveillance zones L1, L2, and L3, stored in advance, the monitoring device 150 can use the alerts provided by the surveillance zone-alarm unit 149 to display a drawing like FIG. 3 to the user.
  • Information at the level of such a drawing is useful because it can be small enough to be transmitted to a portable monitoring device 150 by wire or wirelessly and to be processed and displayed visually by that device.
  • The monitoring device 150 may further include a voice guidance processor (not shown) so that, when an object approaches a specific surveillance zone, a predetermined guidance message (e.g., "Please take a step back from ooo") is output through a speaker (not shown) installed in the surveillance area S.
  • Depending on the embodiment, the monitoring device 150 may itself hold the function of the surveillance zone-alarm unit 149 of the stereo camera device 130.
  • In this case, the monitoring device 150 further includes a surveillance zone unit (not shown) that determines whether the object has entered a specific surveillance zone using the information calculated and provided by the object tracker 147.
  • The area of an object is calculated by obtaining the actual area per pixel (hereinafter the 'unit area' of a pixel) at the distance do at which the object was extracted in step S203, and then multiplying it by the number of pixels contained inside the object's outline.
  • Based on the existing background image, FIG. 5 shows the actual area M corresponding to the entire frame at the maximum depth D and the actual area m(do) corresponding to the entire frame at the extracted object's position do.
  • The actual area m(do) corresponding to the entire frame at the distance do where the object is located is obtained by Equation 1.
  • Here, M is the actual area corresponding to the entire frame (e.g., 720 × 640 pixels) at the maximum depth D, based on the existing background image.
  • Per Equation 2, the unit area m_p(do) is m(do) divided by Q, the total number of pixels in the frame; m_p(do) therefore depends on the distance do to the object, obtained from the distance information of the 3D depth map data.
  • Per Equation 3, the area of the object is obtained by multiplying the unit area m_p(do) of a pixel by the number qc of pixels contained inside the outline, where qc is the number of pixels included in the object.
  • The object recognizer 145 extracts the central (medial) axis of the object, one pixel wide, by applying a skeletonization or thinning algorithm to the object extracted by the object extractor 143; for example, a Medial Axis Transform (MAT) algorithm or the Zhang-Suen algorithm can be applied.
  • As shown in FIG. 6, the central axis (a) of the object is the set of points (or pixels) inside the object R that have more than one boundary point.
  • Here, a boundary point of a point inside the object is the point on the outline B closest to it; the points b1 and b2 on the outline are thus the boundary points of the point P1 inside the object R. The central axis algorithm is therefore the process of extracting the points having more than one boundary point, which can be expressed as Equation 4: P_ma = { x ∈ R : b_min(x) > 1 }, where P_ma is the central axis represented as a set of points x, x is a point inside the object R, and b_min(x) is the number of boundary points of the point x.
  • In other words, the central axis is the set of points x whose number of boundary points is greater than one.
  • The structure of the skeleton may change somewhat depending on how the distance from an interior point x to a pixel on the outline is measured (for example, 4-distance, 8-distance, or Euclidean distance).
  • Alternatively, the center line may be extracted by taking the peak values of a Gaussian map computed over the object.
  • The representative length of the object is then obtained using the depth map data.
  • The representative length of the object is an actual length chosen to represent the object, calculated from the image; it may correspond to the actual length of the central axis, the actual width of the object, or the actual height of the object. Which of these is appropriate depends on the position of the camera, the shooting angle, and the characteristics of the photographed area.
  • The actual length of the object is calculated by obtaining the actual length per pixel (hereinafter the 'unit length' of a pixel) at the distance do where the object is located, and then multiplying it by the number of pixels representing the object.
  • The number of pixels representing the object may correspond to the number of pixels forming the central axis, or to the number of pixels spanning the width or height of the object. The width or height in pixels can be obtained from the range of x-axis or y-axis coordinates of the object region, and the length of the central axis can be obtained, for example, by counting the pixels included in the central axis.
  • The unit length of a particular pixel varies from pixel to pixel (more precisely, with the depth of the pixel) and can be obtained as follows with reference to FIG. 5, assuming an image frame of 720 × 640 pixels.
  • FIG. 5 indicates the actual length L(do) corresponding to the vertical axis (or horizontal axis) of the entire frame at the depth do where the object is located; this is obtained by Equation 5, where Lmax is the actual length corresponding to the vertical axis (or horizontal axis) of the entire frame at the maximum depth D, based on the existing background image.
  • Per Equation 6, the unit length L_p(do) of a pixel included in the object region located at depth do is L(do) divided by Qy, the number of pixels along the vertical axis of the entire frame.
  • Finally, the object recognizer 145 obtains the representative length of the object: per Equation 7, it multiplies the unit length L_p(do) of a pixel by the number qo of pixels representing the object. A Python sketch of this calculation is given just after this list.
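The three-step length calculation above (Equations 5 to 7) reduces to a few lines of arithmetic. Below is a minimal Python sketch under the symbol definitions in the preceding bullets; the function name is illustrative, and the linear depth scaling in Equation 5 (mirroring the quadratic scaling of frame area in Equation 1) is an assumption drawn from the pinhole-camera geometry the description relies on.

```python
def representative_length(L_max: float, D: float, do: float,
                          Qy: int, qo: int) -> float:
    """Representative length of an object per Equations 5-7 (sketch).

    L_max : actual length spanned by the frame's vertical (or horizontal)
            axis at the maximum depth D, from the background image
    D     : maximum depth of the monitored space
    do    : depth at which the object is located
    Qy    : pixels along the frame's vertical axis (e.g., 640)
    qo    : pixels representing the object (central-axis pixel count,
            or the object's pixel width or height)
    """
    L_do = L_max * (do / D)  # Eq. 5: frame span scales linearly with depth
    L_p = L_do / Qy          # Eq. 6: unit length of one pixel at depth do
    return L_p * qo          # Eq. 7: representative length of the object
```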

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a stereo camera device capable of tracking the path of an object in a monitored area, and a monitoring system and method using the same. The stereo camera device of the present invention extracts a moving object to be monitored from stereo images captured of a specific monitored area and then, while tracking the movement of that object, generates and provides an alarm to a monitoring device when the extracted object enters a specific surveillance zone.

Description

Stereo camera device capable of tracking the path of an object in a surveillance area, and surveillance system and method using the same

The present invention relates to a stereo camera apparatus that can recognize an object based on 3D depth map data obtained using two cameras, and more particularly to a stereo camera apparatus that can track the location of a recognized object, and to a surveillance system and method using the same.

Known methods for obtaining depth information (a depth map) in three-dimensional space from an image, that is, the distance to a subject in three-dimensional space, include using a stereo camera, laser scanning, and time of flight (TOF).

Among these, stereo matching using a stereo camera is a hardware implementation of the process by which a person perceives depth with two eyes: it extracts information about depth (or distance) in space by interpreting a pair of images obtained by photographing the same subject with two cameras. To this end, the binocular disparity along the same epipolar line of the images obtained from the two cameras is calculated. The binocular disparity encodes distance information, and the geometric quantity computed from it is the depth. If the disparity is calculated in real time from the input images, three-dimensional distance information about the observed space can be measured.
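As a concrete illustration of this disparity-to-depth step, the sketch below uses OpenCV's StereoBM, a generic block matcher, standing in for the stereo matching algorithms the description cites; it assumes a calibrated, rectified camera pair, and the focal length and baseline values are illustrative assumptions, not values from this document.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed calibration value)
BASELINE_M = 0.12   # spacing between the two cameras in meters (assumed)

def depth_map(left_gray, right_gray):
    """Per-pixel depth (meters) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan               # unmatched / invalid pixels
    return FOCAL_PX * BASELINE_M / disp    # depth Z = f * B / disparity
```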

Known stereo matching algorithms include, for example, Korean Patent No. 0517876, "Image matching method using multiple image lines," and Korean Patent No. 0601958, "Binocular disparity estimation method for three-dimensional object recognition."

The applicant has already invented an image recognition apparatus that uses such a stereo matching algorithm to distinguish and recognize objects in a space, in particular objects that a manager may wish to monitor, and Korean Patent Application Nos. 10-2010-0039302 and 10-2010-0039366 are pending.

Noting that these camera apparatuses and image recognition apparatuses can generate information not previously available while performing various functions, the applicant sees the need to explore various applications of such information.

For example, in a conventional surveillance system installed in an office, a house, or a museum to watch for intruders, the movement information within the monitored area can be divided into sensing information about points and lines, acquired through detectors, and sensing information about a space. Detection of points or lines means detecting the opening of a door or window, or detecting an object crossing an infrared line using an infrared sensor.

Detection of a space, however, detects a particular change within the space, for example by sensing temperature changes through infrared radiation; it merely detects whether some object is moving in the space and cannot monitor detailed movement within that space.

An object of the present invention is to provide a stereo camera device that recognizes an object based on 3D depth map data acquired using two cameras and simultaneously monitors and tracks the recognized object's position in space, and a surveillance system and method using the same.

To achieve this object, the stereo camera device of the present invention comprises: a stereo camera having a first camera and a second camera that photograph the same surveillance area to generate a pair of stereo digital images; and an image processor that extracts an object moving within the surveillance area while calculating the distance information of each pixel through image processing of the stereo digital images output by the stereo camera.

Here, the image processor includes a surveillance zone-alarm unit that sets at least one surveillance zone, defined by a pixel region of the image and a preset distance range for each pixel of that region, and that generates a surveillance zone entry alert and outputs it to an external monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the surveillance zone.

Depending on the embodiment, the surveillance zone entry alert preferably includes an identifier of the surveillance zone in which the extracted object is located.

Here, the image processor may include: a distance information calculator that computes 3D depth map data from the stereo digital images; an object extractor that extracts the region of a moving object by comparing one of the stereo digital images with a reference background image; and an object recognizer that calculates the area or representative length of the extracted object and recognizes the object as a monitoring target when the calculated area or representative length falls within a preset range.

A surveillance system according to another embodiment of the present invention includes the stereo camera device and a monitoring device that receives alerts from the stereo camera device and displays them to the administrator in the form of a digital map.

A monitoring method of a stereo camera device according to yet another embodiment of the present invention comprises: generating a pair of stereo digital images using two cameras photographing the same surveillance area; extracting an object moving within the surveillance area while calculating the distance information of each pixel through image processing of the stereo digital images output by the stereo camera; setting at least one surveillance zone defined by a pixel region of the image and a preset distance range for each pixel of that region; and generating a surveillance zone entry alert and outputting it to an external monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the surveillance zone.

When the stereo camera device of the present invention recognizes a moving object appearing in the surveillance area, it can provide the object's position in space, going beyond a simple alert that the object was recognized.

Accordingly, it can not only detect a moving object in a specific surveillance area and raise an alarm, but also track the object's position and movement path within that area.

This allows the stereo camera to go beyond the function of a simple surveillance sensor and enable intelligent surveillance; depending on the device coupled to the stereo camera device of the present invention, stereoscopic information beyond a simple alarm can be provided to the manager.

In addition, since the movement information on the monitored space generated by the present invention is provided in units of individual surveillance zones, the position information can be displayed on a digital drawing even by a device with lighter system resources, such as a portable monitoring device.

FIG. 1 is a block diagram of a surveillance system including a stereo camera device according to an embodiment of the present invention;

FIG. 2 is a flowchart provided to explain the alert output method of the stereo camera device of the present invention;

FIG. 3 is a view showing the surveillance area (S) and the surveillance zones (L1, L2, L3) according to an embodiment of the present invention;

FIG. 4 is an example of an image photographing the surveillance area (S) of FIG. 3;

FIG. 5 is a view provided to explain the method of calculating the area of an object; and

FIG. 6 is a view provided to explain the method of extracting the central axis of an object.

Hereinafter, the present invention will be described in more detail with reference to the accompanying drawings.

Referring to FIG. 1, the surveillance system 100 of the present invention includes a stereo camera device 130 and a monitoring device 150 connected through a network 110; it can monitor a moving subject in three-dimensional space, output alerts, and track and display the subject's position on a digital drawing. The surveillance system 100 can be used not only for security purposes but also for any application that needs to track the location of a specific object entering a specific space.

The stereo camera device 130 is installed in a specific surveillance area for crime prevention or other purposes, and the monitoring device 150 is preferably located in a manager's area away from that surveillance area.

The network 110 may be an internal dedicated communication network or a commercial public network such as the Internet, a mobile communication network, or a public switched telephone network (PSTN), and may be wired or wireless. Although not shown in FIG. 1, the stereo camera device 130 and the monitoring device 150 must include interface means for connecting to the network 110.

The stereo camera device 130 includes a stereo camera 131 and an image processor 140, and can recognize an object moving in the surveillance area and determine whether it is a specific monitoring target. The stereo camera 131 includes a first camera 133, a second camera 135, and an image receiver 137.

The first camera 133 and the second camera 135 are a pair of cameras installed apart from each other so as to photograph the same surveillance area; together they are called a stereo camera. The first camera 133 and the second camera 135 output analog (or digital) stereo video signals of the surveillance area to the image receiver 137.

The image receiver 137 converts the continuous frames of the video signals (or images) input from the first camera 133 and the second camera 135 into digital images and provides them to the image processor 140 in frame synchronization.

The image processor 140 extracts the region of an object moving in the photographed area (the surveillance area) from each pair of digital image frames output by the image receiver 137 and determines whether the object is of interest; this determination can be performed in real time on every frame of the video continuously input from the stereo camera device 130.

For this processing, the image processor 140 includes a distance information calculator 141, an object extractor 143, an object recognizer 145, an object tracker 147, and a surveillance zone-alarm unit 149. Their operations are described below with reference to FIGS. 2 to 4.

The first camera 133 and the second camera 135 are arranged to photograph a specific surveillance area. When the first camera 133 and the second camera 135 generate analog video signals, the image receiver 137 converts them into digital image signals and provides them to the image processor 140 in frame synchronization (step S201).

<Object extraction and recognition step: S203>

The distance information calculator 141 computes 3D depth map data, containing the distance information of each pixel, from each pair of digital images received in real time from the image receiver 137.

The object extractor 143 and the object recognizer 145 extract the region of a moving object from at least one image of each pair of digital images input through the image receiver 137. As described above, the applicant has already filed Patent Application Nos. 10-2010-0039302 and 10-2010-0039366 on object recognition using a stereo camera. According to these, the object extractor 143 first extracts the moving object using known image processing techniques: a difference image is obtained by subtracting the reference background image from the newly input image.
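A minimal sketch of this difference-image step follows, using the OpenCV 4 API; the threshold value and function name are illustrative assumptions, not specifics from the description.

```python
import cv2

def extract_moving_objects(frame_gray, background_gray, thresh=30):
    """Subtract the reference background and binarize the difference image."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Outer contours approximate the outlines of the moving objects.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours
```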

<Determining whether the object is a monitoring target: S205>

The object extractor 143 and the object recognizer 145 determine whether the extracted object is of a type the manager wants to monitor. For example, they determine whether the object is a person, a car, or an animal, or, if a person, whether the person is taller than a certain height (an adult). If the monitoring target is a person, the object extractor 143 and the object recognizer 145 specifically extract and recognize objects determined to be people.

As described above, according to the applicant's Patent Application Nos. 10-2010-0039302 and 10-2010-0039366 on object recognition using a stereo camera, the object extractor 143 detects the outline of the object from the difference image, and the object recognizer 145 obtains the area or the representative length of the object using the outline extracted by the object extractor 143 and the depth map data calculated by the distance information calculator 141.

The object recognizer 145 can determine whether the extracted object is an object of interest by checking whether the calculated area or representative length falls within the preset area or length range of the objects of interest.
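The range check itself is simple; a sketch follows, where the numeric ranges are illustrative assumptions (square meters and meters, sized for a person), not values from the description.

```python
def is_monitoring_target(area_m2: float, length_m: float,
                         area_range=(0.2, 1.5),
                         length_range=(1.2, 2.2)) -> bool:
    """True if the object's area or representative length falls within a
    preset range of the objects of interest (ranges are assumptions)."""
    return (area_range[0] <= area_m2 <= area_range[1]
            or length_range[0] <= length_m <= length_range[1])
```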

<Alert generation and object tracking: S207, S209>

The object tracker 147 tracks the movement of the recognized object of interest and provides its position information to the surveillance zone-alarm unit 149. Many image processing techniques for such motion tracking are already known, and any suitable one may be used.

Even without a dedicated object tracking algorithm in the object tracker 147, the operation of the surveillance zone-alarm unit 149 described below is also possible by having the distance information calculator 141, the object extractor 143, and the object recognizer 145 repeat the operations described above on the stereo images input in real time from the image receiver 137.

The surveillance zone-alarm unit 149 generates an alert and outputs it to the monitoring device 150 as soon as an object of interest appears in the image. Here, this alert is an alarm in the plain sense, containing information about the surveillance area in which the stereo camera device 130 is installed and information indicating that a monitored object has appeared in that area.

FIG. 3 shows the surveillance area S in which the stereo camera device 130 of the present invention is installed and a plurality of surveillance zones L1, L2, and L3 existing within the area S, and FIG. 4 is an example of an image P photographing the surveillance area S of FIG. 3.

For example, referring to FIG. 3, assuming the identified monitored object moves within the surveillance area S through m1, m2, m3, and m4, the surveillance zone-alarm unit 149 generates an alert when the monitored object appears at m1 in the image of FIG. 4.

<Specific surveillance zone alert mode: S211, S213>

When the surveillance zone-alarm unit 149 determines, based on the information provided by the object tracker 147, that the tracked object has entered a special surveillance zone, it generates a surveillance zone entry alert and outputs it to the monitoring device 150 together with the position information.

Referring to FIGS. 3 and 4, the surveillance area S is determined by the installation position of the stereo camera device 130 and the camera's angle of view, whereas the surveillance zones L1, L2, and L3 are set by the administrator within the area S.

The surveillance zones L1, L2, and L3 are each specified by a pixel range indicating the corresponding zone in the image (L1-1, L2-2, L3-3) and by a distance range from the stereo camera device 130 to that zone.

For example, the first surveillance zone L1 is specified by the pixel range L1-1 shown in FIG. 4 and the distance range d1 to d2. In the case of the second surveillance zone L2 and the third surveillance zone L3, their pixel ranges (L2-1 and L2-2) overlap somewhat, but their distance ranges differ, so the two zones can be distinguished from each other. The surveillance zone-alarm unit 149 can obtain the distance to the object by looking up the per-pixel distance information in the depth map data provided by the distance information calculator 141.

Accordingly, when the object tracked by the object tracker 147 falls within the L1-1 pixel range and its distance lies within the range d1 to d2, the surveillance zone-alarm unit 149 determines that the object is located at m1, generates a surveillance zone entry alert, and outputs it to the monitoring device 150.
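A surveillance zone is thus a pixel region plus a depth interval. The sketch below shows one way to encode that membership check; the zone geometry values are illustrative assumptions, not taken from the figures.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class SurveillanceZone:
    zone_id: str
    x_range: Tuple[int, int]      # pixel range (x_min, x_max) in the image
    y_range: Tuple[int, int]      # pixel range (y_min, y_max)
    d_range: Tuple[float, float]  # distance range (d1, d2) from the camera

def zone_containing(zones: Sequence[SurveillanceZone],
                    px: int, py: int, depth: float) -> Optional[str]:
    """Return the id of the first zone containing pixel (px, py) at depth."""
    for z in zones:
        if (z.x_range[0] <= px <= z.x_range[1]
                and z.y_range[0] <= py <= z.y_range[1]
                and z.d_range[0] <= depth <= z.d_range[1]):
            return z.zone_id
    return None

# Overlapping pixel ranges are disambiguated by the depth interval:
zones = [SurveillanceZone("L1", (100, 300), (220, 420), (4.0, 6.0)),
         SurveillanceZone("L2", (250, 500), (220, 420), (6.0, 9.0))]
```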

Here, the surveillance zone entry alert basically includes information about which of the plurality of surveillance zones the object has entered (e.g., the surveillance zone identifier) and the detection time. In addition, the surveillance zone entry information may include the image of at least one frame capturing the moment of detection.
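The alert payload described here maps naturally onto a small record type; a minimal sketch, with field names as assumptions:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

import numpy as np

@dataclass
class ZoneEntryAlert:
    """Surveillance zone entry alert (field names are illustrative)."""
    zone_id: str                        # which surveillance zone was entered
    detected_at: datetime               # detection time
    frame: Optional[np.ndarray] = None  # optional frame at the detection moment
```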

Later, when the object moves m2 → m3 → m4 and enters the second surveillance zone L2, the third surveillance zone L3, and then the first surveillance zone L1 in turn, the surveillance zone-alarm unit 149 likewise outputs the surveillance zone entry information to the monitoring device 150.

Object tracking and monitoring by the stereo camera device 130 of the present invention is performed in the manner described above.

The monitoring device 150 of the present invention can be connected to the stereo camera device 130 through the network 110 and receive various alerts from the stereo camera device 130.

The monitoring device 150 may be a general computer, or a device carried by an individual such as a mobile phone, a PDA, a smartphone, or another dedicated terminal.

The monitoring device 150 includes a display unit, so that in the case of a surveillance zone entry alert it can visually show whether the object has entered a specific surveillance zone within the surveillance area.

Furthermore, the monitoring device 150 can visually present the movement of objects in the surveillance area to the manager using the alerts provided by the stereo camera device 130. For example, with a digital map like FIG. 3, showing the surveillance area S and the surveillance zones L1, L2, and L3, stored in advance, the monitoring device 150 can use the alerts provided by the surveillance zone-alarm unit 149 to display a drawing like FIG. 3 to the user.

Information at the level of such a drawing is useful because it can be small enough in size to be transmitted to a portable monitoring device 150 by wire or wirelessly, and to be processed and displayed visually by the portable monitoring device 150.

In addition, the monitoring device 150 may further include a voice guidance processor (not shown) so that, when an object approaches a specific monitoring zone, a predetermined guidance message (e.g., "Please take one step back from ooo") is output through a speaker (not shown) installed in the surveillance area (S).

Depending on the embodiment, the monitoring device 150 may itself take over the function of the monitoring zone-alarm unit 149 of the stereo camera device 130. In this case, the monitoring device 150 further includes a monitoring zone unit (not shown) that determines whether an object has entered a specific monitoring zone, using the information calculated and provided by the object tracking unit 147.

Hereinafter, the calculation of the area and representative length of the object in step S205 is briefly described, based on the applicant's earlier patent applications Nos. 10-2010-0039302 and 10-2010-0039366.

The area of the object is calculated by obtaining the actual area per pixel (hereinafter referred to as the 'unit area' of a pixel) at the distance (do) at which the object extracted in step S203 is located, and then multiplying it by the number of pixels contained inside the outline of the object.

Referring to FIG. 5, the actual area M corresponding to the entire frame at the maximum depth (D), based on the existing background image, and the actual area m(do) corresponding to the entire frame at the position (do) of the extracted object are shown. First, the actual area m(do) corresponding to the entire frame at the distance (do) at which the object is located can be obtained as in Equation 1 below.

Equation 1

m(do) = M × (do / D)²

Here, M is the actual area corresponding to the entire frame (e.g., 720×640 pixels) at the maximum depth (D), based on the existing background image.

Next, the unit area mp(do) of a pixel in the object region is obtained, as in Equation 2 below, by dividing the actual area m(do) corresponding to the entire frame at the distance (do) at which the object is located by the total number of pixels in the frame (Q; e.g., 460,800 = 720×640).

Equation 2

mp(do) = m(do) / Q

Here, Q is the total number of pixels. According to Equation 2, mp(do) depends on the distance (do) to the object, as determined from the distance information of the three-dimensional depth map data.

Finally, as described above, the area of the object can be obtained, as in Equation 3 below, by multiplying the unit area mp(do) of a pixel by the number of pixels (qc) contained inside the outline.

Equation 3

Ao = mp(do) × qc

Here, qc is the number of pixels contained in the object, and Ao above denotes the resulting area of the object.
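
Equations 1 to 3 can be exercised end to end in a few lines. The sketch below is illustrative only: it assumes the quadratic distance scaling reconstructed in Equation 1, and all numeric values are made up.

```python
def object_area(M, D, do, Q, qc):
    """M  : real area covered by the full frame at the maximum depth D
       do : distance to the object
       Q  : total pixels per frame (e.g. 460_800 = 720 * 640)
       qc : number of pixels inside the object's outline"""
    m_do = M * (do / D) ** 2   # Equation 1: frame footprint at distance do
    mp_do = m_do / Q           # Equation 2: unit area of one pixel at do
    return mp_do * qc          # Equation 3: area of the object

# Example: a frame covering 20 m^2 at D = 10 m; an object at do = 5 m
# whose outline contains 9,000 pixels.
print(object_area(M=20.0, D=10.0, do=5.0, Q=720 * 640, qc=9000))
# ≈ 0.098 m^2 (5 m^2 footprint / 460,800 px, times 9,000 px)
```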

Hereinafter, the process of calculating the representative length of the object in step S205 is briefly described.

<Extraction of the central axis of a moving object: step S209>

The object recognition unit 145 applies a skeletonization or thinning algorithm to the object extracted by the object extraction unit 143 to extract the central axis (medial axis) of the object, one pixel in width. Various known skeletonization methods can be applied, such as the Medial Axis Transform (MAT) algorithm, which uses the outline, or the Zhang-Suen algorithm.

For example, under the medial axis transform, the central axis (a) of the object is the set of points, among the points (or pixels) in the object (R), that have a plurality of boundary points, as shown in FIG. 6. Here, a boundary point of a point in the object is a point on the outline (B) that is closest to that point; the points b1 and b2 on the outline are the boundary points of the point P1 in the object (R). The medial axis algorithm is therefore a process of extracting the points that have a plurality of boundary points, and can be expressed as in Equation 4 below.

Equation 4

Pma = { x ∈ R | bmin(x) > 1 }

Here, Pma is the central axis, expressed as a set of points x; x is a point in the object (R); and bmin(x) is the number of boundary points of the point x. The central axis is thus the set of points x whose number of boundary points is greater than one. Note that the structure of the resulting skeleton may vary somewhat depending on the distance measure used, when computing boundary points, from an interior point x to a pixel on the outline (e.g., 4-distance, 8-distance, Euclidean distance).

Alternatively, when the object has a relatively simple shape, the center line can be extracted by taking the peak values of the Gaussian values computed over the object.
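
For reference, both families of skeletonization mentioned above have widely available implementations; a minimal sketch using scikit-image is shown below, assuming a binary object mask such as the object extraction step would produce. The mask contents and variable names are illustrative.

```python
import numpy as np
from skimage.morphology import skeletonize, medial_axis

# Stand-in for an extracted object region R (a binary mask).
mask = np.zeros((64, 64), dtype=bool)
mask[10:50, 28:36] = True

# Zhang-Suen-style thinning: a one-pixel-wide central axis.
skeleton = skeletonize(mask)

# Distance-transform (MAT-style) medial axis, plus the distance map.
mat_axis, dist = medial_axis(mask, return_distance=True)

# The skeleton's pixel count can serve later as qo, the number of
# pixels representing the object, when computing the representative length.
qo = int(skeleton.sum())
```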

Once the center line is extracted, the representative length of the object is obtained using the depth map data. The representative length of the object is the actual length, calculated from the image, of whatever has been set to represent the object; it may be the actual length of the central axis, the actual width of the object, or the actual height of the object. Note, however, that the representative length of the object is affected by the position of the camera, the shooting angle, the characteristics of the captured area, and so on.

Furthermore, the actual length of the object is calculated by obtaining the actual length per pixel (hereinafter referred to as the 'unit length' of a pixel) at the distance (do) at which the object is located, and then multiplying it by the number of pixels representing the object. Here, the number of pixels representing the object may be the number of pixels forming the central axis mentioned above, or the number of pixels spanning the width or height of the object.

As the number of pixels representing the object, the width or height of the object can be obtained from the range of x-axis coordinates or y-axis coordinates of the object region, and the length of the central axis can be obtained, for example, by counting all the pixels contained in the central axis.

The unit length of a particular pixel varies from pixel to pixel (more precisely, with the depth of the pixel) and can be obtained as follows, with reference to FIG. 5. Here, for convenience of explanation, the size of an image frame is assumed to be 720×640 pixels.

In FIG. 5, the actual length Lmax corresponding to the vertical (or horizontal) axis of the entire frame at the maximum depth (D), based on the existing background image, and the actual length L(do) corresponding to the vertical (or horizontal) axis of the entire frame at the position (do) of the extracted object are shown. First, the actual length L(do) corresponding to the vertical (or horizontal) axis of the entire frame at the depth (do) at which the object is located can be obtained as in Equation 5 below.

Equation 5

L(do) = Lmax × (do / D)

Here, L(do) is the actual length corresponding to the vertical (or horizontal) axis of the entire frame at the depth (do), and Lmax is the actual length corresponding to the vertical (or horizontal) axis of the entire frame at the maximum depth (D), based on the existing background image.

Next, the unit length Lp(do) of a pixel in the object region can be obtained, as in Equation 6 below, by dividing the actual length L(do) corresponding to the vertical (or horizontal) axis of the entire frame at the distance (do) at which the object is located by the number of pixels along that axis of the frame (Qx or Qy; in this example, Qx = 720 for the horizontal axis and Qy = 640 for the vertical axis).

Equation 6

Lp(do) = L(do) / Qy

Here, Lp(do) is the unit length of a pixel in the object region located at depth (do), and Qy is the number of pixels along the vertical axis of the frame. According to Equation 6, Lp(do) depends on the depth (do) to the object, as determined from the distance information of the three-dimensional depth map data, and on the maximum depth in the map data.

Once the unit length of a pixel is obtained, the object recognition unit 145 obtains the representative length of the object. The representative length of the object can be obtained, as in Equation 7 below, by multiplying the unit length Lp(do) of a pixel by the number of pixels (qo) representing the object.

Equation 7

Lo = Lp(do) × qo

Here, qo is the number of pixels representing the object, and Lo above denotes the representative length of the object.
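
As with the area computation, Equations 5 to 7 reduce to a short chain of multiplications. The sketch below assumes the linear distance scaling reconstructed in Equation 5, and all numbers are made up for illustration.

```python
def representative_length(L_max, D, do, Qy, qo):
    """L_max : real length of the frame's vertical axis at the maximum depth D
       do    : depth of the object
       Qy    : pixels along the frame's vertical axis (e.g. 640)
       qo    : pixels representing the object (central-axis pixel count,
               or the pixel span of the object's width or height)"""
    L_do = L_max * (do / D)   # Equation 5: frame extent at depth do
    Lp_do = L_do / Qy         # Equation 6: unit length of one pixel at do
    return Lp_do * qo         # Equation 7: representative length

# Example: the frame spans 4 m vertically at D = 10 m; an object at
# do = 5 m whose central axis spans 560 pixels measures about 1.75 m.
print(representative_length(L_max=4.0, D=10.0, do=5.0, Qy=640, qo=560))
```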

While preferred embodiments of the present invention have been illustrated and described above, the present invention is not limited to the specific embodiments described. Various modifications may be made by those of ordinary skill in the art to which the invention pertains without departing from the gist of the invention as claimed in the claims, and such modifications are not to be understood in isolation from the technical spirit or scope of the present invention.

Claims (5)

1. A stereo camera device comprising:

a stereo camera having a first camera and a second camera that photograph the same surveillance area and generate a pair of stereo digital images; and

an image processing unit that extracts an object moving within the surveillance area while calculating distance information for each pixel through image processing of the stereo digital images output from the stereo camera,

wherein the image processing unit comprises a monitoring zone-alarm unit that sets at least one monitoring zone, defined by a pixel region set to belong to the image and a distance range preset for each pixel of the pixel region, and that generates a monitoring zone entry alert and outputs it to an external monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the monitoring zone.

2. The stereo camera device of claim 1, wherein the monitoring zone entry alert includes an identifier of the monitoring zone in which the extracted object is located.

3. The stereo camera device of claim 1, wherein the image processing unit comprises:

a distance information calculator that calculates three-dimensional depth map data using the stereo digital images;

an object extractor that extracts the region of a moving object by comparing one of the stereo digital images with a reference background image; and

an object recognition unit that calculates the area or representative length of the extracted object and recognizes the object as a monitoring target when the calculated area or representative length falls within a preset range.

4. A surveillance system comprising:

a stereo camera device comprising a stereo camera having a first camera and a second camera that photograph the same surveillance area and generate a pair of stereo digital images, and an image processing unit that extracts an object moving within the surveillance area while calculating distance information for each pixel through image processing of the stereo digital images output from the stereo camera; and

a monitoring device that receives alerts from the stereo camera device and displays them to an administrator in the form of a digital map,

wherein the image processing unit comprises a monitoring zone-alarm unit that sets at least one monitoring zone, defined by a pixel region set to belong to the image and a distance range preset for each pixel of the pixel region, and that generates a monitoring zone entry alert and outputs it to the monitoring device when the extracted object is determined, based on the calculated per-pixel distance information, to be located in the monitoring zone.

5. A monitoring method of a stereo camera device, comprising:

generating a pair of stereo digital images using two cameras that photograph the same surveillance area;

extracting an object moving within the surveillance area while calculating distance information for each pixel through image processing of the stereo digital images output from the stereo camera;

setting at least one monitoring zone defined by a pixel region set to belong to the image and a distance range preset for each pixel of the pixel region; and

generating a monitoring zone entry alert and outputting it to an external monitoring device when the extracted object is determined to be located in the monitoring zone based on the calculated per-pixel distance information.
PCT/KR2011/002143 2011-03-14 2011-03-29 Stereo camera device capable of tracking path of object in monitored area, and monitoring system and method using same Ceased WO2012124852A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110022280A KR20120104711A (en) 2011-03-14 2011-03-14 Stereo camera apparatus capable of tracking object at detecting zone, surveillance system and method thereof
KR10-2011-0022280 2011-03-14

Publications (1)

Publication Number Publication Date
WO2012124852A1 true WO2012124852A1 (en) 2012-09-20

Family

ID=46830903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/002143 Ceased WO2012124852A1 (en) 2011-03-14 2011-03-29 Stereo camera device capable of tracking path of object in monitored area, and monitoring system and method using same

Country Status (2)

Country Link
KR (1) KR20120104711A (en)
WO (1) WO2012124852A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101640527B1 (en) 2012-10-09 2016-07-18 에스케이 텔레콤주식회사 Method and Apparatus for Monitoring Video for Estimating Size of Single Object
KR101519261B1 (en) 2013-12-17 2015-05-11 현대자동차주식회사 Monitoring method and automatic braking apparatus
KR101400169B1 (en) * 2014-02-06 2014-05-28 (주)라이드소프트 Visually patrolling system using virtual reality for security controlling and method thereof
KR101593187B1 (en) 2014-07-22 2016-02-11 주식회사 에스원 Device and method surveiling innormal behavior using 3d image information
EP3026653A1 (en) * 2014-11-27 2016-06-01 Kapsch TrafficCom AB Method of controlling a traffic surveillance system
KR101645451B1 (en) * 2015-04-14 2016-08-12 공간정보기술 주식회사 Spatial analysis system using stereo camera
KR102076531B1 (en) 2015-10-27 2020-02-12 한국전자통신연구원 System and method for tracking position based on multi sensor
KR101748780B1 (en) * 2016-12-02 2017-06-19 (주) 비전에스티 Method for detection of the road sign using stereo camera and apparatus thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003052034A (en) * 2001-08-06 2003-02-21 Sumitomo Osaka Cement Co Ltd Monitoring system using stereoscopic image
JP2003246268A (en) * 2002-02-22 2003-09-02 East Japan Railway Co Home fall person detection method and device
KR20090027410A (en) * 2007-09-12 2009-03-17 한국철도기술연구원 Stereo image-based platform monitoring system and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2851880A1 (en) * 2013-09-19 2015-03-25 Canon Kabushiki Kaisha Method, system and storage medium for controlling a video capture device, in particular of a door station
US10218899B2 (en) 2013-09-19 2019-02-26 Canon Kabushiki Kaisha Control method in image capture system, control apparatus and a non-transitory computer-readable storage medium
WO2015083875A1 (en) * 2013-12-03 2015-06-11 전자부품연구원 Method and mobile system for estimating camera location through generation and selection of particle
WO2017071084A1 (en) * 2015-10-28 2017-05-04 小米科技有限责任公司 Alarm method and device
US10147288B2 (en) 2015-10-28 2018-12-04 Xiaomi Inc. Alarm method and device
CN108898617A (en) * 2018-05-24 2018-11-27 宇龙计算机通信科技(深圳)有限公司 A kind of tracking and device of target object
CN110942578A (en) * 2019-11-29 2020-03-31 韦达信息技术(深圳)有限公司 Intelligent analysis anti-theft alarm system

Also Published As

Publication number Publication date
KR20120104711A (en) 2012-09-24

Similar Documents

Publication Publication Date Title
WO2012124852A1 (en) Stereo camera device capable of tracking path of object in monitored area, and monitoring system and method using same
CN103168467B (en) The security monitoring video camera using heat picture coordinate is followed the trail of and monitoring system and method
CN105915846B (en) A kind of the invader monitoring method and system of the multiplexing of list binocular
WO2014073841A1 (en) Method for detecting image-based indoor position, and mobile terminal using same
WO2016072625A1 (en) Vehicle location checking system for a parking lot using imaging technique, and method of controlling same
WO2012005387A1 (en) Method and system for monitoring a moving object in a wide area using multiple cameras and an object-tracking algorithm
WO2011136407A1 (en) Apparatus and method for image recognition using a stereo camera
WO2016099084A1 (en) Security service providing system and method using beacon signal
WO2016107230A1 (en) System and method for reproducing objects in 3d scene
JP2020182146A (en) Monitoring device and monitoring method
WO2014051262A1 (en) Method for setting event rules and event monitoring apparatus using same
WO2018135906A1 (en) Camera and image processing method of camera
CN111601011A (en) Automatic alarm method and system based on video stream image
WO2023074995A1 (en) System for detecting and expressing abnormal temperature at industrial site by using thermal imaging camera and generating alarm to inform of same, and operation method thereof
WO2014035103A1 (en) Apparatus and method for monitoring object from captured image
WO2021020866A1 (en) Image analysis system and method for remote monitoring
KR102374357B1 (en) Video Surveillance Apparatus for Congestion Control
KR101446422B1 (en) Video security system and method
WO2018139847A1 (en) Personal identification method through facial comparison
WO2021025242A1 (en) Electronic device and method thereof for identifying object virtual image by reflection in indoor environment
WO2012137994A1 (en) Image recognition device and image-monitoring method therefor
WO2020111353A1 (en) Method and apparatus for detecting privacy invasion equipment, and system thereof
WO2018097384A1 (en) Crowdedness notification apparatus and method
WO2022097805A1 (en) Method, device, and system for detecting abnormal event
WO2015026002A1 (en) Image matching apparatus and image matching method using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11860792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11860792

Country of ref document: EP

Kind code of ref document: A1