WO2014162554A1 - Image processing system and image processing program - Google Patents
Image processing system and image processing program
- Publication number
- WO2014162554A1 (PCT/JP2013/060303)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- display
- moving object
- stereoscopic image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H04N 13/139: Format conversion, e.g. of frame-rate or size
- G06T 3/00: Geometric image transformations in the plane of the image
- H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N 13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N 13/128: Adjusting depth or disparity
- H04N 13/189: Recording image signals; Reproducing recorded image signals
- H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N 13/366: Image reproducers using viewer tracking
- H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals
- H04N 2013/0088: Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image
Definitions
- the present invention relates to an image processing system and an image processing program, and more particularly to an image processing system and an image processing program having a function of detecting a moving object from a photographed image by a camera and displaying it as a stereoscopic image.
- monitoring has conventionally been performed by displaying a plurality of monitoring images taken by a plurality of monitoring cameras on a plurality of image display devices, or by dividing and displaying a plurality of monitoring images on one image display device.
- monitoring is performed by switching and displaying a plurality of monitoring images on a single image display device in a time-sharing manner.
- the conventional monitoring system is not suitable for monitoring the inside of a building having a complicated structure.
- the floor configuration becomes complicated and the number of rooms increases.
- the number of cameras installed increases, and the number of monitoring images displayed on the image display device also increases. It is therefore difficult to grasp at a glance which room on which floor each of the monitoring images displayed on the image display device shows, and difficult to grasp the situation of the entire building.
- to address this, a system that displays monitoring images according to the layout of the floor has been proposed (see, for example, Patent Document 1). According to the technique described in Patent Document 1, the monitoring areas such as the interior of each room in the building and the corridor can be monitored simultaneously on one screen according to the layout of the floor.
- however, in the technique of Patent Document 1, a monitoring image photographed by the monitoring camera installed in each monitoring area is merely allocated and displayed at the position of that monitoring area on a planar floor image represented in top view. Basically, therefore, monitoring is still performed by dividing and displaying a plurality of monitoring images on one image display device. It is thus possible to grasp where each monitoring image belongs, but to grasp the situation of the entire building the monitoring images must still be confirmed individually, which remains inconvenient for users.
- further, the technique of Patent Document 1 is not suitable for monitoring a large-scale building with a complicated floor configuration. That is, since the technique of Patent Document 1 displays a plurality of monitoring images on one screen, there is a limit to the number of images that can be displayed. In practice, as shown in FIG. 7 of Patent Document 1, it is only possible to display the monitoring images taken in the monitoring areas of one floor, so the technique cannot be used for overall monitoring of large-scale and complicated buildings such as factories, office buildings, and department stores. If a large number of monitoring images were forcibly displayed on one screen, the display size of each monitoring image would become small, making them very difficult to see.
- Japanese Patent Application Laid-Open No. 2008-118466 (Patent Document 2), cited as a prior art document in paragraph [0004] of Patent Document 1, generates an overhead image (an image in which each floor is looked down on from above) for each floor, and generates a floor overlay image of 1F to 3F by further combining the overhead images of the plurality of floors.
- however, the technique of Patent Document 2 is also not suitable for monitoring a large-scale building with a complicated floor configuration.
- the present invention has been made to solve such problems, and its purpose is to allow the user to easily grasp the situation of an entire building through an easy-to-understand image, even for a large-scale building with a complicated floor configuration.
- in order to solve the above problems, the image processing system of the present invention includes: an image input unit that inputs captured images from a plurality of cameras installed so as to capture a shooting target area from a plurality of angles; a depth information calculation unit that calculates, for each pixel, depth information representing the distance from the camera of a moving object included in the plurality of input captured images; a stereoscopic image generation unit that sets a plurality of projection planes according to the relative angular relationship between the plurality of cameras and generates a stereoscopic image in which the moving objects included in the plurality of captured images are combined into one; an overhead image generation unit that generates an overhead image of the shooting target area by combining the stereoscopic image of the moving object with a spatial image representing the space of the shooting target area; and a display control unit that displays the generated overhead image on a display.
- according to the present invention configured as described above, a stereoscopic image of the moving object is generated as point cloud data formed by projecting the value of each pixel in the direction of the projection plane according to the depth information detected from the captured images, and this stereoscopic image is combined with the spatial image of the shooting target area and displayed as one overall overhead image. Therefore, even if the number of shooting target areas increases in a large-scale building with a complicated floor configuration, the large number of captured images is not displayed on divided screens or switched by time division. That is, a single bird's-eye view image, in which the three-dimensional point cloud images of the individual shooting target areas are synthesized into the entire space of the building, is displayed on the display.
- therefore, the user does not need to individually check a plurality of captured images displayed for each shooting target area as in the past, and can grasp the situation of the entire building at a glance by checking the overhead image.
- further, according to the present invention, the stereoscopic image is synthesized by projecting the value of each pixel of the moving object included in each captured image, obtained by photographing the shooting target area from a plurality of angles, in the direction of the projection plane corresponding to that captured image. Therefore, by moving the position of the virtual viewpoint from which the overhead image including the stereoscopic image of the moving object is viewed, thereby changing the orientation of the projection planes, the overhead image can be arbitrarily switched to views from various angles and displayed on the display.
- a bird's-eye view image of a shooting target region including a stereoscopic image of a moving object processed from a plurality of shot images into point cloud data is displayed on the display.
- the viewpoint of the overhead image can be arbitrarily switched and displayed. Accordingly, for example, an object that is hidden behind another object and cannot be seen from a certain angle can be made visible by changing the angle of the overhead image.
- FIG. 3 is a diagram for explaining the processing of the stereoscopic image generation unit according to the first to third embodiments.
- FIG. 4 is a diagram illustrating an example of the overhead image displayed on the display by the display control unit in the first to third embodiments.
- FIG. 5 is a diagram showing an example of the related information stored in the related information storage unit in the first to third embodiments.
- FIG. 6 is a flowchart illustrating an operation example of the image processing apparatus, which is a component of the image processing system according to the first embodiment.
- FIG. 1 is a diagram illustrating a configuration example of an image processing system according to the first embodiment.
- in the first embodiment, an example in which the image processing system of the present invention is implemented in a monitoring system is shown.
- the monitoring system includes a plurality of imaging units 101, 102, ..., a plurality of image input devices 201, 202, ..., an image processing device 300, and a display 400.
- the plurality of imaging units 101, 102, ... are connected to the plurality of image input devices 201, 202, ..., respectively. Further, the plurality of image input devices 201, 202, ... are connected to the image processing device 300 via a local network 500 in the building.
- the plurality of imaging units 101, 102,... are installed for each of a plurality of imaging target areas in the building.
- the plurality of imaging target areas referred to here are, for example, desired spatial areas desired to be monitored, such as the inside of each room on each floor of a building, a hallway, a staircase, and an elevator.
- One imaging unit is assigned to one imaging target area. In the following description, it is assumed that the two imaging units 101 and 102 are provided for convenience of explanation.
- a single imaging unit is equipped with a plurality of cameras.
- the first imaging unit 101 includes four cameras 101A to 101D.
- the second imaging unit 102 is also provided with a plurality of cameras. The number of cameras included in the second imaging unit 102 is not necessarily the same as that of the first imaging unit 101.
- the plurality of cameras 101A to 101D are installed so as to photograph the photographing target region from a plurality of angles.
- for example, when a certain room is set as a shooting target area, the four cameras 101A to 101D are installed on the wall surfaces in the four directions of the room so that the entire room can be captured.
- These cameras 101A to 101D are all stereo cameras, and simultaneously photograph the object in the photographing target area from two different directions.
- the image processing apparatus 300 analyzes the captured image by a known method, so that information in the depth direction of the object can be acquired.
- the plurality of image input devices 201 and 202 are used to input captured images from the plurality of imaging units 101 and 102, respectively, and are configured by, for example, a personal computer.
- the first image input device 201 is connected to the first imaging unit 101
- the second image input device 202 is connected to the second imaging unit 102.
- the invention is not limited to such a connection form.
- two image capturing units 101 and 102 may be connected to one image input device, and a captured image may be input from the two image capturing units 101 and 102 to one image input device.
- the image processing apparatus 300 inputs captured images from the plurality of image input apparatuses 201 and 202 and performs image processing described in detail below. Then, an image obtained as a result of the image processing is displayed on the display 400.
- the image processing apparatus 300 is configured by, for example, a personal computer or a server apparatus, and is installed in a monitoring room or the like in a building.
- the display 400 is constituted by, for example, a liquid crystal display device.
- FIG. 2 is a block diagram illustrating a functional configuration example of the image processing system.
- the image processing system according to the first embodiment has, as its functional configuration, an image input unit 1, a depth information calculation unit 2, a stereoscopic image generation unit 3, an overhead image generation unit 4, a display control unit 5, a related information generation unit 6, an operation reception unit 7, a spatial image storage unit 8, and a related information storage unit 9.
- the function of the image input unit 1 is provided in each of the image input devices 201 and 202 shown in FIG. 1.
- the functions of the depth information calculation unit 2, the stereoscopic image generation unit 3, the overhead image generation unit 4, the display control unit 5, the related information generation unit 6, the operation reception unit 7, the spatial image storage unit 8, and the related information storage unit 9 are provided in the image processing apparatus 300 shown in FIG. 1.
- of these, the functions of the depth information calculation unit 2, the stereoscopic image generation unit 3, the overhead image generation unit 4, the display control unit 5, the related information generation unit 6, and the operation reception unit 7 included in the image processing apparatus 300 can each be configured by any of hardware, a DSP (Digital Signal Processor), or software.
- when configured by software, for example, each of the above functions is actually realized by the CPU, RAM, and ROM of the image processing apparatus 300 and the operation of an image processing program stored in a recording medium such as the RAM, ROM, a hard disk, or semiconductor memory.
- the image input unit 1 inputs captured images from a plurality of cameras. That is, the image input unit 1 included in the first image input device 201 inputs captured images from the plurality of cameras 101A to 101D, and the image input unit 1 included in the second image input device 202 inputs captured images from the plurality of cameras of the second imaging unit 102 (not shown in FIG. 1).
- the depth information calculation unit 2 calculates, for each pixel, depth information representing the distance from the camera of a moving object included in the plurality of captured images input by the image input unit 1.
- the captured image input by the image input unit 1 is a parallax image captured by a stereo camera.
- the depth information calculation unit 2 calculates depth information representing the distance of the moving object from the camera for each pixel by analyzing the parallax image by a known method.
- various methods can be applied as a method of extracting a moving object from a captured image. For example, by detecting a difference between frame images sequentially captured by the cameras 101A to 101D according to a predetermined frame rate, it is possible to extract an area where the difference is generated as a moving object area.
- alternatively, an image captured in advance with no moving object in the shooting target area may be stored as a background image, and the difference between the captured image and the background image detected, so that the area where the difference occurs is extracted as the area of the moving object.
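- as a rough illustration of the two steps above (parallax-based depth and moving-object extraction), the following sketch uses OpenCV; the camera parameters and thresholds are hypothetical values for illustration, not taken from this publication.

```python
# Minimal sketch of depth calculation from a stereo pair, restricted to
# moving-object pixels found by background subtraction. focal_px,
# baseline_m, and the threshold of 30 are illustrative assumptions.
import cv2
import numpy as np

def moving_object_depth(left, right, background, focal_px=700.0, baseline_m=0.1):
    """Return a per-pixel depth map (meters), zero outside the moving object."""
    left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # Disparity from the stereo pair (StereoBM returns fixed-point disparity x16).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Depth = f * B / disparity, valid only where disparity > 0.
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]

    # Moving-object region by background subtraction (one of the two
    # extraction methods mentioned above; frame differencing works the same way).
    diff = cv2.absdiff(left_gray, cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    depth[mask == 0] = 0.0  # keep depth only for moving-object pixels
    return depth
```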
- the stereoscopic image generation unit 3 uses the depth information calculated by the depth information calculation unit 2 to generate a stereoscopic image of the moving object from the plurality of captured images input by the image input unit 1.
- the stereoscopic image generation unit 3 generates a set of stereoscopic images for one shooting target region.
- a set of stereoscopic images is generated from four captured images input to the image input unit 1 from the four cameras 101A to 101D included in the first imaging unit 101.
- here, "one set" means that, when a plurality of moving objects are extracted from the captured images, the stereoscopic images of those moving objects are treated together as one set.
- FIG. 3 is an explanatory diagram of processing for generating a stereoscopic image of a moving object from a plurality of captured images.
- the stereoscopic image generating unit 3 sets a plurality of projection planes 31 to 34 in accordance with the relative angular relationship between the plurality of cameras 101A to 101D.
- the projection planes 31 to 34 are set in four directions at an angle of 90 degrees with each other for ease of explanation.
- in the example of FIG. 3, the cameras 101A to 101D are installed at the positions indicated by the marks in the figure (for example, on the wall surfaces in the four directions of the room), and the postures of the cameras 101A to 101D are set so as to capture the directions indicated by the arrows A to D from those positions.
- the stereoscopic image generating unit 3 sets the projection planes 31 to 34 in the shooting direction according to the mounting positions and mounting postures of the cameras 101A to 101D. Information about the mounting positions and mounting postures of the cameras 101A to 101D is registered in the stereoscopic image generation unit 3 by prior calibration.
- the stereoscopic image generation unit 3 projects the value of each pixel of the moving object included in the plurality of captured images input by the image input unit 1 (in the example of FIG. 3, the captured images of the four cameras 101A to 101D) in the direction of the projection plane 31 to 34 corresponding to each captured image according to the depth information, thereby generating a stereoscopic image in which the moving objects included in the plurality of captured images are combined into one.
- for example, the stereoscopic image generation unit 3 draws the value of each pixel of the moving object included in the captured image input from the first camera 101A at the position obtained by projecting it from the position of the first camera 101A in the direction of arrow A toward the projection plane 31, by the distance indicated by the depth information.
- the value of each pixel to be drawn may be an RGB value originally possessed by the captured image, or may be a binary value or a gray scale value.
- the stereoscopic image generation unit 3 performs the same process on the captured image input to the image input unit 1 from the second to fourth cameras 101B to 101D.
- since the same shooting target area is shot by the plurality of cameras 101A to 101D, the pixel values of the moving object included in the respective captured images are combined on one image; that is, a plurality of pixel values are projected from a plurality of directions onto the same position.
- in this case, the stereoscopic image generation unit 3 preferentially draws the pixel that comes to the front of the synthesized stereoscopic image when viewed from the virtual viewpoint (a point in front of the display screen) from which the user views it.
- for example, when the virtual viewpoint is on the side of the first camera 101A and the second camera 101B, the pixel values included in the images captured by the first camera 101A and the second camera 101B are drawn with priority over the pixel values included in the images captured by the third camera 101C and the fourth camera 101D. Which of the image captured by the first camera 101A and the image captured by the second camera 101B takes precedence can be determined arbitrarily; for example, the shorter distance indicated by the depth information can be given priority.
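- the projection just described can be sketched as follows; the point cloud is accumulated over all cameras, with the per-pixel ray directions assumed to come from the prior calibration (the field names such as 'origin' and 'ray_dirs' are illustrative, not part of this publication).

```python
# Each moving-object pixel is pushed from its camera's position along that
# pixel's viewing ray by the distance in the depth map, producing one
# combined point cloud for the shooting target area.
import numpy as np

def project_to_point_cloud(cameras, images, depths, masks):
    """cameras: list of dicts with 'origin' (3,) and 'ray_dirs' (HxWx3 unit
    rays from calibration); images/depths/masks: per-camera HxW(x3) arrays."""
    points, colors = [], []
    for cam, img, depth, mask in zip(cameras, images, depths, masks):
        ys, xs = np.nonzero(mask)            # moving-object pixels only
        rays = cam['ray_dirs'][ys, xs]       # (N,3) unit rays per pixel
        dist = depth[ys, xs][:, None]        # (N,1) depth per pixel
        points.append(cam['origin'] + rays * dist)
        colors.append(img[ys, xs])           # RGB value, or gray/binary
    return np.concatenate(points), np.concatenate(colors)
```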
- the bird's-eye view image generation unit 4 combines the three-dimensional image of the moving object generated by the three-dimensional image generation unit 3 with a spatial image representing the space of the shooting target region, thereby generating a bird's-eye view image of the shooting target region.
- the spatial image is a stereoscopic image that three-dimensionally represents the shooting target region, and a spatial image that is created in advance is stored in the spatial image storage unit 8.
- the spatial images stored in the spatial image storage unit 8 include three-dimensional representations of the spaces of individual shooting target areas (hereinafter referred to as individual areas) such as rooms, corridors, stairs, and elevators, as well as three-dimensional representations of the spaces of areas in which a plurality of shooting target areas are combined (hereinafter referred to as composite areas), such as an entire floor.
- the user can arbitrarily specify which spatial image the stereoscopic images of which shooting target areas are combined with. That is, by operating an operation unit (not shown), the user can designate which individual area or composite area, out of the entire building, an entire floor, and the rooms, corridors, stairs, and elevators within a floor, is displayed on the display 400 as the overhead image.
- the operation accepting unit 7 accepts an operation for designating this area, and notifies the overhead image generating unit 4 of the designated content.
- the bird's-eye view image generation unit 4 reads a spatial image of a designated region (hereinafter referred to as a display target region) from the spatial image storage unit 8.
- then, the overhead image generation unit 4 generates the overhead image by synthesizing the stereoscopic image of the moving object, generated by the stereoscopic image generation unit 3 for the display target area, with the read spatial image.
- the operation reception unit 7 may notify the stereoscopic image generation unit 3 of the contents of the area designation, and generate a stereoscopic image of the moving object only for the designated display target area.
- the display control unit 5 controls the display 400 to display the overhead view image generated by the overhead view image generation unit 4. As described above, in this embodiment, the user can specify which region of the overhead view image is to be displayed on the display 400 by operating the operation unit.
- the display control unit 5 controls the display 400 to display the overhead image generated by the stereoscopic image generation unit 3 and the overhead image generation unit 4 for the designated display target area.
- here, the user can also move the position of the virtual viewpoint from which the overhead image is viewed, thereby arbitrarily switching to an overhead image viewed from various angles and displaying it on the display 400.
- the operation receiving unit 7 receives the operation for moving the viewpoint and notifies the specified content to the stereoscopic image generating unit 3 and the overhead image generating unit 4.
- when such an operation is performed, the stereoscopic image generation unit 3 sets the projection planes 31 to 34 as seen from the designated viewpoint position and generates the stereoscopic image of the moving object by the processing described above. Moreover, the overhead image generation unit 4 converts the spatial image read from the spatial image storage unit 8 into a stereoscopic image viewed from the designated viewpoint position, and then synthesizes with it the stereoscopic image of the moving object generated by the stereoscopic image generation unit 3, thereby generating the overhead image.
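- the viewpoint-dependent drawing can be sketched as a simple point splat, assuming a pinhole model with illustrative intrinsics (f, cx, cy); drawing far points first and near points last realizes the drawing priority described above. In a full implementation the spatial image would be rendered first and the point cloud composited over it.

```python
# Rotate the combined point cloud into the virtual viewpoint's frame and
# splat it back to front, so points nearer the viewpoint overwrite points
# behind them.
import numpy as np

def render_overhead(points, colors, R, t, f=500.0, cx=320, cy=240, size=(480, 640)):
    cam = points @ R.T + t                    # world -> viewpoint coordinates
    in_front = cam[:, 2] > 0
    cam, colors = cam[in_front], colors[in_front]

    u = (f * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + cy).astype(int)
    ok = (0 <= u) & (u < size[1]) & (0 <= v) & (v < size[0])
    u, v, z, c = u[ok], v[ok], cam[ok, 2], colors[ok]

    order = np.argsort(-z)                    # far first, near drawn last
    image = np.zeros((*size, 3), dtype=np.uint8)
    image[v[order], u[order]] = c[order]      # nearest point wins per pixel
    return image
```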
- the display control unit 5 causes the display 400 to display an overhead image viewed from the viewpoint designated in this way.
- FIG. 4 is a diagram illustrating an example of an overhead image displayed on the display 400.
- FIG. 4A shows an overhead image displayed when the entire building is designated as the display target area.
- FIG. 4B shows a bird's-eye view image displayed when a specific floor (seventh floor here) is designated as the display target area.
- in FIG. 4(a), 41-1 to 41-9 are spatial images showing the floors in the building, and 42-1 and 42-2 are spatial images showing the elevators in the building.
- a spatial image of the entire building is composed of the spatial images 41-1 to 41-9 of the floors and the spatial images 42-1 and 42-2 of the elevators.
- the space image of the entire building is not a rigorous representation of the structure and positional relationship of each floor or elevator, but a simplified representation.
- reference numeral 43 denotes a stereoscopic image of the moving object generated by the stereoscopic image generating unit 3.
- a shooting target area is set in each room or the like on each floor, and a stereoscopic image of the moving object is generated by the stereoscopic image generation unit 3 for each shooting target area.
- as shown in FIG. 4(a), when displaying a bird's-eye view of a composite area such as the entire building, the stereoscopic images of the moving objects generated for the respective shooting target areas are synthesized at the corresponding positions (the rooms of the respective floors) in the spatial image of the entire building.
- in FIG. 4(b), 44-1 to 44-5 indicate rooms in the floor, and 45 indicates a corridor in the floor.
- the rooms 44-1 to 44-5 and the corridor 45 together constitute the spatial image of the entire floor, which represents the structure and positional relationship of the rooms and the corridor with a certain degree of accuracy.
- reference numeral 46 denotes a stereoscopic image of the moving object generated by the stereoscopic image generation unit 3.
- the stereoscopic image of the moving object generated by the stereoscopic image generation unit 3, for each shooting target area set for each room or corridor in the floor, is synthesized at the corresponding position in the spatial image of the entire floor.
- the related information generating unit 6 generates related information that associates the position of each pixel constituting the stereoscopic image of the moving object with the captured image that is the projection source of the value of the pixel, and stores the related information in the related information storage unit 9.
- FIG. 5 is a diagram illustrating an example of related information stored in the related information storage unit 9.
- the related information includes a moving object ID uniquely assigned to the stereoscopic image of each moving object, the coordinate positions on the overhead image of the pixels constituting the stereoscopic image, and the image IDs uniquely assigned to the captured images that are the projection sources of the values of those pixels.
- One stereoscopic image is generated from a plurality of captured images captured by a plurality of cameras. That is, there are a plurality of photographed images that are projection sources of the values of the pixels constituting the stereoscopic image. Therefore, a plurality of image IDs are stored for one moving object ID.
- the moving object ID and the image ID are given when the stereoscopic image generating unit 3 generates a stereoscopic image of the moving object from the captured image, and notified to the related information generating unit 6.
- the coordinate position of each pixel constituting the stereoscopic image on the overhead image is specified when the overhead image generation unit 4 generates the overhead image by synthesizing the stereoscopic image with the spatial image, and the related information generation unit 6 is notified.
- when the user designates an arbitrary position on the overhead image, the display control unit 5 refers to the related information stored in the related information storage unit 9 and controls the display 400 to display the captured image associated with the designated position.
- specifically, the display control unit 5 refers to the related information stored in the related information storage unit 9 and determines whether or not the position designated on the overhead image is a position on the stereoscopic image of a moving object. If it is, the display control unit 5 inputs the captured image specified by the image ID associated with the designated position from the image input unit 1 and controls the display 400 to display it.
- when a plurality of image IDs are associated with the designated position, a captured image corresponding to any one of the image IDs is selectively displayed on one screen.
- captured images corresponding to a plurality of image IDs may be divided and displayed on one screen.
- the selection rule can be arbitrarily set. For example, a rule of selecting a captured image corresponding to the projection plane closest to the viewpoint position can be considered.
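- a minimal sketch of this related-information table (FIG. 5) and the lookup performed when a position on the overhead image is clicked might look like the following; all field names are illustrative.

```python
# Moving object ID -> pixel coordinates on the overhead image and the IDs
# of the captured images that were the projection sources of those pixels.
related_info = {
    "obj_001": {
        "pixels": {(120, 340), (121, 340), (121, 341)},   # (x, y) on overhead image
        "image_ids": ["cam101A_f0042", "cam101B_f0042"],  # projection sources
    },
}

def find_source_images(clicked_xy):
    """Return (object ID, source image IDs) behind a clicked pixel, or None."""
    for obj_id, info in related_info.items():
        if clicked_xy in info["pixels"]:
            # Several source images exist per object; the caller selects one,
            # e.g. the image whose projection plane is nearest the viewpoint.
            return obj_id, info["image_ids"]
    return None
```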
- FIG. 6 is a flowchart showing an operation example of the image processing apparatus 300 which is one component of the image processing system according to the first embodiment configured as described above. Note that the flowchart shown in FIG. 6 starts when the image processing apparatus 300 is turned on.
- first, the image processing apparatus 300 inputs captured images for one frame, taken by the plurality of imaging units 101 and 102, from the plurality of image input devices 201 and 202 (step S1).
- next, the operation reception unit 7 determines whether or not it has received an operation designating the area to be displayed on the display 400 as the overhead image, from among the entire building, an entire floor, and the rooms, corridors, stairs, and elevators within a floor (step S2).
- when the operation reception unit 7 receives an operation designating a display target area, the stereoscopic image generation unit 3 and the overhead image generation unit 4 change their settings so as to generate the stereoscopic image and the overhead image for the designated display target area (step S3).
- when the operation reception unit 7 has not received such an operation, the stereoscopic image generation unit 3 and the overhead image generation unit 4 leave the settings unchanged. In the initial state, for example, the composite area of the entire building is set as the display target area.
- next, the operation reception unit 7 determines whether or not it has received a user operation for moving the position of the virtual viewpoint from which the overhead image is viewed (step S4).
- when such an operation has been received, the stereoscopic image generation unit 3 and the overhead image generation unit 4 set the projection planes 31 to 34 as seen from the moved viewpoint position (step S5).
- when the operation reception unit 7 has not received an operation for moving the viewpoint, the stereoscopic image generation unit 3 and the overhead image generation unit 4 leave the projection planes 31 to 34 unchanged.
- the depth information calculation unit 2 detects a moving object from the photographed image for each of the plurality of photographed images input in Step S1 for the display target region (Step S6). Then, for each moving object detected from each captured image, depth information representing the distance from the camera is calculated for each pixel (step S7).
- next, the stereoscopic image generation unit 3 projects, for each shooting target area included in the display target area, the value of each pixel of the moving object detected from the plurality of captured images in the direction of the projection planes 31 to 34 according to the depth information, thereby generating a stereoscopic image of the moving object for each shooting target area (step S8).
- the bird's-eye view image generation unit 4 then synthesizes the stereoscopic image of the moving object generated for each shooting target area in step S8 with the spatial image of the display target area (an individual area consisting of one shooting target area, or a composite area consisting of a plurality of shooting target areas) to generate a bird's-eye view image of the display target area (step S9). Then, the display control unit 5 displays the generated overhead image on the display 400 (step S10).
- next, the operation reception unit 7 determines whether or not it has received a user operation designating an arbitrary position on the overhead image (step S11).
- when no such operation has been received, the operation reception unit 7 further determines whether or not an operation for turning off the power of the image processing apparatus 300 has been received (step S12).
- if the power-off operation has not been received, the process returns to step S1 to input the captured images of the next frame.
- if the power-off operation has been received, the processing of the flowchart shown in FIG. 6 ends.
- on the other hand, when an operation designating a position on the overhead image has been received, the display control unit 5 refers to the related information stored in the related information storage unit 9 and determines whether or not the designated position is a position on the stereoscopic image of a moving object (step S13).
- if the position designated on the overhead image is not a position on the stereoscopic image of a moving object, the process proceeds to step S12.
- if it is, the display control unit 5 displays the captured image specified by the image ID associated with the designated position on the display 400 (step S14).
- thereafter, the operation reception unit 7 determines whether or not an operation for returning from the captured image display to the overhead image display has been received (step S15). When this operation is received, the process proceeds to step S12. When it has not been received, the operation reception unit 7 further determines whether or not a power-off operation has been received (step S16).
- if the power-off operation has not been received, the image processing apparatus 300 inputs the next frame of the captured image corresponding to the image ID specified in step S14 (step S17). Thereafter, the process returns to step S14, and the display control unit 5 displays the newly input captured image on the display 400.
- when the operation reception unit 7 receives the power-off operation, the processing of the flowchart shown in FIG. 6 ends.
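- the control flow of FIG. 6 can be condensed into the following skeleton; the helper objects stand in for the units described above and all names are illustrative, only the branching structure of steps S1 to S17 is shown.

```python
# Condensed sketch of the FIG. 6 flowchart. `inputs`, `ops`, `pipeline`,
# and `display` are hypothetical stand-ins for the image input, operation
# reception, image processing, and display control functions.
def run(inputs, ops, pipeline, display):
    mode, selected = "overhead", None
    while True:
        if mode == "overhead":
            frames = inputs.next_frames()                       # S1
            if ops.area_designated():                           # S2
                pipeline.set_display_area(ops.area())           # S3
            if ops.viewpoint_moved():                           # S4
                pipeline.set_projection_planes(ops.viewpoint()) # S5
            depth = pipeline.detect_and_depth(frames)           # S6-S7
            cloud = pipeline.stereoscopic_images(frames, depth) # S8
            display.show(pipeline.overhead_image(cloud))        # S9-S10
            if ops.position_designated():                       # S11
                selected = pipeline.lookup_source(ops.position())  # S13
                if selected:
                    mode = "captured"                           # -> S14
            if ops.power_off():                                 # S12
                break
        else:
            display.show(inputs.frame_for(selected))            # S14, S17
            if ops.back_to_overhead():                          # S15
                mode = "overhead"
            if ops.power_off():                                 # S16
                break
```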
- as described above in detail, in the first embodiment, the stereoscopic image of the moving object is generated as point cloud data formed by projecting the value of each pixel in the direction of the projection planes 31 to 34 according to the depth information detected from the captured images, and the stereoscopic image is combined with the spatial image of the display target area and displayed as one overall overhead image.
- therefore, even if the number of shooting target areas increases in a large-scale building with a complicated floor configuration, the large number of captured images is not displayed on divided screens or switched by time division on the display 400. That is, a single bird's-eye view image, in which the three-dimensional point cloud images of the individual shooting target areas are synthesized into the entire space of the building, is displayed on the display 400.
- note that a polyhedral model is generally used for rendering three-dimensional images, but continuous and accurate data is required to convert point cloud data into a polyhedral model.
- polyhedral modeling is difficult with highly intermittent or low-accuracy data, and integrating such data in a three-dimensional space is also difficult. According to the first embodiment, an image of a moving object that humans can identify can be drawn even from point cloud data that is highly intermittent and of low accuracy.
- further, according to the first embodiment, the stereoscopic image is synthesized by projecting the value of each pixel of the moving object included in each captured image, obtained by capturing the shooting target area from a plurality of angles, in the direction of the projection plane corresponding to that captured image. Therefore, by moving the position of the virtual viewpoint from which the overhead image including the stereoscopic image of the moving object is viewed, and thereby changing the orientation of the projection planes, the display 400 can be arbitrarily switched to an overhead image viewed from various angles.
- in this way, in the first embodiment, an overhead image of the display target area, including the stereoscopic image of the moving object processed from the plurality of captured images into point cloud data, is displayed on the display 400, and the viewpoint of the overhead image can be switched arbitrarily.
- in addition, any individual area or composite area, out of the entire building, an entire floor, and the rooms, corridors, stairs, and elevators within a floor, can be arbitrarily selected as the display target area of the overhead image.
- therefore, while confirming the movement of moving objects in each area from a single screen showing a bird's-eye view of a composite area such as the entire building or an entire floor, it is possible to instantly switch, as needed, to a bird's-eye view of an individual room, corridor, staircase, or elevator and check the movement of a moving object more precisely.
- furthermore, in the first embodiment, each point (each pixel) of the point cloud data constituting the stereoscopic image of the moving object has related information associating its position on the overhead image with the captured image that is the projection source of the pixel's value. Therefore, by designating the position of a moving object displayed on the bird's-eye view image, it is possible to switch to and display the original captured image in which the designated moving object appears. This also makes it possible to monitor the moving object more precisely using the captured image itself, before it is processed into point cloud data.
- FIG. 7 is a diagram illustrating a configuration example of an image processing system according to the second embodiment.
- the second embodiment also shows an example in which the image processing system of the present invention is implemented in a monitoring system.
- the monitoring system includes, in addition to the components shown in FIG. 1, a mobile terminal 600 such as a tablet, a notebook personal computer, or a smartphone.
- the portable terminal 600 includes a display that displays an image.
- the portable terminal 600 is connected to the image processing apparatus 300 via the local network 500.
- FIG. 8 is a block diagram illustrating a functional configuration example of the image processing system according to the second embodiment.
- components having the same reference numerals as those shown in FIG. 2 have the same functions, so redundant description is omitted here.
- in the second embodiment, the image processing system further includes, as its functional configuration, a position/orientation detection unit 11, a display target area specifying unit 12, an image transmission unit 15, and a display control unit 16.
- a stereoscopic image generation unit 13 and an overhead image generation unit 14 are provided instead of the stereoscopic image generation unit 3 and the overhead image generation unit 4.
- the functions of the position/orientation detection unit 11 and the display control unit 16 are provided in the mobile terminal 600 shown in FIG. 7. Further, the functions of the display target area specifying unit 12, the stereoscopic image generation unit 13, the overhead image generation unit 14, and the image transmission unit 15 are provided in the image processing apparatus 300 shown in FIG. 7.
- each of the functions of the display target area specifying unit 12, the stereoscopic image generation unit 13, the overhead image generation unit 14, and the image transmission unit 15 can be configured by any of hardware, DSP, or software.
- when configured by software, each of the above functions is actually realized by the CPU, RAM, and ROM of the image processing apparatus 300 and the operation of an image processing program stored in a recording medium such as the RAM, ROM, a hard disk, or semiconductor memory.
- the position / orientation detection unit 11 detects the current position of the mobile terminal 600 and the current orientation to which the mobile terminal 600 is directed.
- the position / orientation detection unit 11 is constituted by, for example, a GPS receiver.
- the position/orientation detection unit 11 constantly detects the current position and current orientation of the mobile terminal 600 and constantly transmits the detected information to the image processing apparatus 300 via the local network 500.
- the display target area specifying unit 12 of the image processing apparatus 300 specifies, based on the current position and current orientation of the mobile terminal 600 detected by the position/orientation detection unit 11, the shooting target area to be displayed as the overhead image, from among the plurality of shooting target areas, as the display target area.
- FIG. 9 is a diagram for explaining processing by the display target area specifying unit 12.
- FIG. 9 shows the same floor plan as FIG. 4(b).
- suppose that a monitor carrying the mobile terminal 600 is in the corridor 45 of the floor and points the mobile terminal 600 in the direction of the room 44-2.
- at this time, the current position PP and the current orientation PD of the mobile terminal 600 are detected by the position/orientation detection unit 11 and transmitted to the image processing apparatus 300.
- the display target area specifying unit 12 of the image processing apparatus 300 specifies the shooting target area of the room 44-2, which lies in the direction of the current orientation PD from the current position PP detected by the position/orientation detection unit 11, as the display target area for which the overhead image is to be displayed.
- the stereoscopic image generation unit 13 has the following function in addition to the functions described in the first embodiment. That is, when a display target area is specified by the display target area specifying unit 12, the stereoscopic image generation unit 13 generates stereoscopic images for the shooting target areas included in the specified display target area by the method described in the first embodiment. In the example of FIG. 9, the only shooting target area included in the display target area is that of the room 44-2.
- the angle at which the projection planes 31 to 34 for generating a stereoscopic image are set can be arbitrarily determined.
- for example, the angles of the projection planes 31 to 34 may be set according to the current position PP and the current orientation PD so that the overhead image is displayed as viewed from the current position PP detected by the position/orientation detection unit 11 toward the current orientation PD.
- the projection planes 31 to 34 may be set in advance at a predetermined angle so that the overhead image is displayed at a fixed angle as shown in FIG. 4B, for example.
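- the area specification of FIG. 9 can be sketched as a ray march over an illustrative floor map: starting from PP, step in the direction PD and return the first room entered. The room bounds below are hypothetical axis-aligned rectangles, not data from this publication.

```python
# Walk along the viewing ray from the terminal's position PP in the
# direction PD and pick the first room whose floor rectangle it enters.
rooms = {"room44_2": (10.0, 0.0, 18.0, 6.0),   # (xmin, ymin, xmax, ymax)
         "room44_3": (18.0, 0.0, 26.0, 6.0)}

def specify_display_area(pp, pd, step=0.25, max_range=30.0):
    """pp: (x, y) current position; pd: unit (dx, dy) current orientation."""
    x, y = pp
    dx, dy = pd
    t = 0.0
    while t < max_range:                       # march along the viewing ray
        for name, (x0, y0, x1, y1) in rooms.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name                    # first area the ray enters
        x, y, t = x + dx * step, y + dy * step, t + step
    return None
```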
- the overhead image generation unit 14 has the following function in addition to the functions described in the first embodiment. That is, when a display target area is specified by the display target area specifying unit 12, the overhead image generation unit 14 generates a bird's-eye image for the specified display target area. Then, the overhead image generation unit 14 supplies the generated overhead image to the image transmission unit 15.
- the image transmission unit 15 transmits the overhead image generated by the overhead image generation unit 14 to the mobile terminal 600.
- the display control unit 16 of the mobile terminal 600 displays the overhead image transmitted by the image transmission unit 15, that is, the overhead image of the display target area lying in the direction of the current orientation PD from the current position PP detected by the position/orientation detection unit 11, on the display (not shown) of the mobile terminal 600.
- according to the second embodiment configured as described above, the mobile terminal 600 can display the movement of a moving object on the other side of a wall or on another floor as if it were seen through. Accordingly, when tracking a specific person, for example, the monitor can check the movement of that person in real time from a place different from where the person is, so the monitoring capability is dramatically improved.
- the present invention is not limited to this.
- a portable display such as HUD (Head-Up Display) may be used.
- the portable display is preferably provided with a GPS receiver and a data transmission / reception function.
- FIG. 10 is a block diagram illustrating a functional configuration example of the image processing system according to the third embodiment.
- components having the same reference numerals as those shown in FIG. 2 have the same functions, and thus redundant description is omitted here.
- in the third embodiment, the image processing system further includes, as its functional configuration, a motion detection unit 21, a motion pattern storage unit 22, a motion determination unit 23, an alarm generation unit 24, and a moving object tracking unit 26. Further, a display control unit 25 is provided instead of the display control unit 5.
- the motion detection unit 21, the motion pattern storage unit 22, the motion determination unit 23, the alarm generation unit 24, the display control unit 25, and the moving object tracking unit 26 are all provided in the image processing apparatus 300 illustrated in FIG. 1.
- each of these functions can be configured by any of hardware, DSP, or software.
- when configured by software, each of the above functions is actually realized by the CPU, RAM, and ROM of the image processing apparatus 300 and the operation of an image processing program stored in a recording medium such as the RAM, ROM, a hard disk, or semiconductor memory.
- the motion detection unit 21 detects the movement of the moving object represented by the stereoscopic image by analyzing the change on the time axis of the stereoscopic image generated by the stereoscopic image generation unit 3. That is, since the stereoscopic image generation unit 3 sequentially generates the stereoscopic image using the captured images sequentially captured by the imaging units 101 and 102 in accordance with a predetermined frame rate, the motion detection unit 21 can detect the sequential stereoscopic image. By detecting the inter-frame difference, the movement of the moving object represented by the stereoscopic image is detected.
- the motion detection unit 21 can detect even the motion of the skeleton structure of the human body.
- the motion pattern storage unit 22 stores in advance data of specific motion patterns related to moving objects. For example, when a person carries a heavy load in the right hand, the right shoulder rises and the left shoulder drops as the person unconsciously pulls the load up; when a person walks carrying a heavy load, the vertical movement of the body becomes larger than usual. Data on such movement patterns is stored in the motion pattern storage unit 22.
- the motion determination unit 23 determines whether or not the motion of the moving object detected by the motion detection unit 21 matches the motion pattern stored in the motion pattern storage unit 22.
- here, "match" is a concept that includes not only a perfect match but also the case where the degree of matching is a predetermined value or more.
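- such threshold-based matching can be sketched as follows, using an illustrative per-frame feature (vertical movement of the point-cloud centroid, as in the walking example above) and a normalized-correlation score; the feature choice and threshold are assumptions for illustration.

```python
# Summarize a tracked object's movement as a feature sequence and compare
# it against a stored pattern; "match" means the similarity clears a
# threshold rather than being exact.
import numpy as np

def motion_feature(point_clouds):
    """Per-frame vertical centroid movement of a tracked object's points."""
    centroids = np.array([pc.mean(axis=0) for pc in point_clouds])
    return np.diff(centroids[:, 2])            # change in height per frame

def matches_pattern(feature, pattern, threshold=0.8):
    """Normalized correlation against a pattern from the motion pattern
    storage unit; a score of `threshold` or more counts as a match."""
    n = min(len(feature), len(pattern))
    f, p = feature[:n], pattern[:n]
    denom = np.linalg.norm(f) * np.linalg.norm(p)
    if denom == 0:
        return False
    return float(np.dot(f, p) / denom) >= threshold
```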
- the alarm generation unit 24 issues a predetermined alarm when the motion determination unit 23 determines that the movement of the moving object matches a motion pattern. For example, as illustrated in FIG. 11, the alarm generation unit 24 controls the display control unit 25 to display a frame 60 around the moving object determined to match the motion pattern. An audible alarm may also be sounded together with the display of the frame 60.
- the moving object tracking unit 26 tracks the movement on the time axis of the moving object determined by the movement determining unit 23 to match the movement with the movement pattern.
- here, the motion of the skeleton structure of the human body detected by the motion detection unit 21 is unique to each person to some extent and can be used to establish continuity. As a result, the same moving object can be identified, based on the uniqueness of its movement, across the stereoscopic images sequentially generated from the captured images of one shooting target area, and the moving object can thus be tracked.
- even when a moving object moves across shooting target areas, it can be tracked continuously. That is, the stereoscopic image generated for the shooting target area of the room 44-2 while the moving object is in the room 44-2, and the stereoscopic image generated for the shooting target area of the corridor 45 after the moving object has moved to the corridor 45, can easily be identified as belonging to the same moving object based on the uniqueness of its movement. Therefore, a moving object that moves across a plurality of shooting target areas can be tracked reliably.
- the tracking result of the moving object by the moving object tracking unit 26 is notified to the alarm generation unit 24, for example.
- the alarm generation unit 24 receives the tracking result and causes the alarm frame 60 to be displayed so as to follow the moving object determined by the motion determination unit 23 to match the motion pattern.
- note that, in the above description, the stereoscopic images are analyzed in real time, but the present invention is not limited to this.
- the bird's-eye view images sequentially generated by the bird's-eye view image generation unit 4 may be stored in a database, and the analysis process may be performed afterwards on the stored bird's-eye view image.
- further, the motion detection unit 21 may detect a characteristic motion of a person who acted in the past and store that motion in the motion pattern storage unit 22 as motion pattern data.
- in the first to third embodiments, examples in which the image processing system is applied to a monitoring system have been described, but the present invention is not limited to this.
- for example, the present invention can also be applied to systems such as the analysis of store visitors at retail stores (movement of people, recognition of purchase behavior, etc.) and safety and behavior confirmation at nursing care facilities and hospitals.
- in the first to third embodiments, stereo cameras are used to obtain depth information, but the present invention is not limited to this.
- for example, the distance from the imaging units 101 and 102 to the moving object may be measured using a sensor that measures distance by radar, infrared rays, ultrasonic waves, or the like.
- when a device that outputs distance information but no color information, such as a radar or infrared sensor, is used, the pixel values to be projected can be generated from the distance information, for example as color information corresponding to the distance.
- each of the first to third embodiments described above is merely a specific example for carrying out the present invention, and the technical scope of the present invention should not be construed as being limited by them. That is, the present invention can be implemented in various forms without departing from its gist or main features.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Closed-Circuit Television Systems (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Analysis (AREA)
Abstract
Description
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a diagram showing a configuration example of the image processing system according to the first embodiment. The first embodiment shows an example in which the image processing system of the present invention is implemented in a monitoring system.
Hereinafter, a second embodiment of the present invention will be described with reference to the drawings. FIG. 7 is a diagram showing a configuration example of the image processing system according to the second embodiment. The second embodiment also shows an example in which the image processing system of the present invention is implemented in a monitoring system.
Hereinafter, a third embodiment of the present invention will be described with reference to the drawings. The configuration of the image processing system according to the third embodiment is the same as in FIG. 1 or FIG. 6. That is, the image processing system according to the third embodiment is an application example of the first or second embodiment described above. Below, the third embodiment is described as an application example of the first embodiment.
Claims (6)
- An image processing system comprising: an image input unit that inputs images respectively captured by a plurality of cameras installed so as to capture a shooting target area from a plurality of angles; a depth information calculation unit that calculates, for each pixel, depth information representing the distance from the camera of a moving object included in the plurality of captured images input by the image input unit; a stereoscopic image generation unit that sets a plurality of projection planes according to the relative angular relationship between the plurality of cameras, and generates a stereoscopic image in which the moving objects included in the plurality of captured images are combined into one, by projecting the value of each pixel of the moving object included in the plurality of captured images input by the image input unit in the direction of the projection plane corresponding to each captured image according to the depth information; an overhead image generation unit that generates an overhead image of the shooting target area by combining the stereoscopic image of the moving object generated by the stereoscopic image generation unit with a spatial image representing the space of the shooting target area; and a display control unit that displays the overhead image generated by the overhead image generation unit on a display.
- The image processing system according to claim 1, further comprising a related information generation unit that generates related information associating the position of each pixel constituting the stereoscopic image with the captured image that is the projection source of the value of that pixel, and stores the related information in a related information storage unit, wherein, when a position on the stereoscopic image is designated, the display control unit refers to the related information stored in the related information storage unit and displays the captured image associated with the designated position on the display.
- The image processing system according to claim 1, wherein the display is a portable display or a display provided in a mobile terminal, the system further comprising: a position/orientation detection unit that detects the current position of the portable display or the mobile terminal and the current orientation in which the display is pointed; and a display target area specifying unit that specifies, as a display target area, the shooting target area to be displayed as the overhead image from among the plurality of shooting target areas, based on the current position and the current orientation detected by the position/orientation detection unit, wherein the stereoscopic image generation unit generates the stereoscopic image for the shooting target area included in the display target area specified by the display target area specifying unit, the overhead image generation unit generates the overhead image for the display target area specified by the display target area specifying unit, and the display control unit displays the overhead image of the display target area specified by the display target area specifying unit on the display.
- The image processing system according to claim 1, further comprising: a motion detection unit that detects the movement of the moving object represented by the stereoscopic image by analyzing changes on the time axis of the stereoscopic image generated by the stereoscopic image generation unit; a motion pattern storage unit that stores data of a specific motion pattern related to the moving object; a motion determination unit that determines whether or not the movement of the moving object detected by the motion detection unit matches the motion pattern stored in the motion pattern storage unit; and an alarm generation unit that issues an alarm when the motion determination unit determines that the movement of the moving object matches the motion pattern.
- The image processing system according to claim 4, further comprising a moving object tracking unit that tracks, on the time axis, the movement of the moving object determined by the motion determination unit to match the motion pattern.
- An image processing program for causing a computer to function as: depth information calculation means for calculating, for each pixel, depth information representing the distance from the camera of a moving object included in a plurality of captured images respectively input from a plurality of cameras installed so as to capture a shooting target area from a plurality of angles, by analyzing the plurality of captured images; stereoscopic image generation means for setting a plurality of projection planes according to the relative angular relationship between the plurality of cameras, and generating a stereoscopic image in which the moving objects included in the plurality of captured images are combined into one, by projecting the value of each pixel of the moving object included in the plurality of captured images respectively input from the plurality of cameras in the direction of the projection plane corresponding to each captured image according to the depth information; overhead image generation means for generating an overhead image of the shooting target area by combining the stereoscopic image of the moving object generated by the stereoscopic image generation means with a spatial image representing the space of the shooting target area; and display control means for displaying the overhead image generated by the overhead image generation means on a display.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015509805A JP6062039B2 (ja) | 2013-04-04 | 2013-04-04 | Image processing system and image processing program |
| PCT/JP2013/060303 WO2014162554A1 (ja) | 2013-04-04 | 2013-04-04 | Image processing system and image processing program |
| US14/781,642 US9832447B2 (en) | 2013-04-04 | 2013-04-04 | Image processing system and image processing program |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2013/060303 WO2014162554A1 (ja) | 2013-04-04 | 2013-04-04 | Image processing system and image processing program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014162554A1 true WO2014162554A1 (ja) | 2014-10-09 |
Family
ID=51657885
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2013/060303 Ceased WO2014162554A1 (ja) | 2013-04-04 | 2013-04-04 | 画像処理システムおよび画像処理用プログラム |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US9832447B2 (ja) |
| JP (1) | JP6062039B2 (ja) |
| WO (1) | WO2014162554A1 (ja) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016147644A1 (en) * | 2015-03-16 | 2016-09-22 | Canon Kabushiki Kaisha | Image processing apparatus, image processing system, method for image processing, and computer program |
| JP2016201007A (ja) * | 2015-04-13 | 2016-12-01 | Ihi運搬機械株式会社 | Remote maintenance system for conveying equipment |
| WO2017029886A1 (ja) * | 2015-08-20 | 2017-02-23 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
| JPWO2018079767A1 (ja) * | 2016-10-31 | 2019-08-08 | パナソニックIpマネジメント株式会社 | Building image generation device and building image display system |
| US11146773B2 (en) | 2019-02-19 | 2021-10-12 | Media Kobo, Inc. | Point cloud data communication system, point cloud data transmitting apparatus, and point cloud data transmission method |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10057546B2 (en) | 2014-04-10 | 2018-08-21 | Sensormatic Electronics, LLC | Systems and methods for automated cloud-based analytics for security and/or surveillance |
| US11093545B2 (en) | 2014-04-10 | 2021-08-17 | Sensormatic Electronics, LLC | Systems and methods for an automated cloud-based video surveillance system |
| US11120274B2 (en) | 2014-04-10 | 2021-09-14 | Sensormatic Electronics, LLC | Systems and methods for automated analytics for security surveillance in operation areas |
| US10217003B2 (en) * | 2014-04-10 | 2019-02-26 | Sensormatic Electronics, LLC | Systems and methods for automated analytics for security surveillance in operation areas |
| WO2017013806A1 (ja) * | 2015-07-23 | 2017-01-26 | オリンパス株式会社 | Solid-state imaging device |
| US10242455B2 (en) | 2015-12-18 | 2019-03-26 | Iris Automation, Inc. | Systems and methods for generating a 3D world model using velocity data of a vehicle |
| EP3565259A4 (en) * | 2016-12-28 | 2019-11-06 | Panasonic Intellectual Property Corporation of America | DISTRIBUTION METHOD FOR THREE-DIMENSIONAL MODEL, RECEIVING METHOD FOR THREE-DIMENSIONAL MODEL, DISTRIBUTION DEVICE FOR THREE-DIMENSIONAL MODEL AND RECEIVING DEVICE FOR THREE-DIMENSIONAL MODEL |
| JP7132730B2 (ja) * | 2018-03-14 | 2022-09-07 | キヤノン株式会社 | Information processing apparatus and information processing method |
| JP7706916B2 (ja) * | 2021-04-01 | 2025-07-14 | キヤノン株式会社 | Photoelectric conversion device |
| CN115205311B (zh) * | 2022-07-15 | 2024-04-05 | 小米汽车科技有限公司 | Image processing method and apparatus, vehicle, medium, and chip |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS61226722A (ja) * | 1985-03-29 | 1986-10-08 | Canon Inc | Stereo microscope |
| US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
| JP2002031515 (ja) | 2000-07-17 | 2002-01-31 | Central Res Inst Of Electric Power Ind | Camera calibration method, apparatus using the method, and computer-readable recording medium storing a calibration program |
| JP2005173685 (ja) | 2003-12-08 | 2005-06-30 | Canon Inc | Virtual space construction apparatus and method, and image composition apparatus and method |
| US20090040309A1 (en) | 2004-10-06 | 2009-02-12 | Hirofumi Ishii | Monitoring Device |
| JP4827694B2 (ja) | 2006-11-06 | 2011-11-30 | パナソニック株式会社 | Monitoring system |
| JP2012004630A (ja) | 2010-06-14 | 2012-01-05 | Mitsubishi Electric Corp | Monitoring system |
| JP6031454B2 (ja) * | 2011-02-24 | 2016-11-24 | カデンス メディカル イメージング インコーポレイテッド | Method and apparatus for identifying latent anomalies in imaging data, and application thereof to medical images |
| WO2012124331A1 (ja) | 2011-03-17 | 2012-09-20 | パナソニック株式会社 | 3D imaging device |
| WO2013067437A1 (en) * | 2011-11-02 | 2013-05-10 | Hoffman Michael Theodor | Systems and methods for dynamic digital product synthesis, commerce, and distribution |
| US8719687B2 (en) * | 2011-12-23 | 2014-05-06 | Hong Kong Applied Science And Technology Research | Method for summarizing video and displaying the summary in three-dimensional scenes |
| US20150178321A1 (en) * | 2012-04-10 | 2015-06-25 | Google Inc. | Image-based 3d model search and retrieval |
| JP6316568B2 (ja) * | 2013-10-31 | 2018-04-25 | 株式会社トプコン | Surveying system |
| US20160012646A1 (en) * | 2014-07-10 | 2016-01-14 | Perfetch, Llc | Systems and methods for constructing a three dimensional (3d) color representation of an object |
- 2013
  - 2013-04-04 US US14/781,642 patent/US9832447B2/en not_active Expired - Fee Related
  - 2013-04-04 WO PCT/JP2013/060303 patent/WO2014162554A1/ja not_active Ceased
  - 2013-04-04 JP JP2015509805A patent/JP6062039B2/ja active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2005252831A (ja) * | 2004-03-05 | 2005-09-15 | Mitsubishi Electric Corp | Equipment monitoring support apparatus |
| JP2006109118A (ja) * | 2004-10-06 | 2006-04-20 | Matsushita Electric Ind Co Ltd | Monitoring device |
| JP2008217602A (ja) * | 2007-03-06 | 2008-09-18 | Toshiba Corp | Suspicious behavior detection system and method |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016147644A1 (en) * | 2015-03-16 | 2016-09-22 | Canon Kabushiki Kaisha | Image processing apparatus, image processing system, method for image processing, and computer program |
| JP2016174252 (ja) * | 2015-03-16 | 2016-09-29 | キヤノン株式会社 | Image processing apparatus, image processing system, image processing method, and computer program |
| US10572736B2 (en) | 2015-03-16 | 2020-02-25 | Canon Kabushiki Kaisha | Image processing apparatus, image processing system, method for image processing, and computer program |
| JP2016201007A (ja) * | 2015-04-13 | 2016-12-01 | Ihi運搬機械株式会社 | Remote maintenance system for conveying equipment |
| WO2017029886A1 (ja) * | 2015-08-20 | 2017-02-23 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
| JPWO2018079767A1 (ja) * | 2016-10-31 | 2019-08-08 | パナソニックIpマネジメント株式会社 | Building image generation device and building image display system |
| US11146773B2 (en) | 2019-02-19 | 2021-10-12 | Media Kobo, Inc. | Point cloud data communication system, point cloud data transmitting apparatus, and point cloud data transmission method |
Also Published As
| Publication number | Publication date |
|---|---|
| US9832447B2 (en) | 2017-11-28 |
| JPWO2014162554A1 (ja) | 2017-02-16 |
| JP6062039B2 (ja) | 2017-01-18 |
| US20160119607A1 (en) | 2016-04-28 |
Similar Documents
| Publication | Title |
|---|---|
| JP6062039B2 (ja) | Image processing system and image processing program |
| KR102366293B1 (ko) | Augmented-reality-based site monitoring system and method using a digital twin |
| CN113196208B (zh) | Automated control of image acquisition using acquisition-device sensors |
| JP6055823B2 (ja) | Surveillance camera control device and video surveillance system |
| CN111373347B (zh) | Apparatus, method and computer program for providing virtual reality content |
| JP2013076924A5 (ja) | |
| WO2012056443A2 (en) | Tracking and identification of a moving object from a moving sensor using a 3D model |
| US11425350B2 (en) | Image display system |
| JP2018032950A (ja) | Information processing apparatus, information processing method, and computer program |
| US11736802B2 (en) | Communication management apparatus, image communication system, communication management method, and recording medium |
| EP3425905A1 (en) | Apparatus and method for sensing an environment |
| WO2014182898A1 (en) | User interface for effective video surveillance |
| US20180350216A1 (en) | Generating Representations of Interior Space |
| JP2020194493A (ja) | Monitoring system and monitoring method for care facilities or hospitals |
| JP2010093783A5 (ja) | |
| CN113228117A (zh) | Authoring apparatus, authoring method, and authoring program |
| JP6920776B2 (ja) | Monitoring support system and monitoring support method |
| Chen et al. | Camera networks for healthcare, teleimmersion, and surveillance |
| WO2018051310A1 (en) | System and method for remotely assisted user-orientation |
| JP2016092693A (ja) | Imaging apparatus, imaging apparatus control method, and program |
| JP2019101476A (ja) | Operation guidance system |
| CN117716419A (zh) | Image display system and image display method |
| JP2016134765A (ja) | Monitoring system |
| WO2015141214A1 (ja) | Apparatus for processing label information of multi-viewpoint images and method for processing the label information |
| CN114600067A (zh) | Supervised setup of a control device having an imager |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13880786; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2015509805; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 14781642; Country of ref document: US |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13880786; Country of ref document: EP; Kind code of ref document: A1 |