WO2018035667A1 - Display method and apparatus, electronic device, computer program product and non-transitory computer-readable storage medium - Google Patents
- Publication number
- WO2018035667A1 (PCT/CN2016/096196)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image feature
- monitoring
- tracked target
- trace
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the present invention relates to the field of video surveillance technologies, and in particular, to a display method, apparatus, electronic device, computer program product and non-transitory computer readable storage medium in video surveillance.
- Video surveillance can be applied to many social fields such as urban management, security, and marketing.
- the video surveillance system has evolved from the first-generation analog Closed-Circuit Television (CCTV) systems, through the second-generation digital systems based on computers and multimedia cards, to the existing third-generation network-based video surveillance systems.
- an existing video surveillance system usually consists of terminal probes, a transmission network, and a server. After a terminal probe collects surveillance video, the monitoring data is transmitted to the server through the transmission network, the surveillance video is played for the monitoring personnel, and the monitoring personnel manually identify and analyze the monitored target.
- existing video surveillance systems are therefore not intelligent enough in monitoring and analyzing targets, and are highly dependent on manual analysis.
- the embodiments of the invention provide a display method, an apparatus, an electronic device, and a computer program product, so that a video surveillance system can intelligently monitor and analyze a target, reducing dependence on manual analysis.
- an embodiment of the present invention provides a display method, where the method includes:
- a trace of the tracked target is generated based on the collection positions of the respective monitoring pictures that conform to the image features, and is displayed on the electronic map.
- an embodiment of the present invention provides a display device, where the device includes:
- An image feature acquisition module configured to acquire image features of the tracked target
- An image recognition module configured to perform image recognition on a plurality of monitoring images from different collection locations according to image features of the tracked target to determine a monitoring image that conforms to the image features
- a display module configured to generate a trace of the tracked target according to the collection position of each monitoring picture that conforms to the image features, and to display the trace on the electronic map.
- an embodiment of the present invention provides an electronic device, including: a display, a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the various steps of the methods described above.
- embodiments of the present invention provide a computer program product for use with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the various steps of the above methods.
- embodiments of the present invention provide a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform each step of the above methods.
- the invention automatically recognizes the image features of the tracked target in the pictures of the surveillance video, records the collection positions of the respective monitoring pictures that conform to the image features, intelligently processes the collection positions into a trace of the tracked target, and displays the trace on the electronic map, enabling the user to intuitively obtain the monitoring information of the tracked target.
- FIG. 1 is a schematic flowchart of a display method in Embodiment 1 of the present invention.
- FIGS. 2a-2d are schematic views of the display in Embodiment 1 of the present invention.
- FIG. 3 is a schematic flowchart of a display method in Embodiment 2 of the present invention.
- FIGS. 4a-4b are schematic views of the display in Embodiments 2 and 3 of the present invention.
- FIG. 5 is a schematic flowchart of a display method in Embodiment 3 of the present invention.
- FIGS. 6a-6f are schematic views of the display in Embodiment 5 of the present invention.
- FIG. 7 is a schematic diagram of the architecture of a monitoring system in Embodiment 6 of the present invention.
- FIG. 8 is a schematic structural view of a display device in Embodiment 7 of the present invention.
- the present invention provides a display method that automatically identifies the image features of a tracked target in the pictures of a surveillance video, records the collection positions of the respective monitoring pictures that conform to the image features, intelligently processes the collection positions into a trace of the tracked target, and displays the trace on the electronic map, so that the user can intuitively obtain the monitoring information of the tracked target.
- Embodiment 1:
- FIG. 1 is a schematic flowchart of a display method in Embodiment 1 of the present invention. As shown in FIG. 1, the display method includes:
- Step 101: Acquire image features of the tracked target;
- Step 102: Perform image recognition on multiple monitoring pictures from different collection locations according to the image features of the tracked target, to determine the monitoring pictures that conform to the image features;
- Step 103: Generate a trace of the tracked target according to the collection positions of the monitoring pictures that conform to the image features, and display it on the electronic map.
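The three steps above can be sketched in a few lines. This is a minimal illustration under assumed data shapes: each monitoring picture is reduced to a plain feature vector with an attached collection location, and "image recognition" is stood in for by a cosine-similarity threshold rather than an actual recognition model.

```python
# Sketch of steps 101-103: match frames against the target's image
# feature, then collect the matching frames' collection positions as
# the trace. Data shapes and the threshold are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def generate_trace(target_feature, frames, threshold=0.9):
    """Step 102: keep frames whose feature matches the tracked target;
    step 103: return their collection positions as the trace."""
    return [f["location"] for f in frames
            if cosine(f["feature"], target_feature) >= threshold]

# Step 101: the target's image feature (e.g. input by the user).
target = [1.0, 0.0, 1.0]
frames = [
    {"location": (39.90, 116.40), "feature": [1.0, 0.0, 1.0]},  # match
    {"location": (39.91, 116.41), "feature": [0.0, 1.0, 0.0]},  # no match
    {"location": (39.92, 116.42), "feature": [0.9, 0.1, 1.0]},  # near match
]
trace = generate_trace(target, frames)
```

The resulting list of positions is what would then be drawn on the electronic map.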
- the monitoring system needs to acquire the image features of the target before monitoring or analyzing a certain tracked target.
- the image features of the tracked target may be extracted from a clear image of the tracked target input by the user; image feature data input by the user may be used directly as the image features of the tracked target; or the image features of the tracked target may be extracted or learned from certain frame pictures of a source video specified by the user.
- For example, the facial image features of a wanted person are extracted from a wanted poster input by the user; or the user inputs the grayscale data of a certain pattern as the image features of the tracked target; or the user crops a target face image from a certain frame of the source video, from which the target facial image features and the like are extracted.
- the acquiring the image feature of the tracked target comprises: acquiring an image feature input by the user as an image feature of the tracked target.
- the image features of the tracked target that are desired to be monitored or analyzed are input by the user, so that the display result can be more suitable for the user's needs.
- the manner in which the user inputs the image features of the tracked target may be: extracting the image features of the tracked target from an image of the monitoring target input by the user; using image feature data input by the user as the image features of the tracked target; or extracting or learning the image features of the tracked target from certain frame pictures of a source video specified by the user.
- in step 102, image recognition is performed on a plurality of monitoring pictures from different collection locations based on the image features of the tracked target, to determine the monitoring pictures that conform to the image features.
- the system needs to obtain the source surveillance video from each monitoring terminal; each surveillance video includes multiple frames of monitoring pictures.
- the monitoring screen may be a monitoring screen in a video stream collected by each monitoring terminal in real time, or may be a monitoring screen in a historical monitoring video read from a historical monitoring video library.
- the image features of the tracked target acquired in step 101 are identified in each source monitoring video in real time or non-real time.
- the specific method of image feature recognition may refer to existing image recognition technologies.
- the information of the matching monitoring picture is recorded, including at least the collection position of the monitoring picture, that is, the collection position of the surveillance video corresponding to the monitoring picture in which the image features of the tracked target were recognized.
- The collection position of a surveillance video is the location of the monitoring device that collects the video, or the location the device is aimed at. It is a precise location that can be marked on the map, such as latitude and longitude, relative coordinates, or street and distance information.
- the collection position may be acquired together with the source surveillance video. Alternatively, after a monitoring picture conforming to the image features of the tracked target is determined in this step, the collection position is not recorded directly; instead, the identifier of the surveillance video source, or of the monitoring device that collected it, is recorded first, and the collection position is then obtained for recording by a table lookup.
- the information of the monitoring screen that conforms to the image features of the tracked target can be recorded by referring to the following table:
- Table 1 shows the information record of the monitoring screen.
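A minimal record along the lines Table 1 implies can be sketched as follows. The field names and the device-location table are illustrative assumptions; the text only requires that at least the collection position be recorded, possibly resolved later from a recorded device identifier.

```python
# A minimal record for a monitoring picture that matched the tracked
# target's image features. Names here are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MatchRecord:
    frame_id: int
    device_id: str
    location: Optional[Tuple[float, float]] = None  # may be filled later

# Per the text, the collection position can instead be obtained
# afterwards by looking up the recorded device identifier in a table.
DEVICE_LOCATIONS = {"cam-01": (39.90, 116.40), "cam-02": (39.95, 116.35)}

def resolve_location(record: MatchRecord) -> MatchRecord:
    if record.location is None:
        record.location = DEVICE_LOCATIONS[record.device_id]
    return record

rec = resolve_location(MatchRecord(frame_id=42, device_id="cam-01"))
```

Deferring the lookup keeps the per-frame record small when many frames match the same device.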
- each monitoring picture in which the image features of the tracked target are recognized may be recorded; alternatively, once the image features of the tracked target have been recognized in a certain monitoring picture, no further record is made while those image features continue to appear in the consecutive monitoring pictures that follow.
- a trace of the tracked target is generated and displayed on the electronic map based on the information of the monitor screen that conforms to the image feature of the tracked target.
- the trace may be one or a combination of the following: 1) marking the collection positions of the monitoring pictures in which the tracked target was recognized directly as identification points, FIG. 2a shows a display example of this case; 2) expanding a set of identification points over a certain range centered on the collection position of each monitoring picture in which the tracked target was recognized, FIG. 2b shows a display example of this case; 3) clustering or otherwise processing all the collection positions of the monitoring pictures in which the tracked target was recognized to obtain the appearance area of the tracked target, and displaying the calculated appearance area as an area trace of the tracked target, as shown in FIG. 2c.
- the collection positions of the monitoring pictures in which the tracked target was recognized can also be intelligently processed in other ways to obtain the trace of the tracked target, and displayed on the electronic map.
- the layered display on the electronic map enables the monitoring personnel to intuitively determine the location of the tracked target's trace, which is convenient for analysis, management, and security maintenance.
- the image features of the tracked target are automatically identified in the pictures of the surveillance video, the collection positions of the monitoring pictures that conform to the image features are recorded, the trace of the tracked target is intelligently processed from the collection positions, and the trace is displayed on the electronic map, enabling the user to intuitively obtain the monitoring information of the tracked target.
- Embodiment 2:
- FIG. 3 is a schematic flowchart of a display method in Embodiment 2 of the present invention, where the display method includes the following steps:
- Step 201: Acquire a first image feature of the tracked target, and also acquire a second image feature of the tracked target;
- Step 202: Perform image recognition on a plurality of monitoring pictures from different collection locations according to the first image feature of the tracked target, to determine a first class of monitoring pictures that conform to the first image feature; and, having acquired the second image feature of the tracked target, perform image recognition on the first class of monitoring pictures according to the second image feature, to determine a second class of monitoring pictures that conform to both the first image feature and the second image feature;
- Step 203: Generate a first trace of the tracked target according to the collection positions of the first class of monitoring pictures that conform to the first image feature, and display it on the electronic map; and generate a second trace of the tracked target according to the collection positions of the second class of monitoring pictures that conform to both the first image feature and the second image feature, and display it on the electronic map so that it is distinguished from the first trace.
- a first image feature and a second image feature of the tracked target are acquired at the same time, the first image feature being different from the second image feature.
- the first image feature may be an image feature shared by a large class of tracked targets, while the second image feature is an image feature of a subset of the tracked targets having the first image feature.
- For example, the monitoring personnel input the image feature of a white car as the first image feature, and the image feature of the license plate number "京A12345" as the second image feature. The tracked targets are then all white cars, and the important tracked target is the white car with license plate number "京A12345"; that is, the tracked targets have the first image feature, while the important tracked target has both the first image feature and the second image feature.
- the monitoring personnel can also specify several other license plate numbers, that is, a set of different second image features corresponding to the same first image feature: for example, the tracked targets are all white cars, and the important tracked targets are white cars with different specified license plate numbers.
- the first or second image feature may be extracted from a clear image of the tracked target input by the user; image feature data input by the user may be used directly as the first or second image feature of the tracked target; or the first or second image feature may be extracted or learned from certain monitoring pictures of a specified source video.
- in step 202, image recognition is performed on a plurality of monitoring pictures from different collection locations according to the first image feature and the second image feature of the tracked target.
- recognition is first performed according to the first image feature to determine the first class of monitoring pictures that conform to it; the second image feature is then recognized within the first class of monitoring pictures.
- the first-class monitoring pictures that also conform to the second image feature are determined to be the second class of monitoring pictures. If the previous steps yielded multiple second image features, each of them can be recognized in the first class of monitoring pictures at the same time, yielding several sets of second-class monitoring pictures.
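The two-stage recognition just described can be sketched as follows. Frame "features" are simplified to label sets as a stand-in for real image recognition, and all names are illustrative.

```python
# Two-stage recognition sketch: filter frames by the first image
# feature, then search each second image feature only within that
# first class, yielding one set of second-class pictures per feature.
def two_stage(frames, first_feature, second_features):
    first_class = [f for f in frames if first_feature in f["features"]]
    second_classes = {
        sf: [f for f in first_class if sf in f["features"]]
        for sf in second_features
    }
    return first_class, second_classes

frames = [
    {"loc": "A", "features": {"white car"}},
    {"loc": "B", "features": {"white car", "plate-1"}},
    {"loc": "C", "features": {"truck"}},
    {"loc": "D", "features": {"white car", "plate-2"}},
]
first, second = two_stage(frames, "white car", ["plate-1", "plate-2"])
```

Because the second search runs only over the first class, adding more second image features costs little compared with rescanning all frames.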
- the information of the first type of monitoring screen and the information of the second type of monitoring screen are recorded.
- the records may be kept separately in the form of Table 1 above, or combined in the form of the following Table 2:
- Table 2 shows the information record of the monitoring screen.
- a first trace of the tracked target is generated according to the collection positions of the first class of monitoring pictures and displayed on the electronic map; and a second trace of the tracked target is generated according to the collection positions of the second class of monitoring pictures, which conform to both the first image feature and the second image feature, and is displayed on the electronic map so that it is distinguished from the first trace. Distinguished display means that the second trace is displayed using a trace form or display feature different from that of the first trace.
- the trace form may be: a marker point, a marker point with a certain expansion range, or an area trace obtained by clustering or region division; any of these forms may additionally carry time information.
- the display feature may be: a color, a marker point shape, a marker point size, a layer, an attached image-feature label, a dynamic display mode, or any other display manner that the monitoring personnel can distinguish.
- Figure 4a shows the first trace and the second trace displayed in different trace forms and with different shape display features. The trace form of the first trace is a marker point, while that of the second trace is a marker point plus a trajectory; the marker points of the first trace are displayed as circles, and those of the second trace as squares.
- Figure 4b shows the first trace and the second trace displayed in the same trace form but distinguished by different attached image-feature labels.
- the distinguished display of the first trace and the second trace enables the user, on the basis of the monitoring information of the large class of targets having the first image feature, to intuitively and simultaneously obtain the monitoring information of the more specific tracked targets having both the first image feature and the second image feature.
- For example, while displaying the traces of courier vehicles bearing the same logo, different license plate numbers can be distinguished to show the specific travel data of different vehicles; while displaying the traces of similar vehicles (dangerous vehicles, such as tank trucks), different license plate numbers can likewise be distinguished to show the specific travel data of different vehicles; while displaying the traces of vehicles with the same license plate number, the facial features of different drivers can be distinguished to show the specific driving data when different people drive the vehicle; and while displaying the trace of a criminal suspect obtained by facial recognition, different clothing states can be distinguished to show the suspect's range of activity in each set of clothes.
- in this embodiment, the monitoring personnel determine, before the surveillance video is analyzed, both the first image feature shared by a large class of targets and the second image features of certain key targets, so that the two image features are recognized and displayed synchronously.
- this embodiment can automatically identify, in the surveillance video, the large class of targets having the first image feature and the key targets also having the second image feature, and then display the monitoring information of the large class and of its key targets in a distinguished manner, so that while intuitively acquiring the monitoring information of the large class of targets, users simultaneously obtain the monitoring information of the key targets.
- Embodiment 3:
- FIG. 5 is a schematic flowchart of a display method in Embodiment 3 of the present invention, where the display method includes the following steps:
- Step 301: Acquire a first image feature of the tracked target;
- Step 302: Upon acquiring the first image feature of the tracked target, perform image recognition on the plurality of monitoring pictures from different collection locations according to the first image feature, to determine the first class of monitoring pictures that conform to the first image feature;
- Step 303: Upon acquiring the first image feature of the tracked target, generate a first trace of the tracked target according to the collection positions of the first class of monitoring pictures that conform to the first image feature, and display it on the electronic map;
- Step 304: Acquire a second image feature of the tracked target;
- Step 305: Upon acquiring the second image feature of the tracked target, perform image recognition on the first class of monitoring pictures according to the second image feature, to determine the second class of monitoring pictures that conform to both the first image feature and the second image feature;
- Step 306: Upon acquiring the second image feature of the tracked target, generate a second trace of the tracked target according to the collection positions of the second class of monitoring pictures that conform to both image features, and display it on the electronic map so that it is distinguished from the first trace.
- Steps 301 and 304 in this embodiment correspond to step 201 in the second embodiment; steps 302 and 305 correspond to step 202; and steps 303 and 306 correspond to step 203.
- the specific implementations of the corresponding steps are the same; the difference is that in this embodiment, after the first trace of the large class of targets has been obtained according to the first image feature and displayed on the electronic map, image recognition is further performed on the important targets within the first class of monitoring pictures according to the subsequently acquired second image feature, and a second trace of the important targets is generated, superimposed on the original first trace, and displayed in a distinguished manner.
- steps 301 to 303 and steps 304 to 306 in this embodiment perform image recognition and result display in two stages.
- the two-stage image recognition and result display can proceed in either of the following two ways:
- One way is to perform image recognition on all the source videos again after the first-stage display result has been obtained and the second image feature has been acquired. In this way, the second-stage recognition and display steps are similar to the corresponding steps in the second embodiment.
- The other way is to extract and record the first class of monitoring pictures during the first image recognition pass, so that the second image feature is later recognized only within the recorded first class of monitoring pictures. The recorded pictures can be selected as those in which the first image feature is clearest, or the frame or frames in which the first image feature occupies the largest area. It can be understood that the computation required by this way is smaller than that of the first way.
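A sketch of the second way, under the assumption that the first pass scores each first-class picture (here by the area ratio the first image feature occupies) and caches only the best ones for the later second-feature search:

```python
# Cache the best first-class frames from the first recognition pass.
# "area_ratio" is an assumed per-frame score produced by that pass;
# the second pass then scans only the cache instead of all source video.
def cache_best_frames(first_class_frames, keep=2):
    return sorted(first_class_frames,
                  key=lambda f: f["area_ratio"], reverse=True)[:keep]

first_class = [
    {"frame_id": 1, "area_ratio": 0.10},
    {"frame_id": 2, "area_ratio": 0.45},
    {"frame_id": 3, "area_ratio": 0.30},
]
cache = cache_best_frames(first_class)
# The second-feature search now costs len(cache) recognitions,
# not one per frame of the full video.
```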
- in this embodiment, the monitoring personnel only determine the first image feature shared by a large class of targets when the surveillance video is first recognized; after the first-stage display result is obtained, image recognition is further performed according to the second image features of certain important targets, and the new results are displayed so as to be distinguished from the results of the first recognition.
- this embodiment can automatically identify, in the surveillance video, the large class of targets having the first image feature and the key targets also having the second image feature, and then display the monitoring information of the large class and of its key targets in a distinguished manner, so that while acquiring the monitoring information of the large class of targets, the user can further and intuitively obtain the monitoring information of certain key targets.
- Embodiment 4:
- This embodiment is implemented on the basis of any of the display methods of the first to third embodiments; for matters similar to or repeated from those embodiments, reference may be made to their descriptions.
- Generating a trace of the tracked target according to the collection positions of the monitoring pictures that conform to the image features and displaying it on the electronic map includes: generating an area trace of the tracked target according to those collection positions, and displaying it on the electronic map.
- in one approach, the collection position points are clustered, and within each group of position points after clustering (excluding noise points that are too far away), all the positions are connected pairwise; the outer edge of all these connections is taken as the area trace of the tracked target.
- the area trace of the tracked target may also be determined according to divided regions, where a divided region is an area divided by street, public-security jurisdiction, or administrative district: it is determined in which one or several divided regions the position points in the collection-position information of the monitoring pictures conforming to the image features appear most, and that region or those regions are used as the area trace of the tracked target.
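The "connect all positions pairwise and keep the outer edge" step amounts to taking the convex hull of each cluster. A self-contained sketch (Andrew's monotone chain), assuming clustering and noise removal have already been done:

```python
# Area-trace sketch: the convex hull of a cluster of collection
# positions is the outer edge of all pairwise connections.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                     # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # concatenate, dropping duplicates

cluster = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # (2, 1) is interior
hull = convex_hull(cluster)
```

The hull vertices are what would be drawn as the area trace on the electronic map; interior points are absorbed into the region.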
- when this embodiment is combined with the second or third embodiment, the trace forms of the first trace and the second trace may be the same or different; both may be area traces, or only one of them may be an area trace.
- displaying an area trace on the electronic map, compared with displaying other trace forms, makes it easier for the monitoring personnel to visually confirm the appearance area of the tracked target, which is convenient for subsequent monitoring or for taking related actions.
- Embodiment 5:
- This embodiment is implemented on the basis of the display method of any one of the first to fourth embodiments; for matters similar to or repeated from those embodiments, reference may be made to their descriptions.
- Generating a trace of the tracked target according to the collection positions of the monitoring pictures that conform to the image features and displaying it on the electronic map includes: generating a trace of the tracked target carrying time information according to the collection position and collection time of each monitoring picture that conforms to the image features, and displaying it on the electronic map.
- in the process of performing image recognition on the pictures of the video sources according to the image features of the tracked target, or according to the first or second image feature, after a monitoring picture matching the image features of the tracked target is determined, its information is recorded; in addition to the collection position of the monitoring picture, this information includes the collection time.
- the collection time may be recorded in several ways: the time of each monitoring picture currently being recorded may be taken as the collection time; or, when the image features of the tracked target appear in consecutive monitoring pictures, the time of the first monitoring picture in which they are recognized may be taken as the collection time (that is, the start time), with no further recording while the image features continue to appear; or, after the image features of the tracked target are recognized in a certain monitoring picture, it may be determined whether they are still present in the subsequent consecutive pictures until they no longer appear, and only the time of the last picture in which they appear is recorded as the collection time (that is, the end time); the midpoint of the start time and the end time may also serve as the collection time, as may the time of the monitoring picture between the start time and the end time with the highest matching rate against the image features of the tracked target.
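The policies above for collapsing a run of consecutive appearances into one collection time can be sketched like this; frame times are plain numbers and the run-splitting gap is an assumed parameter:

```python
# Collapse consecutive appearances of the target into one collection
# time per continuous run; "policy" picks the run's start, end, or
# midpoint, matching the alternatives the text allows.
def acquisition_times(frame_times, policy="start", gap=1):
    runs, current = [], [frame_times[0]]
    for t in frame_times[1:]:
        if t - current[-1] <= gap:   # still the same continuous run
            current.append(t)
        else:                        # gap too large: start a new run
            runs.append(current)
            current = [t]
    runs.append(current)
    pick = {"start": lambda r: r[0],
            "end":   lambda r: r[-1],
            "mid":   lambda r: (r[0] + r[-1]) / 2}
    return [pick[policy](r) for r in runs]

times = [10, 11, 12, 30, 31]          # two continuous runs
starts = acquisition_times(times, "start")
mids = acquisition_times(times, "mid")
```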
- the information of the monitoring screen conforming to the image features of the tracked target can be recorded by referring to the following Table 3:
- Table 3 shows the information record of the monitoring screen.
- The information of the monitoring pictures that conform to the first image feature, and the information of the monitoring pictures that conform to both the first image feature and the second image feature, can be recorded as in the following table:
- Table 4 shows the information record of the monitoring pictures.
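Since Tables 3 and 4 are not reproduced here, the shape of such a record can be sketched as follows. The field names are illustrative assumptions, not taken from the patent's tables.

```python
def make_record(camera_id, location, acquired_at, features):
    """Build one record for a monitoring picture that conforms to an image feature.

    Hypothetical fields: the capture device, the collection position
    (e.g. latitude/longitude), the acquisition time, and which of the
    tracked target's image features matched.
    """
    return {
        "camera_id": camera_id,
        "location": location,
        "acquired_at": acquired_at,
        "features": set(features),  # e.g. {"first"} or {"first", "second"}
    }

def matches_both(record):
    """True when the picture conforms to both the first and second image feature."""
    return {"first", "second"} <= record["features"]
```

A Table 3-style record would carry only one feature; a Table 4-style record distinguishes pictures matching the first feature from those matching both.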
- The trace carrying the acquisition time information is displayed on the electronic map according to the acquisition time and the collection location included in the information of the monitoring pictures that conform to the image feature;
- the manner in which the trace carries the acquisition time information may be one of the following, or a combination thereof:
- Figures 6a-6d show display examples of several typical traces on the electronic map after being marked with the acquisition time.
- the acquisition time information is carried along with the trace.
- The trajectory is itself a kind of trace carrying the acquisition time information (because, when the trajectory is generated, the collection positions included in the monitoring picture information are connected in sequence according to the order of the acquisition times included in that information). To further highlight the acquisition time information carried by the trace, the direction of movement can be indicated on the trajectory according to the acquisition time.
- Fig. 6e shows a display example on the electronic map when the trace is a trajectory; Fig. 6f shows a display example when the trace is a trajectory to which the moving direction is added and the marking described in 1) above is also applied.
- When the span of the acquisition times is much longer than the duration of the display process, the span of the acquisition times can be compressed in a certain proportion during display;
- The color or size of the displayed target may vary according to the order of the acquisition times; for example, the collection information corresponding to the most recent acquisition time is displayed darker or more prominently.
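The two display devices just described, compressing the acquisition-time span for playback and emphasizing recent points, can be sketched as simple linear mappings. The function names, the playback-window parameter, and the opacity range are assumptions for illustration.

```python
def compress_times(acquisition_times, playback_seconds):
    """Linearly map a long span of acquisition times onto a short playback window."""
    t0, t1 = min(acquisition_times), max(acquisition_times)
    span = (t1 - t0) or 1.0  # avoid division by zero for a single point
    return [(t - t0) / span * playback_seconds for t in acquisition_times]

def opacity_by_recency(acquisition_times, lo=0.3, hi=1.0):
    """More recent points are drawn darker (higher opacity), older ones lighter."""
    t0, t1 = min(acquisition_times), max(acquisition_times)
    span = (t1 - t0) or 1.0
    return [lo + (hi - lo) * (t - t0) / span for t in acquisition_times]
```

For example, detections spanning two hours could be replayed in a ten-second animation, with the newest detection rendered at full opacity.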
- The display manners of the first trace and the second trace may be the same or different; that is, either trace may or may not carry the time information.
- Generating, according to the collection position and the acquisition time of each monitoring picture that conforms to the image feature, a trace of the tracked target carrying the time information and displaying the trace on the electronic map includes: generating, according to the collection position and the acquisition time of each monitoring picture that conforms to the image feature, a trace of the tracked target marked with the acquisition time, and displaying it on the electronic map.
- Figures 6a-6d and 6f show display examples of several typical traces on the electronic map after being marked with the acquisition time.
- Generating, according to the collection position and the acquisition time of each monitoring picture that conforms to the image feature, a trace of the tracked target carrying the time information and displaying the trace on the electronic map includes: generating, according to the collection position and the acquisition time of each monitoring picture that conforms to the image feature, the trajectory of the tracked target, and displaying it on the electronic map.
- The location points may be connected in sequence according to the collection positions and acquisition times in the monitoring picture information conforming to the image feature, so as to generate the trajectory of the monitored target, which is then taken as the trace of the monitored target. In the process of generating the trajectory, points that are obviously unreasonable can be excluded, and the trajectory can be fitted to the streets.
- Fig. 6e shows a display example on the electronic map when the trace of the monitored target is a trajectory; Fig. 6f shows a display example when the trace of the monitored target is a trajectory to which the moving direction is added and a time stamp is further added on display.
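The trajectory-generation step above, connecting points in acquisition-time order and excluding obviously unreasonable points, can be sketched as follows. The speed threshold used to flag an unreasonable point, the planar coordinates, and the record field names are assumptions; the patent does not specify how unreasonable points are detected, and street fitting is omitted.

```python
import math

def generate_trajectory(records, max_speed_mps=40.0):
    """Connect collection positions in acquisition-time order, dropping any
    point that would require an implausibly high travel speed (one kind of
    'obviously unreasonable' point). Positions are (x, y) in metres here
    for simplicity; records are dicts with 'acquired_at' and 'location'.
    """
    ordered = sorted(records, key=lambda r: r["acquired_at"])
    kept = []
    for rec in ordered:
        if kept:
            prev = kept[-1]
            dt = rec["acquired_at"] - prev["acquired_at"]
            dist = math.dist(rec["location"], prev["location"])
            if dt <= 0 or dist / dt > max_speed_mps:
                continue  # exclude the unreasonable point
        kept.append(rec)
    return [r["location"] for r in kept]
```

The surviving positions, taken in order, form the polyline drawn on the electronic map.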
- In this way, monitoring personnel can be provided with the target's range of frequent activity in the target time period, facilitating analysis, management and security maintenance.
- The information of the monitoring pictures conforming to the image feature is enriched to include the acquisition time of each picture; the trace is then generated according to this information, and the acquisition time information can be carried in the displayed trace in various forms, providing users with more comprehensive monitoring information more flexibly. In addition, an intelligent scheme for displaying traces in the form of trajectories is proposed, providing users with more intuitive monitoring information.
- FIG. 7 is a schematic structural diagram of a monitoring system according to Embodiment 6 of the present invention, which includes a video acquisition layer 601, a network access layer 602, and an intelligent analysis layer 603.
- the layers may be strongly coupled or loosely coupled.
- The video capture layer 601 mainly includes various standard monitoring video capture devices, which may be cameras, video recorders, or monitoring robots with a video capture function.
- The various monitoring video capture devices may have the following functions: 1) multiple network access: the device can support multiple networks such as LTE, Wifi and XPON; 2) error correction: image quality is guaranteed under networks with a high packet loss rate and under mobile networks; 3) breakpoint retransmission: data is not lost in the case of abnormal network disconnection; 4) high-definition codec: images are encoded and decoded by a high-definition codec algorithm to ensure the clarity of the monitoring video; 5) efficient coding compression: lower bandwidth and storage requirements are achieved for the same image quality, reducing the bandwidth occupancy and storage space of different scenes; 6) capture of high-speed moving objects: the device can capture images of objects moving at high speed.
- The functions 1) to 5) above can be realized by the monitoring video capture device itself, or by an externally connected enhancement device.
- the various monitoring video capture devices transmit the collected multimedia information (images, sounds, etc.) to the intelligent analysis layer 603 through the network access layer 602.
- The network access layer 602 is an IP bearer network; further, the IP bearer network can be a dedicated acceleration/encryption network. The data stream of the video captured by the video capture devices enters the network access layer 602 through LTE, Wifi, XPON and the like, and is then transferred to the intelligent analysis layer for processing.
- Compared with a traditional Internet service, using a dedicated IP bearer network for monitoring video data transmission can ensure the real-time performance of message transmission when the network's transmission quality is poor (busy periods), and avoids degrading the accuracy or real-time performance of monitoring due to poor network quality.
- The intelligent analysis layer 603 performs AI (Artificial Intelligence) analysis through cloud computing technology. The analysis can be performed using the methods provided in the above embodiments to process and analyze the surveillance video; the results can be sent to a command center or monitoring center for linkage, and city management, security or marketing purposes can be achieved through visual dispatching, command and security linkage.
- FIG. 8 is a schematic structural diagram of a display device according to Embodiment 7 of the present invention. As shown in the figure, the device 700 includes:
- the image feature acquiring module 701 is configured to acquire image features of the tracked target
- the image recognition module 702 is configured to perform image recognition on a plurality of monitoring images from different collection locations according to image features of the tracked target to determine a monitoring screen that conforms to the image features;
- the display module 703 is configured to generate a trace of the tracked target according to the collection position of each monitoring picture that conforms to the image feature and display the trace on the electronic map.
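The recognition step performed by module 702 can be sketched as a scan over monitoring pictures from different collection locations. This is a minimal illustration: the `similarity` function, the record fields, and the match threshold are assumptions, since the patent does not prescribe a particular matching algorithm.

```python
def find_conforming_screens(screens, target_feature, similarity, threshold=0.8):
    """Keep the monitoring pictures whose extracted feature matches the
    tracked target's image feature.

    `screens` is a list of dicts with 'feature' and 'location' keys;
    `similarity` is a caller-supplied function returning a score in [0, 1].
    """
    conforming = []
    for screen in screens:
        score = similarity(screen["feature"], target_feature)
        if score >= threshold:
            # record the picture's info together with its match score
            conforming.append({**screen, "match_score": score})
    return conforming
```

The returned records, each carrying a collection location, are exactly the input that module 703 consumes to generate and display the trace.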
- the image feature acquiring module 701 is specifically configured to acquire an image feature input by the user as an image feature of the tracked target.
- the image feature obtaining module 701 is specifically configured to:
- the image recognition module 702 is specifically configured to:
- the display module 703 is specifically configured to:
- the display module 703 is specifically configured to:
- An area trace of the tracked target is generated based on the collection locations of the respective monitoring pictures that conform to the image features and displayed on the electronic map.
- the display module 703 is specifically configured to:
- a trace of the tracked object carrying the time information is generated and displayed on the electronic map according to the collection position and the acquisition time of the respective monitoring pictures that conform to the image features.
- the display module 703 is specifically configured to:
- the trajectory of the tracked object is generated and displayed on the electronic map according to the collection position and the acquisition time of the respective monitoring pictures that conform to the image features.
- An embodiment of the present invention provides an electronic device for target trace display. Since its principle is similar to that of the target trace display method, its implementation can refer to the implementation of the method, and the repeated description is omitted.
- The electronic device includes: a display; a memory; one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the above methods.
- An embodiment of the present invention further provides a computer program product for target trace display, for use in combination with an electronic device including a display. Since its principle is similar to that of the target trace display method, its implementation can refer to the implementation of the method, and the repeated description is omitted.
- the computer program product comprises a computer readable storage medium and a computer program mechanism embodied therein, the computer program mechanism comprising instructions for performing the various steps of any of the foregoing methods.
- An embodiment of the present invention further provides a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the steps of any of the foregoing methods.
- embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
- The computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Closed-Circuit Television Systems (AREA)
- Alarm Systems (AREA)
Abstract
The invention relates to a display method and apparatus, an electronic device, a computer program product and a non-transitory computer readable storage medium. The method comprises: acquiring an image feature of a tracked target (101); performing, according to the image feature of the tracked target, image recognition on multiple monitoring pictures collected from different locations, to determine monitoring pictures that conform to the image feature (102); and generating a trace of the tracked target according to the collection locations of the monitoring pictures that conform to the image feature, and displaying the trace on an electronic map (103). According to the method, an image feature of a tracked target is automatically recognized in the pictures of a surveillance video; the collection locations of the monitoring pictures conforming to the image feature are recorded; a trace of the tracked target is obtained by further intelligent processing based on those locations; and the trace of the tracked target is displayed on an electronic map, so that a user can intuitively obtain monitoring information about the tracked target.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/096196 WO2018035667A1 (fr) | 2016-08-22 | 2016-08-22 | Procédé et appareil d'affichage, dispositif électronique, produit de programme informatique et support non transitoire de stockage lisible par ordinateur |
| CN201680002947.3A CN107004271B (zh) | 2016-08-22 | 2016-08-22 | 显示方法、装置、电子设备、计算机程序产品和存储介质 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/096196 WO2018035667A1 (fr) | 2016-08-22 | 2016-08-22 | Procédé et appareil d'affichage, dispositif électronique, produit de programme informatique et support non transitoire de stockage lisible par ordinateur |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018035667A1 true WO2018035667A1 (fr) | 2018-03-01 |
Family
ID=59431678
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/096196 Ceased WO2018035667A1 (fr) | 2016-08-22 | 2016-08-22 | Procédé et appareil d'affichage, dispositif électronique, produit de programme informatique et support non transitoire de stockage lisible par ordinateur |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107004271B (fr) |
| WO (1) | WO2018035667A1 (fr) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112650156A (zh) * | 2019-10-12 | 2021-04-13 | 北京京东乾石科技有限公司 | 展示无人设备运行情况的方法和装置 |
| CN112991485A (zh) * | 2019-12-13 | 2021-06-18 | 浙江宇视科技有限公司 | 轨迹绘制方法、装置、可读存储介质及电子设备 |
| CN113329054A (zh) * | 2021-04-27 | 2021-08-31 | 杭州壹悟科技有限公司 | 一种设备监控动画显示优化方法及装置 |
| CN114449212A (zh) * | 2020-11-04 | 2022-05-06 | 北京小米移动软件有限公司 | 对象追踪方法及装置、电子设备、存储介质 |
| CN114826958A (zh) * | 2022-05-05 | 2022-07-29 | 重庆伏特猫科技有限公司 | 一种基于智能控制的工业化监控装置 |
| CN115623336A (zh) * | 2022-11-07 | 2023-01-17 | 北京拙河科技有限公司 | 一种亿级摄像设备的图像跟踪方法及装置 |
| CN115866206A (zh) * | 2022-12-01 | 2023-03-28 | 北京天玛智控科技股份有限公司 | 综采工作面视频增强展示方法、装置、系统及电子设备 |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107396069A (zh) * | 2017-09-01 | 2017-11-24 | 三筑工科技有限公司 | 监控展示方法、装置及系统 |
| CN108982756B (zh) * | 2018-04-26 | 2021-07-27 | 贵州省烟草公司遵义市公司 | 一种农作物重金属污染预测方法及装置 |
| CN111699679B (zh) * | 2018-04-27 | 2023-08-01 | 上海趋视信息科技有限公司 | 交通系统监控和方法 |
| CN109215486B (zh) * | 2018-07-18 | 2021-11-26 | 平安科技(深圳)有限公司 | 电子地图标注及显示方法、装置、终端设备及存储介质 |
| CN109325965A (zh) * | 2018-08-22 | 2019-02-12 | 浙江大华技术股份有限公司 | 一种目标对象跟踪方法及装置 |
| CN109816906B (zh) * | 2019-01-03 | 2022-07-08 | 深圳壹账通智能科技有限公司 | 安保监控方法及装置、电子设备、存储介质 |
| CN111145212B (zh) * | 2019-12-03 | 2023-10-03 | 浙江大华技术股份有限公司 | 一种目标追踪处理方法及装置 |
| CN111010547A (zh) * | 2019-12-23 | 2020-04-14 | 浙江大华技术股份有限公司 | 目标对象的追踪方法及装置、存储介质、电子装置 |
| CN111131700A (zh) * | 2019-12-25 | 2020-05-08 | 重庆特斯联智慧科技股份有限公司 | 一种用于智慧安防的隐蔽跟踪设备及使用方法 |
| CN112468696A (zh) * | 2020-11-17 | 2021-03-09 | 珠海大横琴科技发展有限公司 | 一种数据处理的方法和装置 |
| CN114494355A (zh) * | 2022-02-15 | 2022-05-13 | 平安普惠企业管理有限公司 | 基于人工智能的轨迹分析方法、装置、终端设备及介质 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101883261A (zh) * | 2010-05-26 | 2010-11-10 | 中国科学院自动化研究所 | 大范围监控场景下异常目标检测及接力跟踪的方法及系统 |
| CN102724482A (zh) * | 2012-06-18 | 2012-10-10 | 西安电子科技大学 | 基于gps和gis的智能视觉传感网络运动目标接力跟踪系统 |
| CN103632044A (zh) * | 2013-11-19 | 2014-03-12 | 北京环境特性研究所 | 基于地理信息系统的摄像头拓扑构建方法及装置 |
| CN104954743A (zh) * | 2015-06-12 | 2015-09-30 | 西安理工大学 | 一种多相机语义关联目标跟踪方法 |
| KR20160014413A (ko) * | 2014-07-29 | 2016-02-11 | 주식회사 일리시스 | 복수의 오버헤드 카메라와 사이트 맵에 기반한 객체 추적 장치 및 그 방법 |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7796780B2 (en) * | 2005-06-24 | 2010-09-14 | Objectvideo, Inc. | Target detection and tracking from overhead video streams |
| CN101277429B (zh) * | 2007-03-27 | 2011-09-07 | 中国科学院自动化研究所 | 监控中多路视频信息融合处理与显示的方法和系统 |
| CN101901354B (zh) * | 2010-07-09 | 2014-08-20 | 浙江大学 | 基于特征点分类的监控录像中实时多目标检测与跟踪方法 |
| CN104581000A (zh) * | 2013-10-12 | 2015-04-29 | 北京航天长峰科技工业集团有限公司 | 一种视频关注目标的运动轨迹的快速检索方法 |
-
2016
- 2016-08-22 WO PCT/CN2016/096196 patent/WO2018035667A1/fr not_active Ceased
- 2016-08-22 CN CN201680002947.3A patent/CN107004271B/zh active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101883261A (zh) * | 2010-05-26 | 2010-11-10 | 中国科学院自动化研究所 | 大范围监控场景下异常目标检测及接力跟踪的方法及系统 |
| CN102724482A (zh) * | 2012-06-18 | 2012-10-10 | 西安电子科技大学 | 基于gps和gis的智能视觉传感网络运动目标接力跟踪系统 |
| CN103632044A (zh) * | 2013-11-19 | 2014-03-12 | 北京环境特性研究所 | 基于地理信息系统的摄像头拓扑构建方法及装置 |
| KR20160014413A (ko) * | 2014-07-29 | 2016-02-11 | 주식회사 일리시스 | 복수의 오버헤드 카메라와 사이트 맵에 기반한 객체 추적 장치 및 그 방법 |
| CN104954743A (zh) * | 2015-06-12 | 2015-09-30 | 西安理工大学 | 一种多相机语义关联目标跟踪方法 |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112650156A (zh) * | 2019-10-12 | 2021-04-13 | 北京京东乾石科技有限公司 | 展示无人设备运行情况的方法和装置 |
| CN112650156B (zh) * | 2019-10-12 | 2022-09-30 | 北京京东乾石科技有限公司 | 展示无人设备运行情况的方法和装置 |
| CN112991485A (zh) * | 2019-12-13 | 2021-06-18 | 浙江宇视科技有限公司 | 轨迹绘制方法、装置、可读存储介质及电子设备 |
| CN114449212A (zh) * | 2020-11-04 | 2022-05-06 | 北京小米移动软件有限公司 | 对象追踪方法及装置、电子设备、存储介质 |
| CN113329054A (zh) * | 2021-04-27 | 2021-08-31 | 杭州壹悟科技有限公司 | 一种设备监控动画显示优化方法及装置 |
| CN113329054B (zh) * | 2021-04-27 | 2022-07-12 | 杭州壹悟科技有限公司 | 一种设备监控动画显示优化方法及装置 |
| CN114826958A (zh) * | 2022-05-05 | 2022-07-29 | 重庆伏特猫科技有限公司 | 一种基于智能控制的工业化监控装置 |
| CN114826958B (zh) * | 2022-05-05 | 2022-10-04 | 重庆伏特猫科技有限公司 | 一种基于智能控制的工业化监控装置 |
| CN115623336A (zh) * | 2022-11-07 | 2023-01-17 | 北京拙河科技有限公司 | 一种亿级摄像设备的图像跟踪方法及装置 |
| CN115623336B (zh) * | 2022-11-07 | 2023-06-30 | 北京拙河科技有限公司 | 一种亿级摄像设备的图像跟踪方法及装置 |
| CN115866206A (zh) * | 2022-12-01 | 2023-03-28 | 北京天玛智控科技股份有限公司 | 综采工作面视频增强展示方法、装置、系统及电子设备 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107004271B (zh) | 2021-01-15 |
| CN107004271A (zh) | 2017-08-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018035667A1 (fr) | Procédé et appareil d'affichage, dispositif électronique, produit de programme informatique et support non transitoire de stockage lisible par ordinateur | |
| CN112991656B (zh) | 基于姿态估计的全景监控下人体异常行为识别报警系统及方法 | |
| CN104063883B (zh) | 一种基于对象和关键帧相结合的监控视频摘要生成方法 | |
| CN111967393A (zh) | 一种基于改进YOLOv4的安全帽佩戴检测方法 | |
| CN102799935B (zh) | 一种基于视频分析技术的人流量统计方法 | |
| WO2021017882A1 (fr) | Procédé et appareil de conversion de système de coordonnées d'image, dispositif et support d'enregistrement | |
| CN104883548B (zh) | 监控视频人脸抓取处理方法及其系统 | |
| CN101883261A (zh) | 大范围监控场景下异常目标检测及接力跟踪的方法及系统 | |
| CN110096945B (zh) | 基于机器学习的室内监控视频关键帧实时提取方法 | |
| CN106060470A (zh) | 一种视频监控方法及其系统 | |
| CN111325051A (zh) | 一种基于人脸图像roi选取的人脸识别方法及装置 | |
| US20230055581A1 (en) | Privacy preserving anomaly detection using semantic segmentation | |
| CN108345854A (zh) | 基于图像分析的信息处理方法、装置、系统及存储介质 | |
| CN111652035B (zh) | 一种基于ST-SSCA-Net的行人重识别方法及系统 | |
| WO2022213540A1 (fr) | Procédé et système de détection d'objet, d'identification d'attribut d'objet et de suivi d'objet | |
| CN111915713B (zh) | 一种三维动态场景的创建方法、计算机设备、存储介质 | |
| CN114627526A (zh) | 基于多摄像头抓拍图像的融合去重方法、装置及可读介质 | |
| CN105930814A (zh) | 基于视频监控平台的人员异常聚集行为的检测方法 | |
| CN105608209A (zh) | 一种视频标注方法和视频标注装置 | |
| Zhang et al. | On the design and implementation of a high definition multi-view intelligent video surveillance system | |
| CN105989063B (zh) | 视频检索方法和装置 | |
| CN108833776A (zh) | 一种远程教育教师自动识别优化跟踪方法及系统 | |
| CN108876672A (zh) | 一种远程教育教师自动识别图像优化跟踪方法及系统 | |
| CN205883437U (zh) | 一种视频监控系统 | |
| CN105898259B (zh) | 一种视频画面自适应清晰化处理方法和装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16913691 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 07/06/2019) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16913691 Country of ref document: EP Kind code of ref document: A1 |