CN110191324B - Image processing method, image processing apparatus, server, and storage medium - Google Patents

Info

Publication number
CN110191324B
CN110191324B (application CN201910579253.1A)
Authority
CN
China
Prior art keywords
image
captured images
cameras
target
captured
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910579253.1A
Other languages
Chinese (zh)
Other versions
CN110191324A (en)
Inventor
杜鹏
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910579253.1A
Publication of CN110191324A
Application granted
Publication of CN110191324B
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an image processing method, an image processing apparatus, a server, and a storage medium. The image processing method is applied to a server and includes: acquiring captured images from a plurality of cameras; acquiring, from those images, the captured images in which at least one preset moving object exists; grouping the captured images containing moving objects by preset moving object to obtain a plurality of first image groups; removing the set content existing in the captured images of each first image group to obtain a plurality of second image groups; and splicing and synthesizing the captured images of each second image group in shooting-time order to obtain video files corresponding to the plurality of preset moving objects. The method can generate an activity video of a preset moving object as it moves across multiple areas.

Description

Image processing method, image processing apparatus, server, and storage medium
Technical Field
The present application relates to the field of shooting technologies, and in particular, to an image processing method, an image processing apparatus, a server, and a storage medium.
Background
With the widespread use of shooting technology in daily life, the demand for video capture keeps growing. For example, more and more places deploy monitoring systems that use cameras to monitor the state of an area and the human activity within it. However, because a camera's imaging area, that is, its angle of view, is limited, a camera can usually capture only a certain fixed area, so the content that can be captured is limited.
Disclosure of Invention
In view of the foregoing problems, the present application provides an image processing method, an image processing apparatus, a server, and a storage medium, which can obtain a monitoring video of the same moving object over a wide range.
In a first aspect, an embodiment of the present application provides an image processing method applied to a server, where the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlap. The method includes: acquiring captured images of the plurality of cameras; acquiring, from those images, the captured images in which at least one preset moving object exists; grouping the captured images containing moving objects by preset moving object to obtain a plurality of first image groups, where each first image group is a set of captured images containing the same preset moving object and each first image group corresponds to a different preset moving object; removing the set content existing in the captured images of each first image group to obtain a plurality of second image groups; and splicing and synthesizing the captured images of each second image group in shooting-time order to obtain video files corresponding to the plurality of preset moving objects.
In a second aspect, an embodiment of the present application provides an image processing apparatus applied to a server, where the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlap. The apparatus includes a first image acquisition module, a second image acquisition module, an image grouping module, a content removal module, and a video synthesis module. The first image acquisition module is used for acquiring the captured images of the plurality of cameras; the second image acquisition module is used for acquiring, from those images, the captured images in which at least one preset moving object exists; the image grouping module is used for grouping the captured images containing moving objects by preset moving object to obtain a plurality of first image groups, where each first image group is a set of captured images containing the same preset moving object; the content removal module is used for removing the set content existing in the captured images of each first image group to obtain a plurality of second image groups; and the video synthesis module is used for splicing and synthesizing the captured images of each second image group in shooting-time order to obtain video files corresponding to the plurality of preset moving objects.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the image processing method provided by the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the image processing method provided in the first aspect.
The scheme provided by the present application is applied to a server. The server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlap. The method acquires captured images from the plurality of cameras; acquires, from those images, the captured images in which at least one preset moving object exists; groups the captured images containing moving objects by preset moving object to obtain a plurality of first image groups, where each first image group is a set of captured images containing the same preset moving object and each first image group corresponds to a different preset moving object; removes the set content existing in the captured images of each first image group to obtain a plurality of second image groups; and splices and synthesizes the captured images of each second image group in shooting-time order to obtain video files corresponding to the plurality of preset moving objects. In this way, the captured images of a preset moving object taken in a plurality of shooting areas are spliced into a complete monitoring video of that object, which improves the monitoring effect. Because the set content is removed from the video file, it does not interfere with the user, the user does not need to check the captured images of each shooting area separately, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic diagram of a distributed system provided by an embodiment of the present application.
FIG. 2 shows a flow diagram of an image processing method according to one embodiment of the present application.
FIG. 3 illustrates a grouping diagram of image groupings provided according to one embodiment of the present application.
FIG. 4 shows a flow diagram of an image processing method according to another embodiment of the present application.
FIG. 5 shows a flow diagram of an image processing method according to yet another embodiment of the present application.
FIG. 6 shows a flow chart of an image processing method according to yet another embodiment of the present application.
FIG. 7 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 8 shows a block diagram of a content removal module in an image processing apparatus according to an embodiment of the present application.
FIG. 9 shows another block diagram of a content removal module in an image processing apparatus according to one embodiment of the present application.
Fig. 10 is a block diagram of a server for executing an image processing method according to an embodiment of the present application.
Fig. 11 is a storage unit for storing or carrying program codes for implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the development of society and the advancement of technology, monitoring systems are deployed in more and more places. In most monitoring scenarios, each camera can monitor only a fixed area, so only videos of single areas can be formed, and a user who needs to review several areas must view each area's video separately. Panoramic video monitoring systems have been developed to address this: cameras arranged at different positions capture images that are synthesized into a panoramic image. However, the panoramic image covers too large a range to follow a specific object, so a user who wants the monitoring video of one object must wade through a panoramic video containing much irrelevant content, which results in a poor user experience.
In view of the above problems, the inventors, through long-term research, proposed the image processing method, image processing apparatus, server, and storage medium provided in the embodiments of the present application. A plurality of cameras distributed at different positions perform monitoring and shooting; the captured images of the cameras are grouped by moving object; and after the set content in each group's images is removed, the images in each group are spliced and synthesized into a video file for the corresponding moving object. Because the set content is removed from the video files, its interference with users is reduced and the video files are convenient to view. The specific image processing method is described in detail in the following embodiments.
The following description will be made with respect to a distributed system to which the image processing method provided in the embodiment of the present application is applied.
Referring to fig. 1, fig. 1 shows a schematic diagram of a distributed system provided in an embodiment of the present application, where the distributed system includes a server 100 and a plurality of cameras 200 (the number of the cameras 200 shown in fig. 1 is 4), where the server 100 is connected to each camera 200 in the plurality of cameras 200, respectively, and is used to perform data interaction with each camera 200, for example, the server 100 receives an image sent by the camera 200, the server 100 sends an instruction to the camera 200, and the like, which is not limited specifically herein. In addition, the server 100 may be a cloud server or a traditional server, the camera 200 may be a gun camera, a hemisphere camera, a high-definition smart sphere camera, a pen container camera, a single-board camera, a flying saucer camera, a mobile phone camera, etc., and the lens of the camera may be a wide-angle lens, a standard lens, a telephoto lens, a zoom lens, a pinhole lens, etc., and is not limited herein.
In some embodiments, the plurality of cameras 200 are disposed at different positions for photographing different areas, and photographing areas of each two adjacent cameras 200 of the plurality of cameras 200 are adjacent or partially coincide. It can be understood that each camera 200 can correspondingly shoot different areas according to the difference of the angle of view and the setting position, and the shooting areas of every two adjacent cameras 200 are arranged to be adjacent or partially overlapped, so that the area to be shot by the distributed system can be completely covered. The plurality of cameras 200 may be arranged side by side at intervals in a length direction, and configured to capture images in the length direction area, or the plurality of cameras 200 may also be arranged at intervals in a ring direction, and configured to capture images in the ring area, and of course, the plurality of cameras 200 may further include other arrangement modes, which are not limited herein.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application. In the image processing method, a plurality of cameras distributed at different positions perform monitoring and shooting; the captured images are grouped by moving object; the set content in each group's images is removed; and the images in each group are spliced and synthesized into video files corresponding to the different moving objects, from which the set content has been removed, reducing its interference with users and facilitating viewing. In a specific embodiment, the image processing method is applied to the image processing apparatus 400 shown in fig. 7 and the server 100 (fig. 10) configured with the image processing apparatus 400. The specific flow of this embodiment is described below taking a server as an example; the server may be a cloud server or a traditional server, which is not limited herein. The plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlap. As shown in fig. 2, the image processing method may specifically include the following steps:
step S110: and acquiring shot images of the plurality of cameras.
In the embodiment of the application, the plurality of cameras distributed at different positions can shoot the shooting areas, and each camera can shoot the shot image of the corresponding shooting area. The plurality of cameras can upload shot images to the server, and the server can receive shot images uploaded by the plurality of cameras. The plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras are adjacent or partially overlapped, so that the server can acquire the shooting images of different shooting areas, and the shooting areas can form a complete area, namely, the server can acquire the shooting image of a large-range complete area. The method for uploading the shot images by the camera is not limited, and for example, the shot images may be uploaded according to a set interval duration.
In some embodiments, each of the plurality of cameras may be in an on state, so that the entire shooting area corresponding to the plurality of cameras may be shot, wherein each of the plurality of cameras may be in an on state at a set time period or all the time. Of course, each camera in the multiple cameras may also be in an on state or an off state according to the received control instruction, and the control instruction may include an instruction automatically sent by a server connected to the camera, an instruction sent by the electronic device to the camera through the server, an instruction generated by a user through triggering the camera, and the like, which is not limited herein.
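As a rough illustration of the collection described in step S110, the sketch below models a server-side buffer that receives uploads from several cameras. The `Capture` and `CaptureStore` names and fields are hypothetical; the patent does not prescribe any data layout or transport.

```python
from dataclasses import dataclass, field

@dataclass
class Capture:
    """One uploaded frame (hypothetical layout)."""
    camera_id: int
    timestamp: float     # shooting time, later used for splicing order
    pixels: bytes = b""  # placeholder for the image data

@dataclass
class CaptureStore:
    """Server-side buffer collecting uploads from every camera."""
    captures: list = field(default_factory=list)

    def receive(self, capture: Capture) -> None:
        """Called when a camera uploads a captured image."""
        self.captures.append(capture)

    def from_camera(self, camera_id: int) -> list:
        """All captures received from one camera."""
        return [c for c in self.captures if c.camera_id == camera_id]

# Two adjacent cameras each upload one frame.
store = CaptureStore()
store.receive(Capture(camera_id=1, timestamp=0.0))
store.receive(Capture(camera_id=2, timestamp=1.0))
```

Because every camera pushes into one shared store, the server sees the combined coverage of all shooting areas, matching the "large-range complete area" point above.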
Step S120: and acquiring shot images of at least one preset moving object in the shot images of the plurality of cameras.
In this embodiment, after acquiring the captured images of the plurality of cameras, the server may determine which of them contain at least one preset moving object. The server may make this determination for the captured images currently uploaded by the cameras, or for the images uploaded within a set time period before the current time. A preset moving object is a movable object specified in advance, such as a person, an animal, or a vehicle.
In some embodiments, the server may identify an image captured by each camera, and extract a captured image including at least one preset moving object from the image captured by each camera.
In some embodiments, the server may store information of a plurality of preset moving objects in advance, and the server may locally read the information of the plurality of preset moving objects stored in advance, where the information of the preset moving objects may include an image of the preset moving object, feature information of the preset moving object, and the like, which is not limited herein. In other embodiments, the information of the plurality of preset moving objects may also be sent to the server by the electronic device of the user, so that the shot image in which the preset moving object exists may be selected according to the user requirement.
In some embodiments, the server may use the information of the plurality of preset moving objects to identify the captured images of the plurality of cameras and extract, from all the images, the captured images containing at least one preset moving object. It is understood that a selected captured image may contain one preset moving object or several, which is not limited herein.
As an alternative embodiment, the information of the preset moving objects includes images of a plurality of preset moving objects. After the server acquires the captured images of the multiple cameras, it can match each acquired image against the images of the preset moving objects; if a captured image contains content matching the image of any preset moving object, the server determines that the preset moving object exists in that captured image. In this way, the captured images containing at least one preset moving object can be determined from all the captured images of the plurality of cameras.
Of course, the manner of acquiring the captured image in which the at least one preset moving object exists from the captured images of the plurality of cameras may not be limited.
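One way to picture step S120 is as a filter over detector output. In the sketch below, each capture carries the set of object labels some detector (not shown) reported for it; the label names, the `has_preset_object` helper, and the sample data are all illustrative assumptions, since the patent deliberately leaves the recognition mechanism open.

```python
# Hypothetical detector output: the labels a recognizer reported per frame.
captures = [
    {"camera": 1, "time": 0.0, "labels": {"person_A"}},
    {"camera": 2, "time": 1.0, "labels": set()},            # nothing of interest
    {"camera": 3, "time": 2.0, "labels": {"person_A", "car_B"}},
]
preset_objects = {"person_A", "car_B"}

def has_preset_object(capture, presets):
    """True if at least one preset moving object appears in the capture."""
    return bool(capture["labels"] & presets)

# Step S120: keep only captures showing at least one preset moving object.
selected = [c for c in captures if has_preset_object(c, preset_objects)]
```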
Step S130: and grouping the shot images with the moving objects according to different preset moving objects to obtain a plurality of first image groups, wherein the first image groups are a set of shot images containing the same preset moving object, and the preset moving objects corresponding to each first image group are different.
After acquiring the captured images in which at least one preset moving object exists, the server can group them in real time by preset moving object to obtain a plurality of first image groups, where the first image groups correspond one to one with the preset moving objects, i.e., each first image group corresponds to a different preset moving object. In this way, the server obtains all the captured images in which each preset moving object was photographed. It is understood that, since a single captured image may contain several preset moving objects, the same captured image may appear in the first image groups of different objects.
For example, referring to fig. 3, the preset moving object includes a target 1 and a target 2, when the target 1, the target 2 and the target 3 move in different camera areas, cameras in the corresponding areas capture images, and the captured images corresponding to the target 1 and the target 2 are grouped, so that an image group corresponding to the target 1 and an image group corresponding to the target 2 can be obtained.
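The grouping of fig. 3 can be sketched as a label-to-images map. The label names and dict layout below are hypothetical renderings of step S130, not the patent's implementation; note how a capture showing both targets lands in both groups.

```python
from collections import defaultdict

# Captures already selected in step S120, with hypothetical detector labels.
selected = [
    {"camera": 1, "time": 0.0, "labels": {"target_1"}},
    {"camera": 2, "time": 1.0, "labels": {"target_1", "target_2"}},
    {"camera": 3, "time": 2.0, "labels": {"target_2"}},
]
presets = {"target_1", "target_2"}

def group_by_object(captures, presets):
    """Step S130 sketch: one first image group per preset moving object;
    a capture showing several preset objects appears in several groups."""
    groups = defaultdict(list)
    for capture in captures:
        for label in capture["labels"] & presets:
            groups[label].append(capture)
    return dict(groups)

first_groups = group_by_object(selected, presets)
```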
Step S140: and removing the set content existing in the shot image of each first image group to obtain a plurality of second image groups.
In the embodiment of the present application, after obtaining the plurality of first image groups, the server may remove the setting content for the captured image of each of the plurality of first image groups. The setting content may be content that interferes when the user views the video file of the preset moving object, the setting content may include content such as a target person, a target object, and a target background, and the setting content may be stored locally by the server or set by the user, which is not limited herein.
In some embodiments, when removing the set content existing in the captured images of each first image group, the server may identify the set content in each captured image of the group and, if it is present, remove it; after this operation has been performed for every image in every first image group, the removal is complete. The server may identify the set content according to its feature information, image content, and the like, which is not limited herein.
In some embodiments, removing the set content in a captured image may include cropping the set content out of the image or blurring it. Blurring the set content may include, but is not limited to, reducing its sharpness or brightness.
The server may remove the setting contents existing in the captured images of each of the first image groups, and then may treat the plurality of first image groups from which the setting contents are removed as the plurality of second image groups.
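As a minimal stand-in for the blurring option described above, the sketch below applies a 3x3 box filter to a rectangular region of a grayscale image represented as a list of lists. A real system would operate on encoded frames with an image library; this toy version only makes "reducing the sharpness of the set content" concrete.

```python
def box_blur_region(image, top, left, bottom, right):
    """Blur the rectangle [top:bottom, left:right] of a 2D grayscale
    image with a 3x3 box filter; pixels outside the rectangle (e.g. the
    preset moving object itself) are left untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the source stays intact
    for y in range(top, bottom):
        for x in range(left, right):
            acc, n = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc // n
    return out

# A bright "set content" pixel in an otherwise dark 5x5 image gets dimmed.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 90
blurred = box_blur_region(img, 1, 1, 4, 4)   # blurred[2][2] is now 90 // 9 = 10
```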
Step S150: and according to the shooting time sequence of the shot images, splicing and synthesizing the shot images of each second image group in the plurality of second image groups to obtain a plurality of video files corresponding to the preset moving objects.
In this embodiment of the application, after obtaining the second image groups corresponding to the preset moving objects, the server may splice and synthesize the captured images of each second image group in shooting-time order to obtain video files corresponding to the plurality of preset moving objects, where the plurality of second image groups correspond one to one with the plurality of preset moving objects.
In some embodiments, the server may acquire the photographing time of the photographed image from the stored file information of the photographed image. The camera can send the shooting time to the server as one of the description information of the shot images when uploading the shot images, so that the server can obtain the shooting time of the shot images when receiving the shot images. Of course, the manner in which the server acquires the shooting time of the shot image is not limited, and for example, the server may search for the shooting time of the shot image from the camera.
In some embodiments, the server may sort the shooting times of all the shot images in each second image group from first to last for each second image group, so as to obtain the precedence order of the shooting times. And then according to the sequence, splicing and synthesizing all the shot images of each second image group to obtain a video file of a preset moving object corresponding to each second image group. That is, the captured images in the second image group constitute each frame of image in the video file, and the order of the images in the video file is the same as the order of the capturing time.
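The ordering just described can be sketched in a few lines: sorting a second image group by shooting time yields the frame sequence of the object's video file. The dict fields are assumed for illustration; actual video encoding is out of scope here.

```python
def synthesize_frames(image_group):
    """Step S150 sketch: order a second image group by shooting time;
    the ordered captures become the frames of the object's video file."""
    return sorted(image_group, key=lambda capture: capture["time"])

# Captures of one preset moving object, uploaded out of order by
# three adjacent cameras as the object walked past them.
group = [
    {"camera": 3, "time": 7.0},
    {"camera": 1, "time": 2.0},
    {"camera": 2, "time": 4.5},
]
frames = synthesize_frames(group)
```

The sorted frame order (camera 1, then 2, then 3) is exactly the movement track discussed below: adjacent cameras see the object one after another.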
In some embodiments, the server may further send the video file to the electronic device or a third-party platform (such as an APP, a web mailbox, etc.) for the user to view and download, so that the user can directly view a video formed from the captured images of the preset moving object.
In addition, because a preset moving object needs time to move from one shooting area to another, different cameras capture it in sequence and the shooting times of the captured images are ordered accordingly; a video spliced and synthesized by the server in shooting-time order can therefore reflect the object's movement track through the shooting areas formed by the cameras. And because the shooting areas of two adjacent cameras are adjacent or partially overlap, the shooting areas of the plurality of cameras form one complete area, so the spliced video file can reflect the activity of the preset moving object over a large area.
According to the image processing method provided by this embodiment, captured images containing at least one preset moving object are selected from the images shot by a plurality of cameras distributed at different positions; the selected images are grouped by preset moving object; and, after the set content in each group's images is removed, the images in each group are spliced and synthesized into video files corresponding to the different moving objects. This improves the monitoring effect; moreover, because the set content is removed from the video files, it does not interfere with the user, the user does not need to check the captured images of each shooting area separately, and the user experience is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to another embodiment of the present application. The method is applied to the server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and shooting areas of two adjacent cameras in the cameras are adjacent or partially overlapped. As will be described in detail with respect to the flow shown in fig. 4, the image processing method may specifically include the following steps:
step S210: and acquiring shot images of the plurality of cameras.
Step S220: and acquiring shot images of at least one preset moving object in the shot images of the plurality of cameras.
In the embodiment of the application, a user can send the feature information of a preset moving object to be viewed to the server through an electronic device, so that the server can identify, according to the feature information, the captured images of the at least one preset moving object among the captured images of the plurality of cameras. The feature information of a preset moving object characterizes its external appearance.
In some embodiments, when the preset moving object is a preset person, the feature information of the preset person may include a face image, wearing features, body-shape features, gait features, and the like; the specific feature information is not limited. When the preset moving object is a preset animal, its feature information may include fur features, body features, and the like, which is not limited herein. When the preset moving object is a preset vehicle, its feature information may include license plate information, vehicle color information, vehicle type information, and the like, which is not limited herein.
In some embodiments, the server may receive the feature information of the preset moving object sent by the electronic device, and select the shot images in which the at least one preset moving object exists from the shot images of the plurality of cameras according to the feature information. In this way, the user can send to the server the feature information of whichever preset moving object whose video file the user needs to view.
After receiving the feature information of the preset moving object sent by the electronic device, the server may identify, according to the feature information, the shot images of the plurality of cameras in which at least one preset moving object exists. In some embodiments, when content matching the feature information of the preset moving object exists in a shot image, it may be determined that the shot image is a shot image of the preset moving object.
In some embodiments, the preset moving object may include a preset person, and the feature information may include a face image of the preset person. The identifying, by the server, that the captured image of the at least one preset moving object exists in the captured images of the multiple cameras according to the feature information may include: determining a first captured image in which a person exists from the captured images of the plurality of cameras; and according to the face image, recognizing a shot image containing content matched with the face image from the first shot image, and obtaining shot images of the preset person in the shot images of the plurality of cameras.
When the server determines that the shot image of the preset moving object exists in the obtained shot images of the plurality of cameras, the server may determine a first shot image in which a person exists from the shot images of the plurality of cameras.
In some embodiments, the server may identify whether a person is present in a shot image according to the appearance features (e.g., body shape, etc.) of a person. As an alternative embodiment, the appearance feature used for determining whether a person is present in the shot image may be an appearance feature other than a face image. It can be understood that, since no determination based on a face image is needed when determining whether a person is present, the efficiency of determining the first captured images in which a person exists can be improved. Of course, the specific appearance features are not limited herein.
After acquiring the first captured images in which a person exists, the server may identify, from the first captured images, the shot images containing content matching the face image of at least one preset person. Specifically, face image recognition of the preset persons is performed on each first captured image: if the face image of at least one preset person exists in a first captured image, it is determined that at least one preset person exists in that image; if the face image of no preset person exists in the first captured image, it is determined that no preset person exists in that image.
The server can thereby recognize, from the first captured images, the shot images containing the preset person based on the face image of the preset person. Since the server first selects the first captured images in which a person exists from the shot images of the plurality of cameras, and only then determines the shot images of the preset person from the first captured images according to the face image, the server does not need to perform face image recognition on all of the shot images, which can improve the processing efficiency of the server.
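The two-stage selection described above (first a cheap check for the presence of any person, then face matching on the remaining frames) can be sketched as follows. This is a minimal illustration, not the patented implementation: `has_person` and `matches_face` are hypothetical callbacks standing in for a real body detector and face-comparison model.

```python
def find_preset_person_frames(frames, has_person, matches_face):
    """Two-stage selection: a cheap person check first, then face matching.

    `has_person(frame)` and `matches_face(frame)` are hypothetical detector
    callbacks; only frames passing the first (cheap) stage are handed to
    the more expensive face-matching stage.
    """
    # Stage 1: first captured images in which a person exists.
    first_images = [f for f in frames if has_person(f)]
    # Stage 2: among those, keep frames matching the preset person's face.
    return [f for f in first_images if matches_face(f)]


# Toy usage: frames are dicts; detection is simulated by stored labels.
frames = [
    {"id": 1, "people": []},
    {"id": 2, "people": ["alice"]},
    {"id": 3, "people": ["bob"]},
]
hits = find_preset_person_frames(
    frames,
    has_person=lambda f: bool(f["people"]),
    matches_face=lambda f: "alice" in f["people"],
)
```

Because stage 1 discards frames without any person, the face matcher never runs on empty scenes, which mirrors the efficiency argument made above.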
Step S230: and grouping the shot images with the moving objects according to different preset moving objects to obtain a plurality of first image groups, wherein the first image groups are a set of shot images containing the same preset moving object, and the preset moving objects corresponding to each first image group are different.
In the embodiment of the present application, step S230 may refer to the contents of the foregoing embodiments, and is not described herein again.
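The grouping of step S230 can be sketched as follows. This is a hedged illustration only: `detect_presets` is a hypothetical callback returning the preset moving objects found in a frame, and a frame containing several preset objects is added to each corresponding first image group.

```python
from collections import defaultdict


def group_by_preset_object(frames, detect_presets):
    """Build the first image groups: one group per preset moving object.

    `detect_presets(frame)` is a hypothetical detector callback; each
    group collects the shot images containing the same preset object.
    """
    groups = defaultdict(list)
    for frame in frames:
        for obj in detect_presets(frame):
            groups[obj].append(frame)
    return dict(groups)


# Toy usage on label dicts.
frames = [
    {"id": 1, "objects": ["alice"]},
    {"id": 2, "objects": ["alice", "bob"]},
    {"id": 3, "objects": ["bob"]},
]
first_groups = group_by_preset_object(frames, lambda f: f["objects"])
```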
Step S240: and reading the pre-stored setting content.
In the embodiment of the present application, the server may store in advance the setting content that needs to be removed from the shot images. The setting content stored by the server may include feature information, a content type, an image, etc. of the setting content, which is not limited herein. The setting content may be content that would interfere with the user when viewing the video corresponding to the preset moving object.
Step S250: and identifying the set content of the shot images of each first image group to obtain a target shot image with the set content in the shot images of each first image group and the set content in the target shot image.
In some embodiments, the setting content may include moving objects other than the preset moving object. It can be understood that a plurality of moving objects may exist in a shot image; the preset moving object is one of the plurality of moving objects and is the moving object the user is concerned with. Therefore, to avoid interference with the user, the other moving objects may be used as the setting content that needs to be removed, so that no content of the other moving objects exists in the subsequently obtained video file corresponding to the preset moving object. For example, when the preset moving object is the person A, the setting content may be all persons present in the shot images other than the person A.
Further, the server may perform recognition of the setting content for the captured image of each of the first image groups, and obtain a target captured image in which the setting content exists in the captured image of each of the first image groups, and the setting content in the target captured image, and may include:
identifying all moving objects present in each shot image of each of the first image groups; and determining, according to the recognition result, the target captured images in which the other moving objects exist among all the shot images of the first image group.
In some embodiments, the server may perform recognition of moving objects for each captured image in the first image group to recognize all moving objects present in each captured image. In one embodiment, when the moving object is a person, the moving object in the captured image may be identified according to feature information of the person, for example, by identifying a face of the person in the captured image, so as to determine all persons in the captured image.
After the moving objects are recognized in each shot image, the target captured images in which other moving objects exist among all the shot images of the first image group may be determined according to the recognition result, and the other moving objects in the target captured images may be determined. Since the preset moving object exists in every shot image of the first image group, when a shot image is recognized to contain a plurality of moving objects, it can be determined that other moving objects exist in that shot image. In addition, the server may determine, from all moving objects identified in the shot image, the moving objects other than the preset moving object. For example, the position of the preset moving object among the plurality of moving objects in the shot image may be determined according to the feature information of the preset moving object, so that the positions of the other moving objects in the shot image can be determined.
In the above manner, the server can obtain the target captured image in which the setting content exists in the captured images of each of the first image groups, and the setting content in the target captured image.
Step S260: and removing the set content of the target shooting image in each first image group to obtain a plurality of second image groups.
After obtaining the target captured image with the setting content in the captured image of each first image group and the setting content in the target captured image, the server may obtain a plurality of second image groups from which the setting content is removed from the captured images after removing the setting content in the target captured image in each first image group.
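Steps S250 and S260 together (identify the target captured images, then remove the setting content from them) can be sketched as below. This is a schematic assumption, not the claimed implementation: `detect_objects` stands in for a real detector and `erase_region` for an image masking or in-painting routine.

```python
def remove_setting_content(group, target, detect_objects, erase_region):
    """Sketch of steps S250/S260: find target captured images (frames that
    contain moving objects other than the preset one) and erase those
    objects. `detect_objects` and `erase_region` are hypothetical helpers.
    """
    cleaned = []
    for frame in group:
        others = [o for o in detect_objects(frame) if o != target]
        if others:  # this frame is a target captured image
            frame = erase_region(frame, others)
        cleaned.append(frame)
    return cleaned


# Toy usage on label dicts: "erasing" simply drops the other labels.
group = [
    {"id": 1, "objects": ["alice"]},
    {"id": 2, "objects": ["alice", "bob"]},
]
second_group = remove_setting_content(
    group,
    "alice",
    detect_objects=lambda f: f["objects"],
    erase_region=lambda f, others: {
        **f, "objects": [o for o in f["objects"] if o not in others]
    },
)
```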
Step S270: and according to the shooting time sequence of the shot images, splicing and synthesizing the shot images of each second image group in the plurality of second image groups to obtain a plurality of video files corresponding to the preset moving objects.
In the embodiment of the present application, step S270 may refer to the contents of the foregoing embodiments, and is not described herein again.
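The ordering performed in step S270 can be sketched as follows. This is an assumption-laden illustration: each frame is assumed to carry a `timestamp` field set by its camera, and a real system would pass each ordered sequence to a video encoder rather than return it.

```python
def stitch_group(frames):
    """Order a second image group by capture time before encoding."""
    return sorted(frames, key=lambda f: f["timestamp"])


def build_video_files(second_groups):
    """One time-ordered frame sequence per preset moving object; encoding
    the sequence into an actual video file is out of scope here."""
    return {obj: stitch_group(frames) for obj, frames in second_groups.items()}


# Toy usage: out-of-order frames get sorted into capture order.
groups = {"alice": [{"timestamp": 3}, {"timestamp": 1}, {"timestamp": 2}]}
videos = build_video_files(groups)
```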
According to the image processing method provided by the embodiment of the application, shot images in which at least one preset moving object exists are selected from the shot images captured by the plurality of cameras distributed at different positions, and the selected shot images are then grouped according to the different preset moving objects. The setting content in the shot images of each group is identified and removed, and the shot images in each group are then spliced and synthesized to form video files corresponding to the different moving objects. In this way, the monitoring effect is improved, the setting content is removed from the video files so that it does not interfere with the user, and the user does not need to separately check the shot images of every shooting area, thereby improving the user experience.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an image processing method according to still another embodiment of the present application. The method is applied to the server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and shooting areas of two adjacent cameras in the cameras are adjacent or partially overlapped. As will be described in detail with respect to the flow shown in fig. 5, the image processing method may specifically include the following steps:
step S310: and acquiring shot images of the plurality of cameras.
Step S320: and acquiring shot images of at least one preset moving object in the shot images of the plurality of cameras.
Step S330: and grouping the shot images with the moving objects according to different preset moving objects to obtain a plurality of first image groups, wherein the first image groups are a set of shot images containing the same preset moving object, and the preset moving objects corresponding to each first image group are different.
Step S340: and acquiring a first target image group which meets a content removal condition from the plurality of first image groups, wherein the removal condition at least comprises that the number of target shooting images in the shooting images of the first image group is greater than a first set threshold, and the target shooting images are shooting images with the set content.
In the embodiment of the application, when the server removes the setting content existing in the captured image of each first image group, the server may further filter the first image group from which the setting content needs to be removed.
Step S350: and removing the set content of the shot image in the first target image group to obtain a new first target image group, and taking the other image groups except the first target image group in the plurality of first image groups and the new first target image group as the plurality of second image groups.
In some embodiments, when the number of target captured images in a first image group is greater than the first set threshold, it indicates that a large number of shot images in that group contain the setting content, which is highly likely to interfere with the user; the server therefore subsequently removes the setting content from the first image groups in which the number of target captured images is greater than the first set threshold. When the number of target captured images in a first image group is not greater than the first set threshold, the user may not be disturbed, so the setting content may be left in place. By screening the first image groups with the removal condition, the number of first image groups from which the server removes the setting content can be reduced, improving the processing efficiency of the server.
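The removal-condition screening can be sketched as below. This is a minimal illustration under stated assumptions: `is_target_image` is a hypothetical predicate marking frames in which the setting content exists, and the threshold corresponds to the first set threshold above.

```python
def meets_removal_condition(group, is_target_image, threshold):
    """A first image group qualifies for content removal when more than
    `threshold` of its frames are target captured images (frames in
    which the setting content exists)."""
    return sum(1 for f in group if is_target_image(f)) > threshold


def split_by_condition(first_groups, is_target_image, threshold):
    """Split into first target image groups (setting content removed) and
    second target image groups (user is prompted before removal)."""
    first_targets = [g for g in first_groups
                     if meets_removal_condition(g, is_target_image, threshold)]
    second_targets = [g for g in first_groups
                      if not meets_removal_condition(g, is_target_image, threshold)]
    return first_targets, second_targets


# Toy usage: group 0 has two flagged frames (> threshold 1), group 1 none.
groups = [
    [{"has_setting": True}, {"has_setting": True}, {"has_setting": False}],
    [{"has_setting": False}, {"has_setting": False}],
]
to_clean, to_ask = split_by_condition(groups, lambda f: f["has_setting"], threshold=1)
```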
In some other embodiments, before step S350, the method further comprises:
and if a second target image group which does not meet the content removal condition exists in the plurality of first image groups, sending a prompt content to the electronic equipment, wherein the prompt content is used for prompting whether to remove the set content of the shot image in the second target image group.
It is understood that, when there is a second target image group that does not satisfy the above content removal condition among the first image groups, that is, a second target image group in which the number of target captured images is not greater than the first set threshold, the user may be prompted as to whether these second target image groups also need the setting content removed.
In this embodiment, step S350 may include:
when a determination instruction is received, removing the setting content from the shot images in the first target image group to obtain a new first target image group; removing the setting content from the shot images in the second target image group to obtain a new second target image group; and taking the new first target image group and the new second target image group as the plurality of second image groups.
If, after the prompt content is sent, a determination instruction sent by the user through the electronic device confirming the removal of the setting content from the shot images in the second target image groups is received, the setting content may be removed from the shot images in the first target image groups and also from the shot images in the second target image groups. That is, the setting content is removed from the shot images of every first image group, and a plurality of second image groups with the setting content removed are obtained.
In some embodiments, the embodiments of step S340 and step S350 can also be applied to the foregoing embodiments.
Step S360: and according to the shooting time sequence of the shot images, splicing and synthesizing the shot images of each second image group in the plurality of second image groups to obtain a plurality of video files corresponding to the preset moving objects.
The image processing method provided by the embodiment of the application selects shot images in which at least one preset moving object exists from the shot images captured by the plurality of cameras distributed at different positions, and then groups the selected shot images according to the different preset moving objects. After the setting content is removed from the groups that satisfy the content removal condition, the shot images in each group are spliced and synthesized to form video files corresponding to the different moving objects. In this way, the monitoring effect is improved, the setting content is removed from the video files so that it does not interfere with the user, and the user does not need to separately check the shot images of every shooting area, thereby improving the user experience.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating an image processing method according to still another embodiment of the present application. The method is applied to the server, the server is in communication connection with a plurality of cameras, the cameras are distributed at different positions, and shooting areas of two adjacent cameras in the cameras are adjacent or partially overlapped. As will be described in detail with respect to the flow shown in fig. 6, the image processing method may specifically include the following steps:
step S410: and acquiring shot images of the plurality of cameras.
In the embodiment of the present application, step S410 may refer to the contents of the foregoing embodiments, and is not described herein again.
Step S420: and acquiring a second shot image satisfying a screening condition including an image shot by a camera of a specified area or an image shot in a specified time period from the shot images of the plurality of cameras.
In the embodiment of the application, after the server acquires the shot images of the multiple cameras, before the shot images of the preset moving object are selected from the shot images of the multiple cameras, the shot images of the multiple cameras can be screened to meet the user requirements.
In some embodiments, the screening conditions for screening the captured images of the plurality of cameras may include: images taken by cameras in a designated area or images taken in a designated time period. It can be understood that the user may need to view the active video of the preset mobile object in the designated area, and therefore, the designated area may be selected as the filtering condition. The user may also need to view the active video of the preset mobile object in a specified time period, so that the specified time period can be selected as the filtering condition.
In one embodiment, the screening condition includes an image captured by a camera in a designated area. Before the acquiring of the captured images of at least one preset moving object in the captured images of the plurality of cameras, the method further includes:
sending data of a plurality of shooting areas corresponding to the plurality of cameras to a mobile terminal, wherein the plurality of cameras correspond to the plurality of shooting areas one to one; and receiving a selection instruction of a designated area among the plurality of shooting areas sent by the mobile terminal, and obtaining the designated area according to the selection instruction. The mobile terminal displays a selection interface according to the data of the plurality of shooting areas, and sends the selection instruction when detecting a selection operation on the shooting areas in the selection interface.
The server can acquire the shooting area corresponding to each camera and send the acquired data of the multiple shooting areas to the mobile terminal, so that the mobile terminal can generate a selection interface according to the data of the multiple shooting areas. The selection interface generated by the mobile terminal can comprise a whole area formed by a plurality of shooting areas, the whole area is divided into a plurality of shooting areas corresponding to a plurality of cameras, the plurality of cameras correspond to the plurality of shooting areas one by one, and a user can select a required shooting area.
In some embodiments, the data of the shooting areas sent by the server to the mobile terminal may include the names of the shooting areas, so that the mobile terminal can mark the name of each shooting area in the selection interface, making it convenient for the user to recognize the shooting areas. For example, in a home monitoring scene, the plurality of cameras include a camera 1, a camera 2, a camera 3, a camera 4, and a camera 5, whose shooting areas are a bedroom 1, a bedroom 2, a living room, a kitchen, and a study room, respectively. The mobile terminal can display the name of each of the plurality of shooting areas in the display interface, so that the moving video of the preset moving object in the designated area selected by the user can subsequently be obtained.
In some embodiments, when receiving the selection instruction sent by the electronic device, the server may determine the designated area selected by the user according to the selection instruction, and then determine whether the designated area is a continuous area, that is, whether there is a gap in the designated area. For example, suppose the shooting area 1, the shooting area 2, the shooting area 3, and the shooting area 4 are adjacent in sequence. If the designated area selected by the user consists of the shooting area 1, the shooting area 3, and the shooting area 4, the designated area skips the shooting area 2, and the designated area selected by the user is therefore not a continuous area.
Further, if the server determines that the designated area selected by the user is not a continuous area, prompting content for prompting the user that the designated area is discontinuous may be sent to the electronic device, so that the user may reselect the designated area. It can be understood that, usually, a user needs to view a moving video of a preset moving object in a continuous area, and if the specified area is not a continuous area, the requirement of the user cannot be met, so that sending a prompt content to prompt the user can facilitate the user to select the continuous area as the specified area.
Of course, the user may also send an instruction to the server to determine the selected designated area (non-continuous area) through the electronic device, and the server may perform the screening of the captured image according to the currently selected designated area.
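The continuity check on the designated area can be sketched as follows. This is an illustrative assumption: the shooting areas are taken to be linearly adjacent in a known order (area i borders area i+1), matching the example above.

```python
def is_continuous(selected, ordered_areas):
    """Check that the selected designated areas form one unbroken run in
    the adjacency order of the shooting areas. `ordered_areas` lists the
    areas so that consecutive entries are adjacent."""
    indices = sorted(ordered_areas.index(a) for a in selected)
    return all(j - i == 1 for i, j in zip(indices, indices[1:]))


# Toy usage mirroring the example: areas 1-4 adjacent in sequence.
areas = ["area1", "area2", "area3", "area4"]
```

If `is_continuous` returns `False`, the server would send prompt content asking the user to reselect, as described above.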
In some embodiments, the image shot by the camera in the designated area and the image shot in the designated time period may also be used as the screening condition, so that the moving video in the shooting area and the time period which are required to be viewed by the user and preset by the mobile object can be conveniently obtained. Of course, the specific screening conditions may not be limiting.
Step S430: and acquiring a shot image in which at least one preset moving object exists from the second shot image.
In this embodiment of the application, after the shot images of the plurality of cameras are screened, the shot images in which the at least one preset moving object exists may be obtained from the screened second shot images. The manner of obtaining the shot images in which the at least one preset moving object exists may refer to the contents of the foregoing embodiments, and is not described herein again.
It should be noted that the embodiments of step S420 and step S430 described above may also be applied to the foregoing embodiment, that is, in the foregoing embodiment, before the captured images in which at least one preset moving object exists are acquired, the second captured image that meets the filtering condition may also be filtered from the captured images of the plurality of cameras.
Step S440: and grouping the shot images with the moving objects according to different preset moving objects to obtain a plurality of first image groups, wherein the first image groups are a set of shot images containing the same preset moving object, and the preset moving objects corresponding to each first image group are different.
Step S450: and removing the set content existing in the shot image of each first image group to obtain a plurality of second image groups.
Step S460: and according to the shooting time sequence of the shot images, splicing and synthesizing the shot images of each second image group in the plurality of second image groups to obtain a plurality of video files corresponding to the preset moving objects.
In some embodiments, after obtaining the plurality of second image groups, the server may further filter the captured images in the second image group corresponding to each preset moving object before performing stitching synthesis on the captured images in each of the plurality of second image groups.
As an embodiment, the server may determine whether the second image group includes captured images of two adjacent cameras at the same time for a preset moving object; if the second image group comprises images shot by two adjacent cameras at the same time for a preset moving object, acquiring a first target image shot by a first camera in the two adjacent cameras at the same time and acquiring a second target image shot by a second camera in the two adjacent cameras at the same time; and screening an image with the optimal image quality parameter of the preset moving object from the first target image and the second target image.
In this embodiment, the areas shot by two adjacent cameras partially coincide, and therefore, when the preset moving object is located in the overlapping portion of the areas shot by the two adjacent cameras, the two adjacent cameras may simultaneously shoot the preset moving object at the same time, and therefore, the shot image acquired by the server may include two shot images shot at the same time and including the preset moving object. Therefore, a target image (referred to as a first target image) captured by a first camera (one of the two adjacent cameras) at the same time and a target image (referred to as a second target image) captured by a second camera (the other camera) at the same time can be acquired.
Further, after acquiring the first target image and the second target image, the server may acquire an image quality parameter of the preset moving object in the first target image (denoted as a first image quality parameter) and an image quality parameter of the preset moving object in the second target image (denoted as a second image quality parameter), respectively. The image quality parameters may include sharpness, brightness, sharpness, lens distortion, color, resolution, color gamut, purity, and the like, which are not limited herein.
After obtaining the first image quality parameter and the second image quality parameter, the server may compare image quality effects corresponding to the first image quality parameter and the second image quality parameter, obtain an optimal image quality parameter from the first image quality parameter and the second image quality parameter according to a comparison result, and then screen an image corresponding to the optimal image quality parameter from the first target image and the second target image. For example, when the image quality parameter includes a sharpness, an image corresponding to the highest sharpness may be obtained from the first target image and the second target image.
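The deduplication of same-instant frames from two overlapping cameras can be sketched as below. This is a hedged illustration: `quality_score` stands in for whichever image quality parameter is used (sharpness, definition, etc., per the embodiment above), and frames are assumed to carry `timestamp` fields.

```python
def pick_sharper(frame_a, frame_b, quality_score):
    """Keep the frame with the better image quality parameter; the scorer
    is a hypothetical metric (e.g. a sharpness estimate)."""
    return frame_a if quality_score(frame_a) >= quality_score(frame_b) else frame_b


def drop_overlap_duplicates(frames, quality_score):
    """Collapse same-instant frames from two adjacent overlapping cameras
    down to the one with the better quality, keeping time order."""
    best = {}
    for frame in frames:
        t = frame["timestamp"]
        best[t] = frame if t not in best else pick_sharper(best[t], frame, quality_score)
    return [best[t] for t in sorted(best)]


# Toy usage: at t=1 both cameras see the object; the sharper frame wins.
frames = [
    {"timestamp": 1, "camera": "cam1", "sharpness": 0.7},
    {"timestamp": 1, "camera": "cam2", "sharpness": 0.9},
    {"timestamp": 2, "camera": "cam2", "sharpness": 0.8},
]
kept = drop_overlap_duplicates(frames, lambda f: f["sharpness"])
```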
After the server performs the image screening on each second image group, the server may perform stitching and synthesizing on the captured images of each second image group subjected to the image screening to obtain a plurality of video files corresponding to the preset moving objects.
In some embodiments, after obtaining the video files corresponding to the plurality of preset moving objects, the server may send the video files corresponding to the plurality of preset moving objects to the electronic device. As one way, the server may send video files corresponding to a plurality of preset moving objects to the same electronic device. As another mode, the video file corresponding to each preset moving object is sent to the electronic device corresponding to each preset moving object.
In some embodiments, the sending, by the server, the video files corresponding to the plurality of preset moving objects to the electronic device may include: detecting whether the second image group corresponding to each preset moving object has the shot images of other preset moving objects or not; and if at least the shot images of other preset moving objects exist in the second image group corresponding to the first preset moving object, sending the video files corresponding to the plurality of preset moving objects to the electronic equipment and sending prompt contents to the electronic equipment, wherein the prompt contents are used for prompting that the video files corresponding to the first preset moving object have the contents of other preset moving objects.
It can be understood that the obtained video file corresponding to a certain preset moving object may contain content of other preset moving objects, and those other preset moving objects are also content that the user is interested in. Therefore, whether the video file corresponding to a preset moving object contains content of other preset moving objects can be determined by determining whether the second image group corresponding to that preset moving object includes shot images in which the preset moving object and other preset moving objects exist simultaneously.
Further, if at least the second image group corresponding to the first preset moving object has the shot images of other preset moving objects, the server may send the video files corresponding to the plurality of preset moving objects to the electronic device and send the prompting content to the electronic device at the same time, so as to prompt the user that the video files corresponding to the first preset moving object have the contents of other preset moving objects.
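The cross-object check that decides whether prompt content accompanies a video file can be sketched as follows. This is an illustrative assumption: `detect_objects` is a hypothetical detector, and only objects in the preset set count as interesting overlaps.

```python
def groups_needing_prompt(second_groups, preset_objects, detect_objects):
    """For each preset moving object, report which other preset objects
    also appear in its second image group, so the server can attach
    prompt content when sending the corresponding video file."""
    prompts = {}
    for target, frames in second_groups.items():
        others = {o for f in frames for o in detect_objects(f)
                  if o != target and o in preset_objects}
        if others:
            prompts[target] = sorted(others)
    return prompts


# Toy usage mirroring the evidence example: suspect B appears in A's group.
groups = {
    "victimA": [{"objects": ["victimA", "suspectB"]}],
    "suspectB": [{"objects": ["suspectB"]}],
}
prompts = groups_needing_prompt(groups, {"victimA", "suspectB"}, lambda f: f["objects"])
```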
In some embodiments, the server may further mark other preset mobile objects in the video file, so that the user can conveniently view the other preset mobile objects in the video file. For example, in a scene for police to check evidence, if the content of the suspect B exists in the video file corresponding to the victim a, the content of the suspect B is marked, so that the police can conveniently and quickly find out the video evidence.
Of course, the above embodiment may also be applied to the foregoing embodiment, that is, the above-mentioned manner of determining whether the content of the other preset moving object exists in the video file corresponding to the preset moving object and sending the prompt content may also be applied to the foregoing embodiment.
In some embodiments, the image processing method may be applied to monitor a scene of a preset moving object. The video file corresponding to the preset mobile object obtained by the server can be used as a monitoring video corresponding to the preset mobile object, and the server can send the monitoring video to the monitoring terminal corresponding to the preset mobile object, so that a user corresponding to the monitoring terminal can know the activity condition of the preset mobile object in time. For example, the preset mobile object may be an old person or a child, and the monitoring terminal may correspond to a guardian of the old person or the child, so that the guardian can timely know the conditions of the old person or the child at home, and the occurrence of an accident situation is avoided.
Further, after the server acquires the monitoring video of the preset mobile object, the server can automatically analyze the monitoring video to judge whether the preset mobile object in the monitoring video is abnormal, and when the judgment result represents that the preset mobile object is abnormal, the server can send alarm information to the monitoring terminal corresponding to the preset mobile object, so that a user corresponding to the monitoring terminal can timely perform corresponding processing. The abnormal condition may include falling, lying down, crying, onset of disease, etc., which is not limited herein. In addition, when the server sends the alarm information to the monitoring terminal, the server can also send the monitoring video or the video clip corresponding to the abnormal condition to the monitoring terminal, so that a user corresponding to the monitoring terminal can know the real condition of the preset mobile object in time.
In some embodiments, after receiving the monitoring video sent by the server, the monitoring terminal may send target instruction information to the server based on the monitoring video. Accordingly, after receiving the target instruction information, the server performs a corresponding operation in response to it, for example, dialing an alarm call or an emergency call.
In the image processing method provided by the embodiments of the present application, captured images are obtained from a plurality of cameras distributed at different positions; second captured images satisfying a screening condition are screened from the captured images of the plurality of cameras; captured images in which at least one preset moving object exists are then selected from the second captured images; the selected captured images are grouped according to different preset moving objects to obtain a plurality of first image groups; set content is removed from the captured images in the plurality of first image groups to obtain a plurality of second image groups; and the captured images of each of the plurality of second image groups are spliced and synthesized into video files corresponding to the different preset moving objects. Because the interference information of the set content is removed from the video files, the interference caused to users by the set content is reduced. This makes it convenient for a user to view the activity of a desired preset moving object in a specified shooting area, or across several shooting areas, within a specified time period, reduces the time spent on user operations and on reviewing video, meets the user's requirements, and improves the user experience.
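As a rough illustration of the pipeline summarized above (screen by area and time, select frames containing a preset moving object, group per object, strip other content, splice chronologically), the following sketch operates on frame metadata only. All names are hypothetical; a real system would operate on pixel data and encode actual video files.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    camera: str                                  # camera id (one per shooting area)
    timestamp: float
    objects: set = field(default_factory=set)    # object ids detected in the frame

def build_videos(frames, preset_objects, allowed_cameras, t0, t1):
    """Screen -> select -> group -> remove -> splice, per preset moving object."""
    # 1) screen by specified shooting areas and time period
    screened = [f for f in frames
                if f.camera in allowed_cameras and t0 <= f.timestamp <= t1]
    videos = {}
    for obj in preset_objects:
        # 2) + 3) frames containing this preset object: the "first image group"
        group = [f for f in screened if obj in f.objects]
        # 4) remove set content (other moving objects) -> "second image group";
        #    here we only prune metadata, a real system would erase pixels
        cleaned = [Frame(f.camera, f.timestamp, {obj}) for f in group]
        # 5) splice in chronological order of capture time
        cleaned.sort(key=lambda f: f.timestamp)
        videos[obj] = cleaned
    return videos
```

Each value in the returned dictionary corresponds to one per-object video file's frame sequence.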
Referring to fig. 7, a block diagram of an image processing apparatus 400 according to an embodiment of the present application is shown. The image processing apparatus 400 is applied to the above server, which is communicatively connected to a plurality of cameras distributed at different positions, the shooting areas of two adjacent cameras among the plurality of cameras being adjacent or partially overlapping. The image processing apparatus 400 includes: a first image acquisition module 410, a second image acquisition module 420, an image grouping module 430, a content removal module 440, and a video composition module 450. The first image acquisition module 410 is configured to acquire the captured images of the plurality of cameras; the second image acquisition module 420 is configured to acquire, from the captured images of the plurality of cameras, captured images in which at least one preset moving object exists; the image grouping module 430 is configured to group the captured images in which moving objects exist according to different preset moving objects to obtain a plurality of first image groups, each first image group being a set of captured images containing the same preset moving object; the content removal module 440 is configured to remove set content existing in the captured images of each first image group to obtain a plurality of second image groups; the video composition module 450 is configured to splice and synthesize the captured images of each of the plurality of second image groups in chronological order of their capture times, to obtain video files corresponding to a plurality of preset moving objects.
In some embodiments, referring to fig. 8, the content removal module 440 may include: a content reading unit 441, a content identification unit 442, and a removal execution unit 443. The content reading unit 441 is configured to read pre-stored set content; the content identification unit 442 is configured to perform recognition of the set content on the captured images of each first image group, and obtain target captured images, among the captured images of each first image group, in which the set content exists, as well as the set content in the target captured images; the removal execution unit 443 is configured to remove the set content from the target captured images in each first image group, to obtain the plurality of second image groups.
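The removal execution step can be pictured as masking out the recognized regions of set content in each target captured image. A minimal sketch, assuming the recognizer has already produced bounding boxes; the detector itself and any inpainting of the erased region are out of scope, and the function name is an assumption, not from the patent:

```python
def remove_set_content(image, boxes, fill=0):
    """Erase each (x0, y0, x1, y1) region reported by the recognizer.

    image is a row-major list of pixel rows; boxes are inclusive bounding
    boxes of the pre-stored set content found in this target captured image.
    """
    out = [row[:] for row in image]          # do not mutate the source frame
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                out[y][x] = fill             # blank out the set content
    return out
```

A production system would more likely fill the region from a background model or an inpainting algorithm rather than with a constant value.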
In this embodiment, the set content includes moving objects other than the preset moving objects. The content identification unit 442 may be specifically configured to: identify all moving objects present in each captured image of each first image group; and determine, according to the recognition result, the target captured images in which the other moving objects exist among all the captured images of the first image group, as well as the other moving objects present in the target captured images.
In some embodiments, referring to fig. 9, the content removal module 440 may include: an image group filtering unit 444 and a removal execution unit 443. The image group filtering unit 444 is configured to acquire, from the plurality of first image groups, a first target image group satisfying a content removal condition, the condition at least including that the number of target captured images among the captured images of the first image group is greater than a first set threshold, a target captured image being a captured image in which the set content exists; the removal execution unit 443 is configured to remove the set content from the captured images in the first target image group to obtain a new first target image group, and to use the image groups, among the plurality of first image groups, other than the first target image group, together with the new first target image group, as the plurality of second image groups.
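The image group filtering unit's condition (count of target captured images greater than a first set threshold) amounts to a simple partition of the first image groups. Names here are illustrative, not from the patent:

```python
def split_by_removal_condition(first_groups, has_set_content, threshold):
    """Partition first image groups by the content-removal condition.

    A group qualifies when its number of target captured images (frames in
    which the set content exists, per the has_set_content predicate) exceeds
    the threshold.
    """
    qualifying, others = [], []
    for group in first_groups:
        targets = sum(1 for frame in group if has_set_content(frame))
        (qualifying if targets > threshold else others).append(group)
    return qualifying, others
```

The qualifying groups would then go through content removal, while the others either pass through unchanged or trigger the prompt described below.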
In this embodiment, the image processing apparatus 400 may further include a content prompt module. The content prompt module is configured to, before the removal execution unit 443 removes the set content from the captured images in the first target image group to obtain a new first target image group and the resulting image groups are used as the plurality of second image groups, send prompt content to an electronic device if a second target image group that does not satisfy the content removal condition exists in the plurality of first image groups, the prompt content being used to prompt whether to remove the set content from the captured images in the second target image group. The removal execution unit 443 may be specifically configured to: upon receiving a determination instruction, remove the set content from the captured images in the first target image group to obtain a new first target image group; remove the set content from the captured images in the second target image group to obtain a new second target image group; and use the new first target image group, the new second target image group, and the other image groups as the plurality of second image groups.
In some embodiments, the second image acquisition module 420 may include an image screening unit and an image acquisition unit. The image screening unit is configured to acquire, from the captured images of the plurality of cameras, captured images satisfying a screening condition, the screening condition including images captured by the cameras of a specified area or images captured within a specified time period; the image acquisition unit is configured to acquire, from the captured images satisfying the screening condition, the captured images in which at least one preset moving object exists.
In this embodiment, the screening condition includes images captured by the cameras of a specified area. The image processing apparatus 400 may further include a data sending module and an instruction receiving module. The data sending module is configured to send data of a plurality of shooting areas corresponding to the plurality of cameras to a mobile terminal, the plurality of cameras being in one-to-one correspondence with the plurality of shooting areas; the instruction receiving module is configured to receive a selection instruction, sent by the mobile terminal, for a specified area among the plurality of shooting areas, and to obtain the specified area according to the selection instruction, the selection instruction being sent by the mobile terminal upon detecting a selection operation on the plurality of shooting areas in a selection interface, after the mobile terminal displays the selection interface according to the data of the plurality of shooting areas.
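The claims additionally require checking that the user-selected shooting areas form a contiguous region before applying the screening condition. One way to model this, purely as an assumption about how "adjacent or partially overlapping" shooting areas could be represented, is a breadth-first search over an adjacency map of area identifiers:

```python
from collections import deque

def areas_contiguous(selected, adjacency):
    """Check that the selected shooting areas form one connected region.

    adjacency maps each area id to the ids of areas whose shooting regions
    adjoin or overlap it, mirroring the camera layout in the claims.
    """
    if not selected:
        return False
    selected = set(selected)
    start = next(iter(selected))
    seen, queue = {start}, deque([start])
    while queue:
        area = queue.popleft()
        for nbr in adjacency.get(area, ()):
            if nbr in selected and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    # contiguous iff every selected area is reachable within the selection
    return seen == selected
```

When this check fails, the server would send the prompt content asking the user to reselect a contiguous specified area, as recited in claim 1.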
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
To sum up, the solution provided by the present application is applied to a server communicatively connected to a plurality of cameras, the cameras being distributed at different positions with the shooting areas of two adjacent cameras adjoining or partially overlapping. The method acquires the captured images of the plurality of cameras; acquires the captured images in which at least one preset moving object exists; groups the captured images in which moving objects exist according to different preset moving objects to obtain a plurality of first image groups, each first image group being a set of captured images containing the same preset moving object, with a different preset moving object corresponding to each first image group; removes set content existing in the captured images of each first image group to obtain a plurality of second image groups; and splices and synthesizes the captured images of each of the plurality of second image groups in chronological order of their capture times, to obtain video files corresponding to the plurality of preset moving objects. In this way, the captured images of a preset moving object across a plurality of shooting areas are spliced and synthesized, a complete monitoring video of the preset moving object captured in the plurality of shooting areas is obtained, and the monitoring effect is improved; the user does not need to check the captured images of each shooting area separately, and the user experience is improved.
Referring to fig. 10, a block diagram of a server according to an embodiment of the present application is shown. The server 100 may be a cloud server or a conventional server. The server 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, wherein the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. Using various interfaces and lines to connect various parts of the entire server 100, the processor 110 performs the various functions of the server 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the server 100 during use (such as phone books, audio and video data, and chat log data), and the like.
Referring to fig. 11, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 stores program code that can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 for performing any of the method steps of the methods described above. The program code can be read from or written into one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (8)

1. An image processing method, applied to a server, wherein the server is communicatively connected to a plurality of cameras, the plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap, the method comprising:
acquiring captured images of the plurality of cameras;
sending data of a plurality of shooting areas corresponding to the plurality of cameras to a mobile terminal, the plurality of cameras being in one-to-one correspondence with the plurality of shooting areas;
receiving a selection instruction, sent by the mobile terminal, for a specified area among the plurality of shooting areas; determining, according to the selection instruction, whether the specified area selected by the user is a contiguous area; and if the specified area selected by the user is not a contiguous area, sending to the mobile terminal prompt content for prompting the user that the selected specified area is not contiguous, so that the user reselects the specified area; wherein the selection instruction is sent by the mobile terminal upon detecting a selection operation on the plurality of shooting areas in a selection interface, after the mobile terminal displays the selection interface according to the data of the plurality of shooting areas;
acquiring, from the captured images of the plurality of cameras, captured images satisfying a screening condition, the screening condition comprising images captured by the cameras of the specified area;
acquiring, from the captured images satisfying the screening condition, captured images in which at least one preset moving object exists;
grouping the captured images in which moving objects exist according to different preset moving objects to obtain a plurality of first image groups, each first image group being a set of captured images containing the same preset moving object, with a different preset moving object corresponding to each first image group;
removing set content existing in the captured images of each first image group to obtain a plurality of second image groups, the set content comprising moving objects other than the preset moving objects; and
splicing and synthesizing the captured images of each of the plurality of second image groups in chronological order of their capture times, to obtain video files corresponding to the plurality of preset moving objects.
2. The method according to claim 1, wherein removing the set content existing in the captured images of each first image group to obtain the plurality of second image groups comprises:
reading pre-stored set content;
performing recognition of the set content on the captured images of each first image group, to obtain the target captured images, among the captured images of each first image group, in which the set content exists, and the set content in the target captured images; and
removing the set content from the target captured images in each first image group, to obtain the plurality of second image groups.
3. The method according to claim 2, wherein the set content comprises moving objects other than the preset moving objects, and performing the recognition of the set content on the captured images of each first image group, to obtain the target captured images in which the set content exists and the set content in the target captured images, comprises:
identifying all moving objects present in each captured image of each first image group; and
determining, according to the recognition result, the target captured images in which the other moving objects exist among all the captured images of the first image group, and determining the other moving objects present in the target captured images.
4. The method according to claim 1, wherein removing the set content existing in the captured images of each first image group to obtain the plurality of second image groups comprises:
acquiring, from the plurality of first image groups, a first target image group satisfying a content removal condition, the removal condition at least comprising that the number of target captured images among the captured images of the first image group is greater than a first set threshold, a target captured image being a captured image in which the set content exists; and
removing the set content from the captured images in the first target image group to obtain a new first target image group, and using the image groups, among the plurality of first image groups, other than the first target image group, together with the new first target image group, as the plurality of second image groups.
5. The method according to claim 4, wherein before removing the set content from the captured images in the first target image group to obtain the new first target image group, and using the image groups other than the first target image group together with the new first target image group as the plurality of second image groups, the method further comprises:
if a second target image group that does not satisfy the content removal condition exists in the plurality of first image groups, sending prompt content to an electronic device, the prompt content being used to prompt whether to remove the set content from the captured images in the second target image group;
and wherein removing the set content from the captured images in the first target image group to obtain the new first target image group, and using the image groups other than the first target image group together with the new first target image group as the plurality of second image groups, comprises:
upon receiving a determination instruction, removing the set content from the captured images in the first target image group to obtain a new first target image group;
removing the set content from the captured images in the second target image group to obtain a new second target image group; and
using the new first target image group, the new second target image group, and the other image groups as the plurality of second image groups.
6. An image processing apparatus, applied to a server, wherein the server is communicatively connected to a plurality of cameras, the plurality of cameras are distributed at different positions, and the shooting areas of two adjacent cameras among the plurality of cameras adjoin or partially overlap, the apparatus comprising: a first image acquisition module, a data sending module, an instruction receiving module, a second image acquisition module, an image grouping module, a content removal module, and a video composition module, the second image acquisition module comprising an image screening unit and an image acquisition unit, wherein:
the first image acquisition module is configured to acquire captured images of the plurality of cameras;
the data sending module is configured to send data of a plurality of shooting areas corresponding to the plurality of cameras to a mobile terminal, the plurality of cameras being in one-to-one correspondence with the plurality of shooting areas;
the instruction receiving module is configured to receive a selection instruction, sent by the mobile terminal, for a specified area among the plurality of shooting areas; determine, according to the selection instruction, whether the specified area selected by the user is a contiguous area; and if the specified area selected by the user is not a contiguous area, send to the mobile terminal prompt content for prompting the user that the selected specified area is not contiguous, so that the user reselects the specified area; the selection instruction being sent by the mobile terminal upon detecting a selection operation on the plurality of shooting areas in a selection interface, after the mobile terminal displays the selection interface according to the data of the plurality of shooting areas;
the image screening unit is configured to acquire, from the captured images of the plurality of cameras, captured images satisfying a screening condition, the screening condition comprising images captured by the cameras of the specified area;
the image acquisition unit is configured to acquire, from the captured images satisfying the screening condition, captured images in which at least one preset moving object exists;
the image grouping module is configured to group the captured images in which moving objects exist according to different preset moving objects to obtain a plurality of first image groups, each first image group being a set of captured images containing the same preset moving object;
the content removal module is configured to remove set content existing in the captured images of each first image group to obtain a plurality of second image groups, the set content comprising moving objects other than the preset moving objects; and
the video composition module is configured to splice and synthesize the captured images of each of the plurality of second image groups in chronological order of their capture times, to obtain video files corresponding to a plurality of preset moving objects.
7. A server, comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1 to 5.
8. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, the program code being callable by a processor to perform the method according to any one of claims 1 to 5.
CN201910579253.1A 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium Expired - Fee Related CN110191324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579253.1A CN110191324B (en) 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910579253.1A CN110191324B (en) 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium

Publications (2)

Publication Number Publication Date
CN110191324A CN110191324A (en) 2019-08-30
CN110191324B true CN110191324B (en) 2021-09-14

Family

ID=67724308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579253.1A Expired - Fee Related CN110191324B (en) 2019-06-28 2019-06-28 Image processing method, image processing apparatus, server, and storage medium

Country Status (1)

Country Link
CN (1) CN110191324B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113906728A (en) * 2020-04-17 2022-01-07 深圳市大疆创新科技有限公司 Image processing method and device, camera module and movable equipment
JP7532075B2 (en) 2020-04-21 2024-08-13 キヤノン株式会社 Image processing device, method and program for controlling the image processing device
CN111626123B (en) * 2020-04-24 2024-08-20 平安国际智慧城市科技股份有限公司 Video data processing method, device, computer equipment and storage medium
CN114972801B (en) * 2022-05-26 2025-03-28 咪咕文化科技有限公司 Video generation method, device, equipment and readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1658670A (en) * 2004-02-20 2005-08-24 上海银晨智能识别科技有限公司 Intelligent tracking monitoring system with multi-camera
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for multi-channel video information fusion processing and display in monitoring
CN102724482A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system)
CN103077387A (en) * 2013-02-07 2013-05-01 东莞中国科学院云计算产业技术创新与育成中心 Automatic detection method of freight train carriages in video
CN104660998A (en) * 2015-02-16 2015-05-27 苏州阔地网络科技有限公司 Relay tracking method and system
CN105915847A (en) * 2016-04-29 2016-08-31 浙江理工大学 Characteristics matching and tracking based video monitoring apparatus and method
CN108234961A (en) * 2018-02-13 2018-06-29 欧阳昌君 A kind of multichannel video camera coding and video flowing drainage method and system
CN207601823U (en) * 2017-12-06 2018-07-10 中科劲阳(北京)科技有限公司 A kind of data policy service arrangement device
CN108540754A (en) * 2017-03-01 2018-09-14 中国电信股份有限公司 Methods, devices and systems for more video-splicings in video monitoring
CN109726716A (en) * 2018-12-29 2019-05-07 深圳市趣创科技有限公司 A kind of image processing method and system

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE517900C2 (en) * 1999-12-23 2002-07-30 Wespot Ab Methods, monitoring system and monitoring unit for monitoring a monitoring site
JP2007079641A (en) * 2005-09-09 2007-03-29 Canon Inc Information processing apparatus, information processing method, program, and storage medium
JP4638361B2 (en) * 2006-02-16 2011-02-23 パナソニック株式会社 Imaging device
CN101075376B (en) * 2006-05-19 2010-11-03 无锡易斯科电子技术有限公司 Intelligent video traffic monitoring system based on multi-viewpoints and its method
US20100013935A1 (en) * 2006-06-14 2010-01-21 Honeywell International Inc. Multiple target tracking system incorporating merge, split and reacquisition hypotheses
CN101035271A (en) * 2007-04-12 2007-09-12 上海天卫通信科技有限公司 Video monitoring system and method for tracking the motive target in the low-speed mobile network
CN101626489B (en) * 2008-07-10 2011-11-02 苏国政 Method and system for intelligently identifying and automatically tracking objects under unattended condition
CN101527786B (en) * 2009-03-31 2011-06-01 西安交通大学 A method to enhance the definition of visually important areas in network video
CN103108198A (en) * 2011-11-09 2013-05-15 宏碁股份有限公司 Image generating apparatus and image adjusting method
US9712738B2 (en) * 2012-04-17 2017-07-18 E-Vision Smart Optics, Inc. Systems, devices, and methods for managing camera focus
CN102881100B (en) * 2012-08-24 2017-07-07 济南纳维信息技术有限公司 Entity StoreFront anti-thefting monitoring method based on video analysis
US9002109B2 (en) * 2012-10-09 2015-04-07 Google Inc. Color correction based on multiple images
JP6214426B2 (en) * 2014-02-24 2017-10-18 アイホン株式会社 Object detection device
CN105023278B (en) * 2015-07-01 2019-03-05 中国矿业大学 A kind of motion target tracking method and system based on optical flow method
US9449258B1 (en) * 2015-07-02 2016-09-20 Agt International Gmbh Multi-camera vehicle identification system
JP6758834B2 (en) * 2016-01-14 2020-09-23 キヤノン株式会社 Display device, display method and program
CN106971142B (en) * 2017-02-07 2018-07-17 深圳云天励飞技术有限公司 Image processing method and device
CN107566799A (en) * 2017-09-15 2018-01-09 泾县麦蓝网络技术服务有限公司 Home environment monitoring method and system based on mobile terminal control
CN108111818B (en) * 2017-12-25 2019-05-03 北京航空航天大学 Method and device for active perception of moving target based on multi-camera collaboration
CN108470353A (en) * 2018-03-01 2018-08-31 腾讯科技(深圳)有限公司 Target tracking method, device and storage medium
CN109862313B (en) * 2018-12-12 2022-01-14 科大讯飞股份有限公司 Video concentration method and device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1658670A (en) * 2004-02-20 2005-08-24 上海银晨智能识别科技有限公司 Multi-camera intelligent tracking and monitoring system
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for multi-channel video information fusion processing and display in monitoring
CN102724482A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system)
CN103077387A (en) * 2013-02-07 2013-05-01 东莞中国科学院云计算产业技术创新与育成中心 Automatic detection method of freight train carriages in video
CN104660998A (en) * 2015-02-16 2015-05-27 苏州阔地网络科技有限公司 Relay tracking method and system
CN105915847A (en) * 2016-04-29 2016-08-31 浙江理工大学 Characteristics matching and tracking based video monitoring apparatus and method
CN108540754A (en) * 2017-03-01 2018-09-14 中国电信股份有限公司 Method, device and system for multi-video splicing in video surveillance
CN207601823U (en) * 2017-12-06 2018-07-10 中科劲阳(北京)科技有限公司 Data policy service arrangement device
CN108234961A (en) * 2018-02-13 2018-06-29 欧阳昌君 Multi-channel camera encoding and video stream diversion method and system
CN109726716A (en) * 2018-12-29 2019-05-07 深圳市趣创科技有限公司 Image processing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Summarization and Retrieval System Based on Video Content Analysis; Ye Zexiong; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 2016-07-31; full text *

Also Published As

Publication number Publication date
CN110191324A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN110267008B (en) Image processing method, device, server and storage medium
CN110191324B (en) Image processing method, image processing apparatus, server, and storage medium
JP7266672B2 (en) Image processing method, image processing apparatus, and device
CN110267011B (en) Image processing method, image processing apparatus, server, and storage medium
WO2020057355A1 (en) Three-dimensional modeling method and device
CN110177258A (en) Image processing method, image processing apparatus, server, and storage medium
US9402033B2 (en) Image sensing apparatus and control method therefor
CN111163259A (en) Image capturing method, monitoring camera and monitoring system
CN110267010B (en) Image processing method, image processing apparatus, server, and storage medium
CN110278413A (en) Image processing method, image processing apparatus, server, and storage medium
KR101514061B1 (en) Wireless camera device for managing old and weak people and the management system thereby
CN110267009B (en) Image processing method, image processing apparatus, server, and storage medium
CN108449555A (en) Image fusion method and system
CN109376601B (en) Object tracking method based on high-speed ball, monitoring server and video monitoring system
CN107360366B (en) Photographing method, device, storage medium and electronic device
CN112565599A (en) Image shooting method and device, electronic equipment, server and storage medium
JP2014042160A (en) Display terminal, setting method of target area of moving body detection and program
CN106791703B (en) Method and system for monitoring a scene based on panoramic views
JP6809114B2 (en) Information processing equipment, image processing system, program
CN110266953B (en) Image processing method, image processing apparatus, server, and storage medium
CN110267007A (en) Image processing method, image processing apparatus, server, and storage medium
JP2003199097A (en) Monitoring system center apparatus, monitoring system center program and recording medium for recording monitoring system center program
CN110278414A (en) Image processing method, image processing apparatus, server, and storage medium
CN106888353A (en) Image capture method and device
JP2020072469A (en) Information processing apparatus, control method and program for information processing apparatus, and imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2021-09-14