WO2018133589A1 - Aerial photography method and device, and unmanned aerial vehicle - Google Patents
Aerial photography method and device, and unmanned aerial vehicle
- Publication number
- WO2018133589A1 (PCT/CN2017/115877)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- cameras
- images
- aerial
- splicing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
Definitions
- the invention relates to the technical field of drones, and in particular to an aerial photography method, an aerial photography device, and a drone.
- the current aerial photography system mainly consists of a drone and a remote control device.
- the user controls the drone through the remote control device, and the drone transmits the image captured by its aerial camera to the remote control device for viewing by the user.
- recently, virtual reality (VR) devices such as VR glasses have begun to be used as remote control devices.
- the drone transmits the captured image to the VR glasses in real time, the user watches the captured image in real time through the VR glasses, and the user can also control the posture and shooting angle of the drone through the VR glasses.
- because VR glasses completely isolate the human eye from the outside world, they provide an immersive experience for the user and improve the user experience to a certain extent.
- however, because the image transmitted by the drone to the VR glasses is a 2D image, the advantage of the VR glasses cannot be fully utilized, and this method cannot give the user a truly immersive experience.
- the main object of the embodiments of the present invention is to provide an aerial photography method, an aerial photography device, and a drone, which aim to give the user an immersive experience when performing aerial photography through the drone.
- an aerial photography method is proposed on the one hand, and the method comprises the following steps:
- images are collected through two cameras;
- the two images collected by the two cameras are spliced into one 3D image; and
- the 3D image is sent out.
- the image is a photo or video stream.
- splicing the two images collected by the two cameras into one 3D image includes:
- the two video streams collected by the two cameras are respectively downsampled into two video streams of a preset resolution, the preset resolution being lower than the original resolution;
- the two preset-resolution video streams are spliced into one 3D video stream.
- the two cameras are arranged side by side, and splicing the two images collected by the two cameras into one 3D image includes:
- the two images collected by the two cameras are spliced side by side to form a 3D image in the left-and-right format.
- specifically, the image captured by the left camera is spliced on the left, and the image captured by the right camera is spliced on the right.
- the method further includes: performing depth detection of the captured scene by using the 3D image to obtain depth information.
- the sending the 3D image outward comprises: transmitting the 3D image to a head mounted virtual reality device.
- an aerial camera device comprising:
- an image acquisition module for collecting images through two cameras;
- an image processing module configured to splice the two images collected by the two cameras into one 3D image; and
- an image sending module configured to send the 3D image outward.
- the image is a photo or video stream.
- the image processing module is configured to: respectively downsample the two video streams collected by the two cameras into two video streams of a preset resolution, and splice the two preset-resolution video streams into a 3D video stream, wherein the preset resolution is lower than the original resolution.
- the two cameras are arranged side by side, and the image processing module is configured to: splice the two images collected by the two cameras side by side to obtain a 3D image in the left-and-right format.
- the image processing module is configured to: splice the image collected by the left camera on the left side, and splice the image collected by the right camera on the right side.
- the device further includes a depth detecting module, configured to: perform depth detection of the shooting scene by using the 3D image to obtain depth information.
- a depth detecting module configured to: perform depth detection of the shooting scene by using the 3D image to obtain depth information.
- the image sending module is configured to: send the 3D image to the head mounted virtual reality device.
- the invention also proposes a drone comprising:
- one or more processors;
- a memory; and
- one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the aforementioned aerial photography method.
- An aerial photography method provided by an embodiment of the present invention collects two images through two cameras, splices the two collected images into a 3D image, and transmits it, so that the drone can provide 3D images during aerial photography and the user can view realistic 3D images in real time, giving the user an immersive experience and greatly improving the user's aerial photography experience.
- in addition, depth detection is performed through the 3D image, so that the drone can simultaneously realize 3D aerial photography, obstacle avoidance, and tracking by using a single set of binocular cameras (i.e., two cameras).
- FIG. 1 is a flow chart of an aerial photography method according to a first embodiment of the present invention;
- FIG. 2 is a flow chart of an aerial photography method according to a second embodiment of the present invention;
- FIG. 3 is a block diagram of an aerial photographing apparatus according to a third embodiment of the present invention;
- FIG. 4 is a block diagram of an aerial photographing apparatus according to a fourth embodiment of the present invention.
- the terms "first", "second", and the like in the present invention are used for the purpose of description only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
- features defined by "first" or "second" may explicitly or implicitly include at least one such feature.
- the technical solutions of the various embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when a combination of technical solutions is contradictory or impossible to implement, such a combination should be considered not to exist and not within the scope of protection claimed by the present invention.
- the aerial photographing method and aerial photographing device of the embodiments of the present invention are mainly applied to drones, but can of course also be applied to other aircraft, which is not limited by the present invention.
- the embodiments of the present invention are described in detail by taking application to an unmanned aerial vehicle as an example.
- an aerial photography method according to a first embodiment of the present invention is proposed.
- the method includes the following steps:
- the drone is provided with two cameras to form a set of binocular cameras.
- the two cameras are preferably arranged side by side; of course, they can also be staggered, that is, with the two cameras not on the same horizontal line.
- the two cameras are separated by a certain distance; in theory, the larger the separation distance, the better.
- the drone acquires images simultaneously (synchronously) through two cameras, and the acquired images may be photos or video streams.
- the two images collected by the two cameras are spliced side by side, preferably with the image collected by the left camera spliced on the left and the image collected by the right camera on the right, finally obtaining a 3D (stereoscopic) image in the left-and-right format.
- alternatively, the two images collected by the two cameras may be spliced one above the other to obtain a 3D image in the top-and-bottom format.
- the 3D image is a 3D photo or a 3D video stream.
- before performing image splicing, the drone first reduces the resolution of the original images and then splices the reduced-resolution images, which reduces the final 3D image size, avoids excessive consumption of bandwidth resources during subsequent transmission, increases the transmission speed, and improves the real-time performance of image transmission.
- specifically, the drone first downsamples the two video streams collected by the two cameras into two video streams of a preset resolution, and then splices the two preset-resolution video streams into a 3D video stream, wherein the preset resolution is lower than the original resolution.
- for example, the two cameras of the drone each capture a 4K-resolution video stream; the drone downsamples the two 4K video streams into two 720P video streams, where the sampling can use a general downsampling algorithm (for example, merging four pixels into one pixel); then, while keeping the frames of the two 720P streams synchronized, the picture from the left camera is placed on the left and the picture from the right camera on the right, splicing them into one 3D video stream.
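The downsample-then-splice step above can be sketched as follows, assuming NumPy arrays of uint8 frames. Note the sketch implements the "four pixels into one" rule literally as a single 2×2 merge (which turns 4K into 1080P; reaching 720P as in the example would take a further scaling pass):

```python
import numpy as np

def downsample_2x2(frame: np.ndarray) -> np.ndarray:
    """Merge each 2x2 pixel block into one pixel by averaging
    (the 'four pixels into one pixel' rule from the text)."""
    h, w = frame.shape[:2]
    h, w = h - h % 2, w - w % 2          # drop odd edge rows/cols
    f = frame[:h, :w].astype(np.uint16)  # widen so the sum cannot overflow
    merged = (f[0::2, 0::2] + f[0::2, 1::2] +
              f[1::2, 0::2] + f[1::2, 1::2]) // 4
    return merged.astype(frame.dtype)

def splice_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Left camera frame on the left, right camera frame on the right."""
    return np.hstack([left, right])

# Two synthetic "720P" frames stand in for downsampled camera output.
left = np.zeros((720, 1280, 3), dtype=np.uint8)
right = np.full((720, 1280, 3), 255, dtype=np.uint8)
frame_3d = splice_side_by_side(left, right)
print(frame_3d.shape)   # (720, 2560, 3): the 2560*720 left-right format
```

Frame synchronization itself is outside this sketch; in practice both camera streams would be timestamped and paired before splicing.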
- in addition, the drone can save the original images captured by the two cameras in local storage space; further, before saving, the original images may be compressed to save storage space, for example by compressing the video stream into an H.265-format video file.
- the drone transmits the obtained 3D image, for example, to a remote control device or terminal device that has established a wireless communication connection with the drone, such as a mobile phone, a tablet computer, or a head-mounted virtual reality (VR) device (such as VR glasses or a VR helmet), or uploads it to a server via a wireless communication network.
- before transmitting the 3D image, the drone further compresses the 3D image to reduce its size, improve transmission efficiency, and enable real-time transmission; for example, a 3D video stream is compressed into an H.264-format video stream and then sent out.
- the drone uses the 3D image to perform depth detection of the captured scene and acquire depth information, which can be used to implement functions such as target ranging, face recognition, gesture recognition, and target tracking; combined with the drone's attitude information, the depth information can also be used for obstacle avoidance (such as forward obstacle avoidance), so that one set of binocular cameras can simultaneously support aerial photography, obstacle avoidance, tracking, and ranging.
- depth detection is performed by using a 3D image, that is, depth detection is performed by using a difference (parallax) between left and right or upper and lower images in a 3D image (such as a 3D video stream).
- Parallax is the difference in apparent direction when the same target is observed from two points separated by a certain distance; therefore, the images of the same target obtained by binocular cameras (a left camera and a right camera) at different positions exhibit parallax. The closer the target is to the cameras, the larger the parallax between the two images. Therefore, the distance from the target to the camera, that is, the depth of the target, can be calculated from the magnitude of the target's parallax in the two images obtained by the binocular cameras.
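For rectified stereo cameras, the relationship above reduces to depth = focal length × baseline / disparity. A minimal sketch, where the focal length and baseline values are made-up calibration numbers, not figures from the patent:

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float,
                         baseline_m: float) -> float:
    """Distance to a target (meters) from its pixel disparity between the
    left and right images of a rectified stereo pair."""
    if disparity_px <= 0:
        return float("inf")          # no parallax: target at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 700 px focal length, 12 cm camera baseline.
print(depth_from_disparity(35.0, 700.0, 0.12))   # ~2.4 m
print(depth_from_disparity(70.0, 700.0, 0.12))   # larger disparity -> nearer
```

This matches the text's observation: the nearer the target, the larger the disparity, so depth falls as disparity grows.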
- For depth detection, the image is divided into several effective areas, the target distance of each area is calculated in turn, and the distances and area orientations are fed back to the flight controller, which can then avoid obstacles according to the distance and orientation of targets ahead.
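The region-wise feedback described above might look like this sketch: a per-pixel depth map is split into a coarse grid and each cell reports its nearest distance (the 3×3 grid size and the synthetic scene are illustrative, not from the patent):

```python
import numpy as np

def nearest_per_region(depth_map: np.ndarray,
                       rows: int = 3, cols: int = 3) -> np.ndarray:
    """Split a per-pixel depth map (meters) into rows x cols regions and
    return the minimum (nearest-obstacle) distance of each region."""
    h, w = depth_map.shape
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = depth_map[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            out[r, c] = cell.min()
    return out

depth = np.full((90, 120), 20.0)     # open scene, everything 20 m away
depth[30:60, 40:80] = 2.5            # obstacle filling the central region
grid = nearest_per_region(depth)
print(grid[1, 1])                    # center cell reports the 2.5 m obstacle
```

The flight controller would then act on the (distance, orientation) pair of whichever cell falls below a safety threshold.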
- when the pitch angle is greater than a preset pitch angle (which can be set according to actual needs), the drone prompts the user, through the remote control device such as VR glasses, that the obstacle avoidance function is disabled and/or maintains a hovering state.
- when the user selects a target to be tracked, the drone adjusts its own posture and the pan/tilt to stay aligned with the selected target; because target tracking based on depth information is more accurate than previous planar-vision methods, the drone can realize a tracking function of practical value.
- when the user triggers a photographing instruction, for example by waving a hand or making a two-hand frame gesture in front of the remote control device (such as VR glasses), the drone takes two full-resolution photos through the two cameras, stores them locally, and splices the two photos into a 3D photo.
- further, the drone can also improve image quality using the respective pictures of the two cameras, for example by performing denoising or background blurring.
- the two photos taken by the binocular cameras can be matched by feature points to find a completely overlapping area; the picture in this area is equivalent to two shots of the same scene, and superimposing multiple shots of the same picture (for example by the simplest weighted average) can effectively reduce noise.
- the foreground and background of the picture can also be distinguished, and the background can be blurred with a blurring algorithm (such as the simplest Gaussian blur filter), thereby producing a background-blur effect.
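Both enhancement ideas can be sketched in a few lines, assuming the overlapping area is already aligned and using a hand-built separable Gaussian kernel (NumPy only; kernel size and thresholds are illustrative):

```python
import numpy as np

def denoise_by_average(shot_a: np.ndarray, shot_b: np.ndarray,
                       w_a: float = 0.5) -> np.ndarray:
    """Weighted average of two aligned shots of the same scene;
    independent sensor noise partially cancels."""
    return (w_a * shot_a.astype(np.float64) +
            (1.0 - w_a) * shot_b.astype(np.float64))

def gaussian_blur(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Separable Gaussian blur of a grayscale image (zero-padded borders)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                      # normalize to sum 1
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def blur_background(img: np.ndarray, depth_map: np.ndarray,
                    fg_threshold_m: float) -> np.ndarray:
    """Keep pixels nearer than the threshold sharp; blur everything else."""
    blurred = gaussian_blur(img)
    return np.where(depth_map < fg_threshold_m, img, blurred)
```

`blur_background` shows where the depth information from the binocular pair plugs in: the depth map separates foreground from background before the blur is applied.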
- an aerial photography method according to a second embodiment of the present invention is proposed.
- the method includes the following steps:
- the binocular cameras of the drone are arranged side by side, and the drone simultaneously collects the video stream through the binocular camera.
- the two video streams collected by the two cameras are respectively sampled into two preset resolution video streams.
- the two preset resolution video streams are spliced side by side to obtain a left and right format 3D video stream.
- the drone first downsamples the two video streams collected by the binocular cameras into two video streams of a preset resolution, and then splices the two preset-resolution streams side by side, preferably with the video stream collected by the left camera on the left and that of the right camera on the right, finally obtaining a 3D video stream in the left-and-right format, wherein the preset resolution is lower than the original resolution.
- for example, the two cameras of the drone each shoot a 4K-resolution video stream; the drone first downsamples the two 4K streams into two 720P-format streams, and then splices the two 720P streams left and right (or top and bottom) into a 3D video stream, for example with a resolution of 2560*720 in the left-and-right format.
- in addition, the drone also saves the 4K-resolution video streams captured by the two cameras in local storage space; further, before saving, the original video streams may be compressed to save storage space, for example by compressing a 4K-resolution video stream into an H.265-format video file.
- the 3D video stream is compressed and sent to the VR glasses.
- the drone compresses the 3D video stream into an H.264-format video stream and then transmits it to the VR glasses in real time.
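As an illustrative sketch of such an encode-and-stream step (not the patent's actual pipeline), raw spliced frames could be piped to an external `ffmpeg` process for H.264 encoding; the function below only builds the argument list, and the RTP target URL is an assumption:

```python
def build_h264_stream_cmd(width: int, height: int, fps: int,
                          target_url: str) -> list:
    """ffmpeg argument list: raw BGR frames on stdin -> H.264 RTP stream."""
    return [
        "ffmpeg",
        "-f", "rawvideo",              # uncompressed frames arrive on stdin
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                     # read input from stdin
        "-c:v", "libx264",             # H.264 encoder
        "-preset", "ultrafast",        # favor latency over compression ratio
        "-tune", "zerolatency",
        "-f", "rtp",
        target_url,
    ]

cmd = build_h264_stream_cmd(2560, 720, 30, "rtp://192.168.1.10:5004")
print(" ".join(cmd))
```

In use, the command would be launched with `subprocess.Popen(cmd, stdin=subprocess.PIPE)` and each spliced 2560×720 frame written to its stdin; the `ultrafast`/`zerolatency` pairing trades file size for the real-time behavior the text calls for.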
- after the VR glasses receive the 3D video stream, they play it immediately, so the user can watch the 3D video captured by the drone in real time; the picture appears more realistic, giving the user an immersive feeling and greatly improving the user experience.
- the user can control the flight attitude and shooting angle of the drone through the VR glasses.
- when the user triggers the camera command, for example by waving a hand or making a two-hand frame gesture in front of the VR glasses, the drone takes two full-resolution photos through the two cameras, stores them locally, splices the two photos into a 3D photo, and sends it back to the VR glasses so that the user can view the captured 3D photo in real time; further, the drone can also improve image quality using the respective pictures of the two cameras, for example by performing denoising or background blurring.
- S25: perform depth detection of the shooting scene by using the 3D video stream to obtain depth information.
- the drone also uses the 3D video stream to perform depth detection of the shooting scene to acquire depth information.
- the depth detection is performed by using a 3D video stream, that is, using the difference (parallax) between the left and right video streams in the 3D video stream to implement depth detection.
- the depth information is used to implement obstacle avoidance (such as forward obstacle avoidance), face recognition, gesture recognition, target tracking, and the like; target distance measurement can be realized by combining the drone's attitude information with the depth information.
- the specific implementation process is the same as in the prior art and will not be described here.
- in this embodiment, the two video streams captured by the binocular cameras are spliced into one 3D video stream, the 3D video stream is transmitted to the VR glasses in real time for the user to view, and depth detection is performed through the 3D video stream, so that the drone can simultaneously realize 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions with one set of binocular cameras, without using two sets of binocular cameras (i.e., four cameras) to implement 3D shooting and depth detection separately, thereby realizing multiple functions at a lower cost.
- the device includes an image acquisition module, an image processing module, and an image transmission module, wherein:
- Image acquisition module: used to capture images through two cameras.
- the drone is provided with two cameras to form a set of binocular cameras.
- the two cameras are preferably arranged side by side; of course, they can also be staggered, that is, with the two cameras not on the same horizontal line.
- the two cameras are separated by a certain distance; in theory, the larger the separation distance, the better.
- the image acquisition module acquires images simultaneously (synchronously) through two cameras, and the acquired images may be photos or video streams.
- Image processing module: used to splice the two images acquired by the two cameras into one 3D image.
- the image processing module splices the two images collected by the two cameras side by side, preferably with the image collected by the left camera on the left and the image collected by the right camera on the right, finally obtaining a 3D (stereoscopic) image in the left-and-right format; the image processing module can also splice the two images one above the other, finally obtaining a 3D image in the top-and-bottom format.
- the 3D image is a 3D photo or a 3D video stream.
- before performing image splicing, the image processing module first reduces the resolution of the original images and then splices the reduced-resolution images, which reduces the final 3D image size, avoids excessive consumption of bandwidth resources during subsequent transmission, increases the transmission speed, and improves the real-time performance of image transmission.
- specifically, the image processing module first downsamples the two video streams collected by the two cameras into two video streams of a preset resolution, and then splices the two preset-resolution video streams into a 3D video stream, wherein the preset resolution is lower than the original resolution.
- for example, the two cameras of the drone each capture a 4K-resolution video stream; the image processing module downsamples the two 4K video streams into two 720P-format video streams using a general downsampling algorithm (for example, combining 4 pixels into one pixel); then, keeping the frames of the two 720P streams synchronized, the picture taken by the left camera is placed on the left and the picture taken by the right camera on the right, and the two 720P streams are spliced left and right into a 3D video stream with a resolution of 2560*720.
- the image processing module can also save the original images captured by the two cameras in local storage space; further, before saving, the image processing module compresses the original images to save storage space, for example by compressing the video stream into an H.265-format video file.
- Image sending module: used to send the 3D image outward.
- the image sending module sends the obtained 3D image in real time (or at intervals), for example to a remote control device or terminal device that has established a wireless communication connection with the drone, such as a mobile phone, a tablet computer, or a head-mounted virtual reality device (such as VR glasses or a VR helmet), or uploads it to a server via a wireless communication network.
- before transmitting the 3D image, the image processing module further compresses the 3D image to reduce its size, improve transmission efficiency, and enable real-time transmission; for example, a 3D video stream is compressed into an H.264-format video stream and then sent out.
- the image acquisition module takes two full-resolution photos through two cameras.
- the image processing module splices the two photos taken into a 3D photo.
- the image processing module can also improve image quality using the respective pictures of the two cameras, for example by performing denoising or background blurring.
- the two photos taken by the binocular cameras can be matched by feature points to find a completely overlapping area; the picture in this area is equivalent to two shots of the same scene, and the image processing module superimposes the multiple shots of the same picture (for example by the simplest weighted average) to effectively reduce noise.
- the image processing module can also blur the background with a blurring algorithm (such as the simplest Gaussian blur filter), thereby producing a background-blur effect.
- the aerial photographing device of the embodiment of the present invention collects two images through two cameras, splices the two collected images into a 3D image, and transmits it, so that the drone can provide 3D images during aerial photography and the user can view realistic 3D images in real time, bringing the user an immersive experience and greatly enhancing the user's aerial photography experience.
- an aerial photographing apparatus according to a fourth embodiment of the present invention is proposed.
- a depth detecting module is added relative to the third embodiment, and the depth detecting module is configured to: perform depth detection of the shooting scene by using the 3D image to obtain depth information.
- the depth information can be used for obstacle avoidance (such as forward obstacle avoidance), so that one set of binocular cameras can simultaneously achieve various functions such as aerial photography, obstacle avoidance, tracking, and ranging.
- depth detection is performed by using a 3D image, that is, depth detection is performed by using a difference (parallax) between left and right or upper and lower images in a 3D image (such as a 3D video stream).
- Parallax is the difference in apparent direction when the same target is observed from two points separated by a certain distance; therefore, the images of the same target obtained by binocular cameras (a left camera and a right camera) at different positions exhibit parallax. The closer the target is to the cameras, the larger the parallax between the two images, so the depth detection module can calculate the distance from the target to the camera, that is, the depth of the target, from the magnitude of the target's parallax in the two images obtained by the binocular cameras, thereby achieving depth detection.
- when the preset pitch angle is exceeded, the depth detection module prompts the user that the obstacle avoidance function is disabled and/or the drone is kept in a hovering state.
- when the user selects a target to be tracked, the drone adjusts its own posture and the pan/tilt to stay aligned with the selected target; because target tracking based on depth information is more accurate than previous planar-vision methods, the drone can realize a tracking function of practical value.
- in this embodiment, the 3D image is transmitted to the user in real time and depth detection is performed through the 3D image, so that the drone can simultaneously realize 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions with one set of binocular cameras (that is, two cameras), without using two sets of binocular cameras (i.e., four cameras) to implement 3D shooting and depth detection separately, thereby realizing multiple functions at a lower cost.
- the invention also proposes a drone, the drone comprising: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform an aerial photography method.
- the aerial photography method includes the steps of: acquiring images by two cameras; splicing two images acquired by the two cameras into one 3D image; and transmitting the 3D images outward.
- the aerial photography method described in this embodiment is the aerial photography method according to the above embodiment of the present invention, and details are not described herein again.
- the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
- An aerial photography method provided by an embodiment of the present invention collects two images with two cameras, splices them into one 3D image and transmits it, so that the drone can provide a 3D image during the aerial photography process. The user can view realistic 3D images in real time, which gives an immersive, on-the-scene feeling and greatly improves the user's aerial photography experience.
- depth detection is also performed on the 3D image, so that the drone can simultaneously realize a variety of functions, such as 3D aerial photography, obstacle avoidance, tracking and ranging, with a single set of binocular cameras (i.e., two cameras), without needing two sets of binocular cameras (i.e., four cameras) to realize 3D shooting and depth detection separately; these functions are therefore achieved at a lower cost, and the invention has industrial applicability.
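The acquire/splice/transmit steps described above can be sketched in code. This is a minimal illustration, not the patented implementation: it assumes the common side-by-side stereo packing (the description does not fix a packing format), and the `splice_to_3d` name and frame sizes are invented for the example.

```python
import numpy as np

def splice_to_3d(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two synchronized binocular frames into one side-by-side 3D frame."""
    if left.shape != right.shape:
        raise ValueError("binocular frames must have identical dimensions")
    # Left-eye view in the left half, right-eye view in the right half;
    # a 3D-capable display splits the frame back into the two views.
    return np.hstack((left, right))

# Two synthetic 480x640 RGB frames stand in for the two cameras.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
frame_3d = splice_to_3d(left, right)
print(frame_3d.shape)  # (480, 1280, 3)
```

In a real pipeline the spliced frame would then be compressed and sent over the video downlink; top-and-bottom packing works the same way with `np.vstack`.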
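The depth detection mentioned above reduces, for a rectified binocular pair, to standard triangulation: a point with pixel disparity d seen by cameras of focal length f (in pixels) and baseline B (in metres) lies at depth Z = f·B / d. A minimal sketch with hypothetical rig parameters (the description gives no camera specifications):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulate depth in metres from pixel disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    # Z = f * B / d: larger disparity means the point is closer.
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 10 cm baseline, 35 px measured disparity.
print(depth_from_disparity(35.0, 700.0, 0.10))  # 2.0 (metres)
```

Obstacle avoidance and ranging then amount to thresholding or reporting Z for the points of interest in the depth map.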
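For the tracking behaviour, the description only says the drone re-aligns itself and the pan/tilt with the selected target. One simple way to turn a target's pixel position into gimbal commands is a linear field-of-view mapping; this is a small-angle approximation, and the function name, FOV values and control scheme are illustrative assumptions, not the patent's method:

```python
def gimbal_correction(target_px, image_size, hfov_deg, vfov_deg):
    """Yaw/pitch offsets in degrees that would re-centre the tracked target."""
    x, y = target_px
    w, h = image_size
    # Normalised offset of the target from the image centre, in [-0.5, 0.5].
    dx = (x - w / 2) / w
    dy = (y - h / 2) / h
    # Linear FOV mapping: a full image width of offset corresponds to the full horizontal FOV.
    return dx * hfov_deg, dy * vfov_deg

# Target at pixel (480, 180) in a 640x360 frame, with a 90 x 60 degree field of view.
print(gimbal_correction((480, 180), (640, 360), 90.0, 60.0))  # (22.5, 0.0)
```

A real controller would feed these offsets into a rate loop (e.g. PID) rather than commanding the full correction in one step.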
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a device, an aerial photography method, and an unmanned aerial vehicle. The method comprises the steps of: acquiring images with two cameras; splicing the two images acquired by the two cameras into one 3D image; and transmitting the 3D image. The unmanned aerial vehicle can therefore provide a 3D image during the aerial photography process, creating realistic 3D images for the user to view in real time, bringing the user an immersive experience and significantly improving the user's aerial photography experience. In addition, the embodiment may further detect depth using the 3D image while transmitting the 3D image in real time for the user to view. Using only one set of binocular-vision cameras (the two cameras), the unmanned aerial vehicle can implement a plurality of functions such as 3D aerial photography, obstacle avoidance, tracking and distance detection, without using two sets of binocular-vision cameras (four cameras) to implement 3D aerial photography and depth detection separately, thereby implementing the plurality of functions at lower cost.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710031741.X | 2017-01-17 | ||
| CN201710031741.XA CN107071389A (zh) | 2017-01-17 | 2017-01-17 | Aerial photography method and device, and unmanned aerial vehicle |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018133589A1 true WO2018133589A1 (fr) | 2018-07-26 |
Family
ID=59597930
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/115877 Ceased WO2018133589A1 (fr) | 2017-12-13 | Device, aerial photography method, and unmanned aerial vehicle |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107071389A (fr) |
| WO (1) | WO2018133589A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109729337A (zh) * | 2018-11-15 | 2019-05-07 | 华南师范大学 | Vision synthesis device applied to dual cameras and control method thereof |
| CN112506228A (zh) * | 2020-12-28 | 2021-03-16 | 广东电网有限责任公司中山供电局 | Optimal emergency risk-avoidance path selection method for a substation UAV |
Families Citing this family (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107071389A (zh) | 2017-01-17 | 2017-08-18 | 亿航智能设备(广州)有限公司 | Aerial photography method and device, and unmanned aerial vehicle |
| CN107741781A (zh) * | 2017-09-01 | 2018-02-27 | 中国科学院深圳先进技术研究院 | Flight control method and device for an unmanned aerial vehicle, unmanned aerial vehicle and storage medium |
| WO2019051649A1 (fr) * | 2017-09-12 | 2019-03-21 | 深圳市大疆创新科技有限公司 | Image transmission method and device, mobile platform, monitoring device, and system |
| CN108304000B (zh) * | 2017-10-25 | 2024-01-23 | 河北工业大学 | Real-time gimbal VR system |
| CN108616679A (zh) * | 2018-04-09 | 2018-10-02 | 沈阳上博智像科技有限公司 | Binocular camera and method for controlling a binocular camera |
| CN108521558A (zh) * | 2018-04-10 | 2018-09-11 | 深圳慧源创新科技有限公司 | UAV image transmission method and system, UAV and UAV client |
| CN108985193A (zh) * | 2018-06-28 | 2018-12-11 | 电子科技大学 | Image-detection-based portrait alignment method for UAV aerial photography |
| CN109587451A (zh) * | 2018-12-25 | 2019-04-05 | 青岛小鸟看看科技有限公司 | Video capture device for a virtual reality display device and control method thereof |
| CN110460677A (zh) * | 2019-08-23 | 2019-11-15 | 临工集团济南重机有限公司 | Excavator and excavator remote control system |
| CN111006586B (zh) * | 2019-12-12 | 2020-07-24 | 天目爱视(北京)科技有限公司 | Intelligent control method for 3D information acquisition |
| CN111674549A (zh) * | 2020-07-10 | 2020-09-18 | 南京森林警察学院 | Semi-automatic strike-type intelligent police rotor UAV |
| WO2022088072A1 (fr) * | 2020-10-30 | 2022-05-05 | 深圳市大疆创新科技有限公司 | Visual tracking method and apparatus, mobile platform and computer-readable storage medium |
| CN112714281A (zh) * | 2020-12-19 | 2021-04-27 | 西南交通大学 | 5G-network-based UAV airborne VR video acquisition and transmission device |
| CN113784051A (zh) * | 2021-09-23 | 2021-12-10 | 深圳市道通智能航空技术股份有限公司 | Method, apparatus, device and medium for controlling an aircraft to shoot in portrait mode |
| CN114422768B (zh) * | 2022-02-24 | 2025-04-11 | 珠海一微半导体股份有限公司 | Image acquisition device, robot and robot image acquisition method |
| CN115042971A (zh) * | 2022-06-10 | 2022-09-13 | 湖北工业大学 | Multifunctional flying robot capable of waking a person up |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8860780B1 (en) * | 2004-09-27 | 2014-10-14 | Grandeye, Ltd. | Automatic pivoting in a wide-angle video camera |
| CN106162145A (zh) * | 2016-07-26 | 2016-11-23 | 北京奇虎科技有限公司 | UAV-based stereoscopic image generation method and device |
| CN106184787A (zh) * | 2016-07-14 | 2016-12-07 | 科盾科技股份有限公司北京分公司 | Aircraft with a driver-assistance system and method for its takeoff, landing and collision avoidance |
| CN107071389A (zh) * | 2017-01-17 | 2017-08-18 | 亿航智能设备(广州)有限公司 | Aerial photography method and device, and unmanned aerial vehicle |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106471803A (zh) * | 2014-12-04 | 2017-03-01 | 深圳市大疆创新科技有限公司 | Imaging system and method |
| CN105872523A (zh) * | 2015-10-30 | 2016-08-17 | 乐视体育文化产业发展(北京)有限公司 | Three-dimensional video data acquisition method, device and system |
| CN105828062A (zh) * | 2016-03-23 | 2016-08-03 | 常州视线电子科技有限公司 | UAV 3D virtual reality shooting system |
| CN105791810A (zh) * | 2016-04-27 | 2016-07-20 | 深圳市高巨创新科技开发有限公司 | Virtual stereoscopic display method and device |
| CN205847443U (zh) * | 2016-07-13 | 2016-12-28 | 杭州翼飞电子科技有限公司 | Device enabling multiple users to share real-time 3D image transmission from a UAV |
2017
- 2017-01-17 CN CN201710031741.XA patent/CN107071389A/zh active Pending
- 2017-12-13 WO PCT/CN2017/115877 patent/WO2018133589A1/fr not_active Ceased
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109729337A (zh) * | 2018-11-15 | 2019-05-07 | 华南师范大学 | Vision synthesis device applied to dual cameras and control method thereof |
| CN112506228A (zh) * | 2020-12-28 | 2021-03-16 | 广东电网有限责任公司中山供电局 | Optimal emergency risk-avoidance path selection method for a substation UAV |
| CN112506228B (zh) * | 2020-12-28 | 2023-11-07 | 广东电网有限责任公司中山供电局 | Optimal emergency risk-avoidance path selection method for a substation UAV |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107071389A (zh) | 2017-08-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018133589A1 (fr) | Device, aerial photography method, and unmanned aerial vehicle | |
| US10484621B2 (en) | Systems and methods for compressing video content | |
| US10395338B2 (en) | Virtual lens simulation for video and photo cropping | |
| US10171792B2 (en) | Device and method for three-dimensional video communication | |
| US20190246104A1 (en) | Panoramic video processing method, device and system | |
| CN109076249B (zh) | Systems and methods for video processing and display | |
| US10116922B2 (en) | Method and system for automatic 3-D image creation | |
| US9940697B2 (en) | Systems and methods for combined pipeline processing of panoramic images | |
| US20150358539A1 (en) | Mobile Virtual Reality Camera, Method, And System | |
| CN107005687B (zh) | UAV flight experience method, apparatus and system, and UAV | |
| WO2016187924A1 (fr) | System and method for displaying image information | |
| CN106162145B (zh) | UAV-based stereoscopic image generation method and device | |
| WO2021237616A1 (fr) | Image transmission method, mobile platform and computer-readable storage medium | |
| CN108616733B (zh) | Panoramic video image stitching method and panoramic camera | |
| JP6057570B2 (ja) | Apparatus and method for generating stereoscopic panoramic video | |
| WO2017166360A1 (fr) | Video call method and device | |
| WO2020093850A1 (fr) | Dual-light image integration method and apparatus, and unmanned aerial vehicle | |
| WO2012039306A1 (fr) | Image processing device, image capture device, image processing method and associated program | |
| WO2022047701A1 (fr) | Image processing method and apparatus | |
| WO2017166714A1 (fr) | Panoramic image capture method, device and system | |
| JP2018033107A (ja) | Video distribution device and distribution method | |
| KR20150091064A (ko) | Method and system for capturing 3D images using a single camera | |
| CN204681518U (zh) | Panoramic image information collection device | |
| CN208572248U (zh) | Array camera based on wireless synchronization | |
| WO2021196005A1 (fr) | Image processing method, image processing device, user equipment, aircraft and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17893175 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/11/2019) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17893175 Country of ref document: EP Kind code of ref document: A1 |