
WO2018212514A1 - Method and apparatus for processing a 360-degree image - Google Patents

Method and apparatus for processing a 360-degree image

Info

Publication number
WO2018212514A1
WO2018212514A1 (PCT/KR2018/005440)
Authority
WO
WIPO (PCT)
Prior art keywords
degree image
motion vector
motion vectors
image
rotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2018/005440
Other languages
English (en)
Korean (ko)
Inventor
Albert Saà-Garriga
Alessandro Vandini
Tomaso Maestri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB1708001.1A external-priority patent/GB2562529B/en
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to CN201880032626.7A priority Critical patent/CN110622210A/zh
Priority to US16/606,004 priority patent/US20210142452A1/en
Priority to DE112018002554.3T priority patent/DE112018002554T5/de
Publication of WO2018212514A1 publication Critical patent/WO2018212514A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/80 — Geometric correction
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/20 — Analysis of motion
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 — Motion estimation or motion compensation
    • H04N 19/56 — Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30241 — Trajectory

Definitions

  • The present disclosure relates to a method for processing a 360-degree image, a device for processing a 360-degree image, and a recording medium on which a program for processing a 360-degree image is recorded.
  • 'VR' stands for virtual reality.
  • Although stabilization can be performed as a post-processing step on the image, most image stabilization technology must perform two separate tasks: first, detecting and suppressing unintended camera movement in the estimated camera trajectory, and second, generating a new image sequence from the stabilized camera trajectory and the original image sequence. However, estimating the camera trajectory in an uncalibrated single-view imaging system is difficult, and reliably generating new images from the stabilized camera view is also difficult. Therefore, further research is required to stabilize 360-degree images.
  • The disclosed embodiments provide a method and apparatus for processing a 360-degree image that can stabilize the image by converting motion vectors of the 360-degree image into rotation information and using that information to correct distortion caused by shaking.
  • A method of processing a 360-degree image may include: obtaining a plurality of motion vectors with respect to the 360-degree image; determining, through filtering, at least one motion vector representing global rotation of the 360-degree image among the plurality of motion vectors; obtaining three-dimensional rotation information of the 360-degree image by three-dimensionally transforming the determined at least one motion vector; and correcting distortion of the 360-degree image due to shaking based on the obtained 3D rotation information.
  • The determining of the at least one motion vector may include removing, from among the plurality of motion vectors, motion vectors included in a predetermined region that depends on the type of projection.
  • a method of processing a 360 degree image comprises: generating a mask based on an edge detected from the 360 degree image; Determining a region where no texture exists in the 360 degree image by applying the generated mask to the 360 degree image; And removing a motion vector included in a region in which no texture exists among the plurality of motion vectors.
  • the determining of the at least one motion vector comprises: detecting at least one moving object from the 360 degree image through a preset object detection process; And removing the motion vector associated with the detected object among the plurality of motion vectors.
  • The determining of the at least one motion vector may include determining, as motion vectors representing global rotation, pairs of motion vectors that are positioned opposite each other on the unit sphere onto which the 360-degree image is projected and that are parallel, have opposite signs, and have magnitudes within a certain threshold of each other.
  • The obtaining of the three-dimensional rotation information may include: classifying the determined at least one motion vector into a plurality of bins, each corresponding to a specific direction and a specific magnitude range; selecting the bin containing the most motion vectors among the classified bins; and converting the direction and distance of the selected bin to obtain the 3D rotation information.
  • The obtaining of the 3D rotation information may include obtaining the 3D rotation information by applying a weighted average to the directions and distances of the selected bin and a plurality of bins adjacent to the selected bin.
  • the obtaining of the 3D rotation information may obtain, as the 3D rotation information, a rotation value for minimizing the sum of the determined at least one motion vector.
  • The obtaining of the 3D rotation information may include obtaining the 3D rotation information based on the plurality of motion vectors using a previously generated learning network model.
  • A method of processing a 360-degree image may further include acquiring sensor data generated by sensing shaking that occurs when the 360-degree image is captured by a photographing apparatus.
  • the distortion of the 360 degree image may be corrected by combining the acquired sensor data and the 3D rotation information.
  • An apparatus for processing a 360-degree image may include a memory configured to store one or more instructions, and a processor that executes the one or more instructions stored in the memory, wherein the processor obtains a plurality of motion vectors for the 360-degree image, determines through filtering at least one motion vector representing a global rotation of the 360-degree image among the plurality of motion vectors, three-dimensionally transforms the determined at least one motion vector to obtain three-dimensional rotation information of the 360-degree image, and corrects distortion of the 360-degree image due to shaking based on the obtained three-dimensional rotation information.
  • FIG. 1 is a diagram illustrating a format in which a 360 degree image is stored, according to an exemplary embodiment.
  • FIG. 2 is a flowchart illustrating a method of processing a 360-degree image by the image processing apparatus according to an exemplary embodiment.
  • FIG. 3 is a flowchart illustrating a method of processing a 360-degree image by the image processing apparatus according to an exemplary embodiment in more detail.
  • FIG. 4 is a diagram for describing a motion vector in a 360 degree image, according to an exemplary embodiment.
  • FIG. 5 is a diagram for describing a method of removing, by filtering, a motion vector of a predetermined region from a plurality of motion vectors by an image processing apparatus, according to an exemplary embodiment.
  • FIG. 6 is a diagram for describing a method of removing, by the image processing apparatus, a motion vector included in a texture free area through filtering, according to an exemplary embodiment.
  • FIG. 7 is a diagram for describing a method of removing, by the image processing apparatus, a motion vector determined to not be global rotation through filtering, according to an exemplary embodiment.
  • FIG. 8 is a flowchart for describing a method of determining, by the image processing apparatus, a motion vector indicating global rotation through filtering, according to an exemplary embodiment.
  • FIG. 9 is a flowchart illustrating a method of converting a motion vector into 3D rotation by an image processing apparatus, according to an exemplary embodiment.
  • FIG. 10 illustrates a motion vector of a 360 degree image, according to an exemplary embodiment.
  • FIG. 11 is a table for describing a result of classifying a plurality of motion vectors into a plurality of bins, according to an exemplary embodiment.
  • FIG. 12 illustrates a histogram of a plurality of motion vectors classified in FIG. 11 according to an exemplary embodiment.
  • FIG. 13 is a flowchart for describing a method of determining, by the image processing apparatus, rotation information by combining rotation information acquired based on motion vectors with sensing data about shaking of the 360-degree image.
  • FIG. 14 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • FIG. 15 is a diagram for describing at least one processor, according to an exemplary embodiment.
  • FIG. 16 is a block diagram of a data learner, according to an exemplary embodiment.
  • FIG. 17 is a block diagram of a data recognizer, according to an exemplary embodiment.
  • FIG. 18 is a block diagram of an image processing apparatus according to another exemplary embodiment.
  • Terms such as 'first' and 'second' may be used to describe various components, but the components are not limited by these terms; the terms are only used to distinguish one component from another.
  • Thus, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.
  • When a part of the specification is said to “include” a component, this means that it may further include other components rather than excluding them, unless otherwise stated.
  • The term “part” as used herein refers to a software or hardware component, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), and a “part” performs certain roles. However, “part” is not limited to software or hardware.
  • The “unit” may be configured to reside in an addressable storage medium and may be configured to run on one or more processors.
  • A “part” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
  • FIG. 1 is a diagram illustrating a format in which a 360 degree image is stored, according to an exemplary embodiment.
  • a 360 degree image may be stored in various formats.
  • The pixels that make up a frame of a 360-degree image can be indexed in a three-dimensional coordinate system that defines the location of each pixel on the surface of the virtual sphere 110.
  • In the cube map projection 120, image data for each face of the virtual cube may be stored as a two-dimensional image covering a 90° × 90° field of view.
  • In an equirectangular projection, image data may be stored as a single two-dimensional image covering a 360° × 180° field of view.
  • The labels 'top', 'bottom', 'front', 'back', 'left', and 'right' indicate the areas of the 360-degree image corresponding to the respective faces in the projections described above.
  • the formats shown in FIG. 1 are just examples, and according to another exemplary embodiment, the 360 degree image may be stored in a format different from the format shown in FIG. 1.
  • FIG. 2 is a flowchart illustrating a method of processing a 360-degree image by the image processing apparatus according to an exemplary embodiment.
  • the image processing apparatus may acquire a plurality of motion vectors for the 360 degree image.
  • a motion vector in two-dimensional image data for a 360 degree image is shown in FIG. 4.
  • FIG. 4 is a diagram for describing a motion vector in a 360 degree image, according to an exemplary embodiment.
  • the motion vector is information describing the displacement of a predetermined area 411 of the image between the reference frame 401 and the current frame 402.
  • In one embodiment, the frame immediately preceding the current frame is selected as the reference frame 401; in other embodiments, the motion vector may be calculated using a non-adjacent frame as the reference frame.
  • the motion vector may be obtained at a point uniformly distributed throughout the frame.
  • a plurality of 3D motion vectors may be obtained.
  • When image data for the current frame is stored using the unit-sphere representation shown in FIG. 1, three-dimensional motion vectors may be obtained.
  • the plurality of motion vectors obtained in this embodiment are motion vectors previously generated while encoding the image data of the frame of the 360 degree image.
  • Motion vectors are generally generated and stored by existing video encoding processes such as MPEG 4.2 or H.264 encoding. During encoding, motion vectors are used to compress the image data by reusing blocks of a previous frame to draw the next frame. A detailed description of the method for generating motion vectors is omitted here.
  • the previously generated motion vector may be retrieved from the stored 360 degree image file. Reusing motion vectors in this way can reduce the overall processing burden.
  • the motion vector may be generated in step S210.
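Where pre-generated encoder motion vectors are not available, motion vectors can be estimated directly, for example by block matching. The following is a minimal illustrative sketch, not the patent's or any encoder's actual algorithm; the function name, block size, search window, and exhaustive sum-of-absolute-differences (SAD) search are all assumptions:

```python
import numpy as np

def block_motion_vector(ref, cur, top, left, size=8, search=4):
    """Estimate the motion vector of one block by exhaustive SAD search.

    Finds the (dy, dx) offset at which the block of `cur` at (top, left)
    best matches the reference frame, then returns the block's motion
    from the reference frame to the current frame.
    """
    block = cur[top:top + size, left:left + size].astype(int)
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + size, x:x + size].astype(int) - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    # the block came from (top+dy, left+dx) in ref, so its motion is the negation
    return -best[0], -best[1]
```

A dense motion field would apply this at points uniformly distributed over the frame, as the text describes.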
  • the image processing apparatus may determine at least one motion vector indicating global rotation of the 360 degree image among the plurality of motion vectors through filtering.
  • 'global rotation' refers to a rotation that affects the image throughout the frame, unlike a local rotation that affects only part of the image.
  • Global rotation can result from the camera being rotated while the image is captured, or from a large portion of the frame moving around the camera in the same way. For example, if a 360-degree image is taken from a moving vehicle, the rotation of the vehicle can cause global rotation in the background, and the rotation of the camera itself can cause global rotation in everything visible, both the vehicle and the background.
  • Rotation can be regarded as 'global rotation' when it affects a significant portion of the frame.
  • Examples of motion vectors that do not represent global rotation include motion vectors associated with objects that move relatively little in the scene, and motion vectors associated with static objects that do not appear to rotate when the camera rotates because they are fixed relative to the camera.
  • the image processing apparatus may perform filtering to remove a motion vector included in a predetermined region among the plurality of motion vectors. This will be described later in more detail with reference to FIG. 5.
  • According to an embodiment, the image processing apparatus may perform filtering by generating a mask based on edges detected from the 360-degree image and applying the generated mask to the 360-degree image to identify texture-free regions; motion vectors included in those regions can then be removed. This will be described later in more detail with reference to FIG. 6.
  • the image processing apparatus may perform filtering to remove a motion vector associated with a moving object in a 360 degree image.
  • The image processing apparatus may perform filtering by determining whether motion vectors located on opposite sides of the unit sphere satisfy a specific condition, thereby determining whether they indicate global rotation. This will be described later in more detail with reference to FIG. 7.
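As an illustration of the antipodal-pair condition (parallel, opposite sign, magnitude within a threshold): under a pure rotation ω, a point r on the unit sphere moves with velocity v = ω × r, so v(−r) = −v(r). A hedged sketch follows; the function name and tolerance values are assumptions, not taken from the patent:

```python
import numpy as np

def is_global_rotation_pair(v1, v2, angle_tol=0.1, mag_tol=0.2):
    """Check whether 3D motion vectors v1 and v2, sampled at antipodal
    points on the unit sphere, are consistent with a global rotation:
    under a pure rotation v(-r) = -v(r), so the pair should be roughly
    anti-parallel with similar magnitudes."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 == 0 or n2 == 0:
        return False
    # cosine of the angle between v1 and -v2: 1 means exactly anti-parallel
    cos = np.dot(v1, -v2) / (n1 * n2)
    return cos > 1 - angle_tol and abs(n1 - n2) <= mag_tol * max(n1, n2)
```

Pairs failing this test would be discarded as not representing global rotation.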
  • the image processing apparatus may combine two or more of the above-described filtering methods to remove a motion vector that does not represent global rotation among the plurality of motion vectors.
  • other filtering methods may be used.
  • Other embodiments in which the motion vector may be filtered may include, but are not limited to, static object filtering, background flow subtraction, and manual filtering.
  • In static object filtering, static objects that do not change their position from one frame to the next may be detected, and motion vectors associated with those objects may be filtered out.
  • static object types that can occur in 360-degree images include black pixels on the lens or the user's finger in front of the camera.
  • In background flow subtraction, background pixels moving at a constant rate across the entire image may be excluded, on the assumption that they do not contain useful information for calculating the stabilizing rotation.
  • Manual filtering may involve a human operator manually filtering the motion vectors.
  • the image processing apparatus may obtain 3D rotation information about the 360 degree image by 3D transforming the determined at least one motion vector.
  • the image processing apparatus may classify the determined at least one motion vector into a plurality of bins corresponding to a specific direction and a specific size range.
  • the image processing apparatus may obtain the 3D rotation information by converting the direction and the distance of the bin including the most motion vectors among the plurality of classified bins.
  • The image processing apparatus may obtain the 3D rotation information by applying a weighted average to the directions and distances of the bin containing the most motion vectors and the plurality of bins adjacent to it.
  • the image processing apparatus may obtain, as 3D rotation information, a rotation value for minimizing the sum of the determined at least one motion vector.
  • the image processing apparatus may obtain 3D rotation information based on a plurality of motion vectors using a previously generated learning network model.
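The bin-selection step described above might be sketched as follows for 2D motion vectors; the bin counts, magnitude range, and function name are illustrative assumptions rather than values from the patent, and the weighted-average refinement over neighboring bins is omitted:

```python
import numpy as np

def dominant_motion(vectors, n_angle_bins=12, n_mag_bins=5, max_mag=20.0):
    """Classify 2D motion vectors into (direction, magnitude) bins and
    return the centre of the most populated bin -- a simple stand-in for
    the bin-selection step described in the text."""
    v = np.asarray(vectors, float)
    ang = np.arctan2(v[:, 1], v[:, 0]) % (2 * np.pi)   # direction in [0, 2*pi)
    mag = np.linalg.norm(v, axis=1)
    ai = (ang / (2 * np.pi) * n_angle_bins).astype(int) % n_angle_bins
    mi = np.clip((mag / max_mag * n_mag_bins).astype(int), 0, n_mag_bins - 1)
    hist = np.zeros((n_angle_bins, n_mag_bins), int)
    np.add.at(hist, (ai, mi), 1)                       # build the 2D histogram
    a, m = np.unravel_index(hist.argmax(), hist.shape) # fullest bin
    centre_angle = (a + 0.5) * 2 * np.pi / n_angle_bins
    centre_mag = (m + 0.5) * max_mag / n_mag_bins
    return centre_angle, centre_mag
```

The returned bin centre would then be converted to 3D rotation information as described in the following sections.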
  • Humans can stabilize their gaze and maintain eye level by analyzing image shifts (similar to motion vectors) caused by movement relative to the environment as the body rotates. Similar behavior can be observed in simpler organisms with relatively few neurons, such as flies.
  • Neurons can convert sensory information into a format corresponding to their motor system requirements.
  • a machine learning mechanism may be used to mimic the behavior of living things and to obtain sensor rotational transformations using motion vectors as input data.
  • A machine learning system may be used, such as a learning network model trained on the patterns of motion vectors in frames having particular rotations. Such a mechanism mimics living beings: it may receive a plurality of motion vectors as input and output an overall rotation for stabilizing the 360-degree image.
  • the image processing apparatus may correct the distortion of the 360 degree image due to the shaking based on the obtained 3D rotation information.
  • the image processing apparatus may correct the distortion of the 360 degree image due to the shaking by rotating the 360 degree image according to the 3D rotation information.
  • the image processing apparatus may render and display the corrected 360-degree image, or encode and store it for later playback.
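As an illustration of the correction step: for a rotation purely about the vertical (yaw) axis, an equirectangular frame can be counter-rotated by a circular horizontal pixel shift, since the x axis spans 360° of longitude. This is a simplified sketch (function name assumed); general 3D rotations require a full spherical resampling, which is not shown:

```python
import numpy as np

def correct_yaw(frame, yaw_degrees):
    """Counter-rotate an equirectangular frame about the vertical axis.

    In equirectangular layout the x axis spans 360 degrees of longitude,
    so a pure yaw rotation is a circular horizontal shift. General 3D
    rotations need a full spherical resampling, which is omitted here.
    """
    height, width = frame.shape[:2]
    shift = int(round(-yaw_degrees / 360.0 * width))  # negative: counter-rotate
    return np.roll(frame, shift, axis=1)
```

For a frame 36 pixels wide, each pixel of shift corresponds to 10° of yaw, matching the per-pixel example given later in this text.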
  • FIG. 3 is a flowchart illustrating a method of processing a 360-degree image by the image processing apparatus according to an exemplary embodiment in more detail.
  • All the steps of the method disclosed in FIG. 3 may be performed in the same apparatus, or individual steps may be performed in different apparatuses. The steps of FIG. 3 may be performed by software or hardware according to an embodiment.
  • An apparatus for performing the method disclosed in FIG. 3 may include a processing unit comprising one or more processors, and a computer-readable memory storing computer program instructions executable by the processing unit to perform the method.
  • the image processing apparatus may acquire a plurality of motion vectors for the current frame of the 360 degree image.
  • the image processing apparatus may obtain a plurality of motion vectors by searching for a motion vector from a stored 360 degree image file or generating a motion vector at a point uniformly distributed throughout the frame.
  • step S310 may correspond to step S210 described above with reference to FIG. 2.
  • the image processing apparatus may perform filtering on the plurality of motion vectors.
  • the motion vector may be filtered to remove a motion vector that does not represent global rotation of the 360 degree image.
  • The image processing apparatus may filter out motion vectors associated with objects that move relatively little in a frame, or motion vectors associated with static objects that do not appear to rotate when the camera rotates because they are fixed relative to the camera. Examples of various methods of filtering the motion vectors will be described in more detail later with reference to FIGS. 5 to 7.
  • the motion vector may not be filtered, in which case step S320 may be omitted.
  • the image processing apparatus may convert the motion vector into 3D rotation.
  • After the image processing apparatus filters the plurality of motion vectors to remove motion vectors that do not represent global rotation, the remaining motion vectors may be converted into a three-dimensional rotation that can be applied to the current frame to stabilize the 360-degree image.
  • When a 360-degree image is stored as two-dimensional image data via equirectangular projection, a pre-defined transform can be used to convert a motion vector into a three-dimensional rotation.
  • Pre-defined transformations may be predefined based on the geometry of the two-dimensional projection.
  • a transform according to the following equation (1) can be used.
  • Rx, Ry, and Rz represent rotations in degrees about the x, y, and z axes, respectively
  • width represents the total width of the field of view in pixels
  • height represents the total height of the field of view in pixels.
  • the motion vector v can be expressed as (13, 8), for example, representing 13 pixels in the x-axis and 8 pixels in the y-axis.
  • the frame width in the horizontal direction is 36 pixels, which corresponds to 10 ° per pixel.
  • the vertical component of the motion vector may be converted into equivalent rotation about the x or y axis depending on the position of the motion vector in the frame.
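Equation 1 itself is not reproduced in this text. The following sketch assumes a simple linear equirectangular mapping (360° across the width, 180° across the height), consistent with the 10°-per-pixel example above; the patent's actual transform, including the position-dependent handling of the vertical component, may differ:

```python
def motion_vector_to_rotation(vx, vy, width, height):
    """Convert a 2D motion vector (in pixels) in an equirectangular frame
    to approximate rotations in degrees, assuming a linear mapping of
    360 degrees across the width and 180 degrees across the height.

    The horizontal component maps to rotation about the vertical (y)
    axis; the vertical component is treated here as rotation about the
    x axis, ignoring the position-dependence mentioned in the text.
    """
    ry = vx * 360.0 / width   # e.g. width 36 -> 10 degrees per pixel
    rx = vy * 180.0 / height
    return rx, ry
```

With the example values above (v = (13, 8), width 36), the horizontal component converts to 130°, matching the 10°-per-pixel figure.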
  • the overall rotation required to stabilize the 360 degree image can be expressed as a three-dimensional rotation, that is, a rotation in three-dimensional space.
  • Rotation can be represented by three separate rotational components about axes perpendicular to one another, for example the x, y, and z axes as shown in FIG.
  • the rotation obtained in step S330 may be referred to as stabilizing rotation as the camera shake may be effectively corrected to stabilize the 360 degree image.
  • Each motion vector may be converted to an equivalent rotation as described above, and an average rotation (e.g., mean or mode) over the entire frame may be taken as the overall rotation.
  • a Gaussian or median filter may be used when taking the average in consideration of neighboring values around the average or mode value.
  • the average motion vector can be calculated for the entire frame, and the average motion vector can be transformed to full rotation using a predefined transformation.
  • Equation 1 described above may be modified as needed in other embodiments.
  • the image processing apparatus may provide 3D rotation to the image processing unit to generate a stabilized image.
  • the image processing apparatus may generate a stabilized image by applying 3D rotation to image data of the current frame.
  • the image processing apparatus may render and display the stabilized image or encode and store it for later playback.
  • the stabilized image may be encoded using interframe compression.
  • more effective compression may be achieved based on the rotation applied to the stabilized image data.
  • The image stabilization process described above modifies the frames of the original 360-degree image in a way that minimizes the difference between two consecutive frames of image data. This allows the encoder to reuse more information from previous frames, so lower interframe bit rates can be used. As a result, fewer key frames need to be generated, and the compression rate can be improved.
  • The analysis for determining the rotation for stabilizing the image may be performed in a first image processing apparatus, and the generation of the stabilized image may be performed by a second image processing apparatus that is physically separate from the first image processing apparatus.
  • the first image processing apparatus may set the value of the 3D rotation parameter in the metadata associated with the 360 degree image according to the determined rotation.
  • the first image processing apparatus may provide metadata and associated image data to the second image processing apparatus through an appropriate mechanism such as a broadcast signal or a network connection.
  • the second image processing apparatus may obtain a value of the 3D rotation parameter from the metadata to determine the rotation.
  • the second image processing apparatus may generate the stabilized 360 degree image by applying the rotation defined by the 3D rotation parameter to the 360 degree image.
  • The second image processing apparatus according to an exemplary embodiment may generate the stabilized 360-degree image by applying a rotation and/or translation defined by a camera control input to the rotated image data before rendering it.
  • FIG. 5 is a diagram for describing a method of removing, by filtering, a motion vector of a predetermined region from a plurality of motion vectors by an image processing apparatus, according to an exemplary embodiment.
  • when an equirectangular projection is used, distances near the upper region 511 and the lower region 512 of the frame 500 tend to be exaggerated, so the motion vectors in these regions can include potentially large errors.
  • accordingly, the image processing apparatus may remove the motion vectors of the upper region 511 and the lower region 512 from the plurality of motion vectors when calculating a rotation for stabilization of a 360-degree image.
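The polar-band filtering described above can be sketched as follows. The 15% band height is an illustrative assumption, not a value given in the source.

```python
# Sketch: drop motion vectors whose anchor lies in the top or bottom band of an
# equirectangular frame, where projection distortion makes vectors unreliable.

def filter_polar_vectors(vectors, frame_height, band_ratio=0.15):
    """vectors: list of (x, y, dx, dy) tuples anchored at pixel (x, y)."""
    top = frame_height * band_ratio
    bottom = frame_height * (1.0 - band_ratio)
    return [v for v in vectors if top <= v[1] <= bottom]

vectors = [(10, 5, 1.0, 0.2),    # near the top edge -> removed
           (40, 50, 1.1, 0.1),   # mid-frame -> kept
           (70, 98, 0.9, -0.3)]  # near the bottom edge -> removed
print(filter_polar_vectors(vectors, frame_height=100))  # [(40, 50, 1.1, 0.1)]
```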
  • FIG. 6 is a diagram for describing a method of removing, by the image processing apparatus, a motion vector included in a texture free area through filtering, according to an exemplary embodiment.
  • the image processing apparatus may generate a mask by performing edge detection on a frame and dilating the result.
  • the image processing apparatus may apply a mask to the frame to remove the texture-free area, which is an area substantially free of texture.
  • the black pixels in the mask represent areas where no edges are detected, which may mean areas that are substantially free of texture.
  • the mask may be thresholded to include only pixel values of 1 or 0, where 1 may represent white pixels and 0 may represent black pixels.
  • the image processing apparatus may perform filtering by comparing the position of the motion vector in the 360 degree image with the pixel value of the mask and discarding the motion vector when the mask has the pixel value 0 at the position.
  • in the above description, the motion vectors of texture-free regions are removed through filtering.
  • in an embodiment, motion vectors may also be filtered from other types of regions that may contain unreliable motion vectors, such as regions exhibiting chaotic movement (e.g., foliage or smoke).
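A minimal sketch of the mask-based filtering above. For self-containment, a simple gradient test stands in for a full edge detector (an assumption; a real implementation would typically use an edge detector such as Canny followed by dilation):

```python
# Sketch: build a binary mask (1 = textured, 0 = texture-free) from a grayscale
# frame via a gradient threshold plus dilation, then discard motion vectors
# whose anchor falls on mask value 0, as described in the text.

def build_texture_mask(img, thresh=10, dilate=1):
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]
            gy = img[min(y + 1, h - 1)][x] - img[y][x]
            if abs(gx) + abs(gy) >= thresh:
                edges[y][x] = 1
    # dilation: a pixel is 1 if any neighbour within `dilate` is an edge pixel
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-dilate, dilate + 1):
                for dx in range(-dilate, dilate + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and edges[yy][xx]:
                        mask[y][x] = 1
    return mask

def filter_textureless(vectors, mask):
    """Keep only vectors (x, y, dx, dy) anchored where the mask value is 1."""
    return [v for v in vectors if mask[v[1]][v[0]] == 1]

# Left half flat, edge at the brightness step: vectors in the flat right
# corner (far from any edge) are discarded.
img = [[0, 0, 100, 100] for _ in range(4)]
mask = build_texture_mask(img)
print(filter_textureless([(0, 0, 1.0, 0.0), (3, 3, 1.0, 0.0)], mask))
```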
  • FIG. 7 is a diagram for describing a method of removing, by the image processing apparatus, a motion vector determined to not be global rotation through filtering, according to an exemplary embodiment.
  • the image processing apparatus may perform filtering by using the fact that a global rotation in a 360-degree image generates motion vectors having similar magnitudes and opposite directions on opposite sides of the unit sphere. Specifically, the image processing apparatus may compare one or more motion vectors at or near a point on the unit sphere with one or more corresponding motion vectors at the opposite side of the sphere, referred to as the "mirror point", to determine whether the motion vectors indicate global rotation.
  • the image processing apparatus may determine that two motion vectors on opposite sides of the sphere indicate global rotation when their magnitudes match within a specific threshold (e.g., ±10%) and they are parallel to each other but point in opposite directions.
  • the image processing apparatus may use it to determine the rotation for stabilization of the 360 degree image.
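The mirror-point test can be sketched as below. The ±10% magnitude threshold follows the text; the 5° parallelism tolerance is an assumed value the source does not specify.

```python
# Sketch: decide whether two motion vectors at diametrically opposite points
# ("mirror points") are evidence of global rotation - similar magnitude,
# parallel, and oppositely signed.
import math

def is_global_rotation_pair(v1, v2, mag_tol=0.10, angle_tol_deg=5.0):
    m1, m2 = math.hypot(*v1), math.hypot(*v2)
    if m1 == 0 or m2 == 0 or abs(m1 - m2) / max(m1, m2) > mag_tol:
        return False
    # cosine of the angle between v1 and -v2: close to 1 => anti-parallel
    dot = -(v1[0] * v2[0] + v1[1] * v2[1]) / (m1 * m2)
    return dot >= math.cos(math.radians(angle_tol_deg))

print(is_global_rotation_pair((3.0, 0.0), (-2.9, 0.0)))  # True
print(is_global_rotation_pair((3.0, 0.0), (0.0, 3.0)))   # False
```

Vectors passing this test would be kept for computing the stabilizing rotation; the rest would be excluded.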
  • FIG. 8 is a flowchart for describing a method of determining, by the image processing apparatus, a motion vector indicating global rotation through filtering, according to an exemplary embodiment.
  • Steps S810 to S890 described with reference to FIG. 8 may be performed between steps S310 and S330 described above with reference to FIG. 3.
  • the image processing apparatus may filter motion vectors of at least one region from among the plurality of motion vectors for the 360-degree image. For example, when an equirectangular projection is used for the 360-degree image, the image processing apparatus may remove motion vectors in the upper region and the lower region of the 360-degree image through filtering.
  • the image processing apparatus may generate a mask for filtering the texture-free area.
  • the image processing apparatus may generate the mask by performing edge detection on the 360-degree image and dilating the result.
  • the image processing apparatus may apply a mask to the current frame to filter the motion vector of the texture-free region. For example, the image processing apparatus compares the position of the motion vector in the 360 degree image with the pixel value of the mask, and removes the motion vector if the mask has pixel value 0 (the area where no edge is detected) at that position, Filtering can be performed.
  • the image processing apparatus may detect an object moving in the 360 degree image.
  • the image processing apparatus may detect one or more moving objects within a 360 degree image by using an appropriate object detection algorithm among existing object detection algorithms.
  • the image processing apparatus may filter a motion vector associated with the moving object.
  • the image processing apparatus may remove the motion vector associated with the moving object among the remaining motion vectors through filtering.
  • the motion vectors associated with a moving object can be much larger in magnitude than other motion vectors. Accordingly, the image processing apparatus may filter out such motion vectors so that the stabilization rotation is not distorted by large motion vectors caused by fast-moving objects.
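One simple way to realize this step, using magnitude outliers as the criterion (an assumption; the source does not fix a specific rule), is to discard vectors much larger than the median magnitude:

```python
# Sketch: drop motion vectors whose magnitude exceeds k times the median,
# treating them as belonging to fast-moving objects rather than camera motion.
import math
import statistics

def drop_fast_object_vectors(vectors, k=3.0):
    """vectors: list of (x, y, dx, dy); k is an illustrative cutoff factor."""
    mags = [math.hypot(dx, dy) for (_, _, dx, dy) in vectors]
    med = statistics.median(mags)
    return [v for v, m in zip(vectors, mags) if m <= k * med]

vectors = [(0, 0, 1.0, 0.0), (1, 0, 1.2, 0.1), (2, 0, 9.0, 4.0)]
print(drop_fast_object_vectors(vectors))  # the large (9.0, 4.0) vector is removed
```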
  • the image processing apparatus may compare motion vectors on opposite sides of the sphere.
  • the image processing apparatus may determine whether a motion vector corresponds to the global rotation. For example, the image processing apparatus may determine that two motion vectors on opposite sides of the sphere correspond to the global rotation when their magnitudes match within a specific threshold (e.g., ±10%) and they are parallel to each other but point in opposite directions.
  • the image processing apparatus may maintain the motion vector.
  • the image processing apparatus may exclude the motion vector when calculating the rotation.
  • FIG. 9 is a flowchart illustrating a method of converting a motion vector into 3D rotation by an image processing apparatus, according to an exemplary embodiment.
  • the image processing apparatus may classify the plurality of motion vectors into a plurality of bins, each corresponding to a specific magnitude range along a specific direction.
  • FIG. 10 illustrates a motion vector of a 360 degree image, according to an exemplary embodiment.
  • FIG. 10 illustrates a motion vector for a 360 degree image after applying the mask illustrated in FIG. 6.
  • in the present embodiment, for simplicity of explanation, only the motion vectors in the horizontal (x-axis) direction are shown.
  • this is merely an example, and the method applied to the present embodiment may be extended to motion vectors of other axes to determine three-dimensional rotation.
  • FIG. 11 is a table for describing a result of classifying a plurality of motion vectors into a plurality of bins, according to an exemplary embodiment.
  • the distance associated with a particular bin may be converted into an equivalent angle using a predetermined transformation as described above with reference to step S330 of FIG. 3.
  • in the present embodiment, the motion vectors have values between -1 and +12.
  • FIG. 12 illustrates a histogram of a plurality of motion vectors classified in FIG. 11 according to an exemplary embodiment.
  • the largest number of motion vectors (20) is included in the bin at a distance of 7.
  • the image processing apparatus may identify the bin including the largest number of motion vectors among the plurality of bins. As described above with reference to FIG. 12, the image processing apparatus may identify that the bin at a distance of 7 includes the most motion vectors.
  • the image processing apparatus may calculate the rotation using a weighted average over the identified bin and its neighboring bins.
  • the distance 7 corresponding to the bin identified in step S920 described above is equivalent to a rotation of 0.043 radians (2.46 °).
  • the image processing apparatus may determine a rotation for stabilizing a 360 degree image by converting a distance corresponding to the identified bin into an equivalent rotation by using a predetermined transformation.
  • in the present embodiment, the analysis was performed on a 360-degree image for which the actual camera rotation was measured as 0.04109753 radians; the estimate of 0.043 radians is therefore a reasonable approximation of the actual camera rotation.
  • the image processing apparatus may calculate the rotation using a weighted average across the bins identified in step S920 and the plurality of neighboring bins in order to increase the accuracy of the obtained rotation value.
  • as the weighted average, a Gaussian weighted average with an amplitude of three bins may be used.
  • alternatively, the rotation may be determined by summing over the plurality of motion vectors vj.
  • in this case, the three-dimensional rotation for stabilizing the 360-degree image may be obtained by determining the rotation R that minimizes the entire motion field, as shown in Equation 3.
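The binning and weighted-average refinement of FIGS. 9 to 12 can be sketched as follows. The 1-2-1 Gaussian-style kernel and the 1024-pixel frame width are assumptions; with these values, a peak bin at distance 7 maps to roughly 0.043 radians, consistent with the example in the text.

```python
# Sketch: bin horizontal pixel displacements, find the peak bin, refine with a
# weighted average over the peak and its two neighbours, then convert pixels to
# an angle (360 degrees spans frame_width pixels in an equirectangular frame).
import math
from collections import Counter

def estimate_rotation(displacements, frame_width):
    counts = Counter(displacements)        # one bin per integer displacement
    peak = max(counts, key=counts.get)     # bin containing the most vectors
    # Gaussian-style 1-2-1 weights over peak-1, peak, peak+1 (assumed kernel)
    num = den = 0.0
    for offset, w in ((-1, 1.0), (0, 2.0), (1, 1.0)):
        c = counts.get(peak + offset, 0)
        num += w * c * (peak + offset)
        den += w * c
    refined = num / den
    return refined * 2.0 * math.pi / frame_width  # pixels -> radians

disps = [6] * 5 + [7] * 20 + [8] * 8   # toy histogram peaking at distance 7
print(round(estimate_rotation(disps, frame_width=1024), 4))  # 0.0433
```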
  • FIG. 13 is a flowchart for describing a method of determining, by an image processing apparatus, rotation information by combining rotation information acquired based on motion vectors with sensor data regarding shaking of a 360-degree image.
  • the image processing apparatus may determine at least one motion vector indicating global rotation of the 360 degree image among the plurality of motion vectors with respect to the 360 degree image.
  • step S1310 may correspond to step S220 described above with reference to FIG. 2.
  • the image processing apparatus may obtain 3D rotation information by converting the determined at least one motion vector.
  • step S1320 may correspond to step S230 described above with reference to FIG. 2.
  • the image processing apparatus may re-determine the rotation information of the 360-degree image by combining the rotation information with sensor data regarding shaking obtained when the 360-degree image was captured.
  • the image processing apparatus may be set to acquire sensor data regarding shaking of the photographing apparatus while a 360 degree image is captured.
  • the image processing apparatus may consider sensor data when determining rotation.
  • the image processing apparatus may verify rotation information obtained by analyzing the motion vector using the sensor data, or verify rotation information obtained through the sensor data using the rotation information obtained by analyzing the motion vector.
  • the image processing apparatus may merge sensor data into rotation information obtained by analyzing a motion vector.
  • for example, the sensor data and the motion-vector analysis result may be merged by applying weights to each according to the relative error margin of the sensor data with respect to the rotation information obtained by analyzing the motion vectors.
  • This approach may be advantageous in scenarios in which the rotation calculated using the motion vector may have a larger error than the measurement obtained by the sensor. For example, the case where the scene has a large area without texture may be included in the above-described scenario. In this situation, more weight may be given to the sensor data.
  • sensors, on the other hand, can suffer from drift problems. The drift problem can be mitigated by combining the sensor data with the rotation computed from the motion vectors.
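A minimal sketch of the weighted merge described above, with illustrative (assumed) error margins for the two sources:

```python
# Sketch: combine the vision-based rotation estimate with a sensor-based one,
# weighting each by the inverse of its error margin, so the less reliable
# source contributes less to the fused value.

def fuse_rotation(vision_rot, vision_err, sensor_rot, sensor_err):
    w_vision = 1.0 / vision_err
    w_sensor = 1.0 / sensor_err
    return (w_vision * vision_rot + w_sensor * sensor_rot) / (w_vision + w_sensor)

# Texture-poor scene: the vision estimate is less reliable, so the fused value
# lands closer to the sensor reading. Error values here are illustrative.
fused = fuse_rotation(vision_rot=0.043, vision_err=0.010,
                      sensor_rot=0.041, sensor_err=0.002)
print(round(fused, 4))  # 0.0413
```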
  • FIG. 14 is a block diagram of an image processing apparatus 1400, according to an exemplary embodiment.
  • the image processing apparatus 1400 may include at least one processor 1410 and a memory 1420.
  • At least one processor 1410 may perform the processing method of the 360-degree image described above with reference to FIGS. 1 to 13. For example, the at least one processor 1410 may obtain a plurality of motion vectors for the 360 degree image. The at least one processor 1410 may determine at least one motion vector indicating global rotation of the 360 degree image among the plurality of motion vectors through filtering. In addition, the at least one processor 1410 may obtain the 3D rotation information about the 360 degree image by converting the determined at least one motion vector. The at least one processor 1410 may correct the distortion of the 360 degree image due to the shaking based on the obtained 3D rotation information.
  • the memory 1420 may store programs (one or more instructions) for processing and controlling the at least one processor 1410. Programs stored in the memory 1420 may be divided into a plurality of modules according to their functions.
  • the memory 1420 may include, as software modules, a data learner and a data recognizer, which will be described later with reference to FIG. 15.
  • the data learning unit and the data recognizing unit may each independently include a learning network model, or share one learning network model.
  • FIG. 15 is a diagram for describing at least one processor 1410 according to an exemplary embodiment.
  • At least one processor 1410 may include a data learner 1510 and a data recognizer 1520.
  • the data learner 1510 may learn a criterion for obtaining 3D rotation information from a plurality of motion vectors for a 360 degree image.
  • the data recognizer 1520 may determine 3D rotation information from the plurality of motion vectors for the 360 degree image based on the criteria learned by the data learner 1510.
  • At least one of the data learner 1510 and the data recognizer 1520 may be manufactured in the form of at least one hardware chip and mounted on the image processing apparatus.
  • for example, at least one of the data learner 1510 and the data recognizer 1520 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as part of an existing general purpose processor (e.g., a CPU or an application processor) or a graphics processor (e.g., a GPU) and mounted on the aforementioned various image processing apparatuses.
  • the data learner 1510 and the data recognizer 1520 may be mounted in one image processing apparatus, or may each be mounted on separate image processing apparatuses.
  • one of the data learner 1510 and the data recognizer 1520 may be included in the image processing apparatus, and the other may be included in the server.
  • in this case, model information constructed by the data learner 1510 may be provided to the data recognizer 1520 via a wired or wireless connection.
  • in addition, data input to the data recognizer 1520 may be provided to the data learner 1510 as additional learning data.
  • At least one of the data learner 1510 and the data recognizer 1520 may be implemented as a software module.
  • when implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium.
  • in this case, at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by an operating system (OS), and others may be provided by a predetermined application.
  • FIG. 16 is a block diagram of the data learner 1510, according to an exemplary embodiment.
  • the data learner 1510 may include a data acquirer 1610, a preprocessor 1620, a training data selector 1630, a model learner 1640, and a model evaluator 1650.
  • the data learning unit 1510 may be configured with fewer components than those described above, or other components may be additionally included in the data learning unit 1510.
  • the data acquirer 1610 may acquire at least one 360 degree image as learning data.
  • the data acquirer 1610 may acquire the at least one 360-degree image from an image processing apparatus including the data learner 1510, or from an external device capable of communicating with the image processing apparatus including the data learner 1510.
  • the preprocessor 1620 may process the obtained at least one 360 degree image in a preset format so that the model learner 1640, which will be described later, uses the at least one 360 degree image acquired for learning.
  • the training data selector 1630 may select a 360 degree image for learning from the preprocessed data.
  • the selected 360 degree image may be provided to the model learner 1640.
  • the training data selector 1630 may select a 360 degree image for learning from the preprocessed 360 degree images according to the set criteria.
  • the model learner 1640 may learn a criterion for determining the 3D rotation information from the plurality of motion vectors by using information extracted from the 360-degree image in the plurality of layers of the learning network model.
  • model learner 1640 may train the data recognition model, for example, through reinforcement learning using feedback on whether the acquired 360-degree image is suitable for learning.
  • the model learner 1640 may store the trained data recognition model.
  • the model evaluator 1650 may input evaluation data into the learning network model, and when the recognition result output for the evaluation data does not satisfy a predetermined criterion, may cause the model learner 1640 to retrain the model.
  • the evaluation data may be preset data for evaluating the learning network model.
  • at least one of the data acquirer 1610, the preprocessor 1620, the training data selector 1630, the model learner 1640, and the model evaluator 1650 in the data learner 1510 may be manufactured in the form of at least one hardware chip and mounted on an image processing apparatus.
  • for example, at least one of the data acquirer 1610, the preprocessor 1620, the training data selector 1630, the model learner 1640, and the model evaluator 1650 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as part of an existing general purpose processor (e.g., a CPU or an application processor) or a graphics dedicated processor (e.g., a GPU) and mounted on the above-described various image processing apparatuses.
  • in addition, the data acquirer 1610, the preprocessor 1620, the training data selector 1630, the model learner 1640, and the model evaluator 1650 may be mounted in one image processing apparatus, or may each be mounted on separate image processing apparatuses. For example, some of the data acquirer 1610, the preprocessor 1620, the training data selector 1630, the model learner 1640, and the model evaluator 1650 may be included in the image processing apparatus, and the others may be included in a server.
  • At least one of the data acquirer 1610, the preprocessor 1620, the training data selector 1630, the model learner 1640, and the model evaluator 1650 may be implemented as a software module.
  • in this case, at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by an operating system (OS), and others may be provided by a predetermined application.
  • FIG. 17 is a block diagram of a data recognizer 1520 according to an embodiment.
  • the data recognizer 1520 may include a data acquirer 1710, a preprocessor 1720, a recognition data selector 1730, a recognition result provider 1740, and a model updater 1750.
  • the data acquirer 1710 may acquire at least one 360 degree image, and the preprocessor 1720 may preprocess the obtained at least one 360 degree image.
  • the preprocessor 1720 may process the obtained at least one 360-degree image into a preset format so that the recognition result provider 1740, which will be described later, may use it to determine the 3D rotation information for the plurality of motion vectors.
  • the recognition data selector 1730 may select a motion vector required for determining 3D rotation information among a plurality of motion vectors included in the preprocessed data. The selected motion vector may be provided to the recognition result provider 1740.
  • the recognition result provider 1740 may determine 3D rotation information based on the selected motion vector. In addition, the recognition result providing unit 1740 may provide the determined 3D rotation information.
  • the model updater 1750 may provide information about the evaluation of the 3D rotation information provided by the recognition result provider 1740 to the model learner 1640 described above, so that the parameters of the layers included in the learning network model are updated.
  • at least one of the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 in the data recognizer 1520 may be manufactured in the form of at least one hardware chip and mounted on an image processing apparatus.
  • for example, at least one of the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 may be manufactured in the form of a dedicated hardware chip for artificial intelligence, or may be manufactured as part of an existing general purpose processor (e.g., a CPU or an application processor) or a graphics dedicated processor (e.g., a GPU) and mounted on the above-described various image processing apparatuses.
  • in addition, the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 may be mounted in one image processing apparatus, or may each be mounted on separate image processing apparatuses.
  • alternatively, some of the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 may be included in the image processing apparatus, and the others may be included in a server.
  • At least one of the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 may be implemented as a software module.
  • when at least one of the data acquirer 1710, the preprocessor 1720, the recognition data selector 1730, the recognition result provider 1740, and the model updater 1750 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium.
  • in this case, at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by an operating system (OS), and others may be provided by a predetermined application.
  • FIG. 18 is a block diagram of an image processing apparatus according to another exemplary embodiment.
  • the image processing apparatus may include a first device 1800 for analyzing a 360-degree image to determine three-dimensional rotation information, and a second device 1810 for generating a stabilized image based on the rotation provided by the first device 1800. In other embodiments, some or all of the components of the first device 1800 and the second device 1810 may be implemented as a single physical device.
  • the first device 1800 may include a motion vector obtaining unit 1801 that obtains a plurality of motion vectors for the 360-degree image, and a motion vector conversion unit 1802 that converts the plurality of motion vectors into a three-dimensional rotation and provides the three-dimensional rotation to an image processing unit 1811 included in the second device 1810.
  • the second device 1810 can include an image processing unit 1811 and a display 1812 that displays a stabilized 360 degree image rendered by the image processing unit 1811.
  • the second device 1810 may further include an input unit 1813 configured to receive a control input defining a rotation and/or translation of the imaging device.
  • the method according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer readable medium.
  • the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • a device may include a processor, a memory for storing and executing program data, a permanent storage such as a disk drive, a communication port for communicating with an external device, and a user interface such as a touch panel, keys, and buttons.
  • Methods implemented by software modules or algorithms may be stored on a computer readable recording medium as computer readable codes or program instructions executable on the processor.
  • the computer-readable recording medium may be a magnetic storage medium (e.g., read-only memory (ROM), random-access memory (RAM), floppy disk, hard disk, etc.) or an optical reading medium (e.g., CD-ROM and DVD (Digital Versatile Disc)).
  • the computer readable recording medium can be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • the medium is readable by the computer, stored in the memory, and can be executed by the processor.
  • An embodiment may be represented by functional block configurations and various processing steps. Such functional blocks may be implemented in various numbers of hardware or / and software configurations that perform particular functions.
  • for example, an embodiment may employ integrated circuit configurations, such as memory, processing, logic, and look-up tables, that can execute various functions under the control of one or more microprocessors or other control devices.
  • in addition, an embodiment may employ the same or different types of cores, or different types of CPUs.
  • similarly to how the components of the present invention may be implemented in software programming or software elements, an embodiment may be implemented in a programming or scripting language such as C, C++, Java, or assembler, including various algorithms implemented in combinations of data structures, processes, routines, or other programming constructs.
  • the functional aspects may be implemented with an algorithm running on one or more processors.
  • the embodiment may employ the prior art for electronic configuration, signal processing, and / or data processing.
  • terms such as "mechanism", "element", "means", and "configuration" may be used broadly and are not limited to mechanical and physical configurations. These terms may include the meaning of a series of software routines executed in conjunction with a processor or the like.
  • connections or connection members of lines between the components shown in the drawings exemplarily represent functional connections and/or physical or circuit connections; in an actual device, they may be implemented as various replaceable or additional functional, physical, or circuit connections.
  • in addition, a component may not be necessary for the application of the present invention unless it is specifically described with terms such as "essential" or "important".

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a 360-degree image processing method for: acquiring a plurality of motion vectors for a 360-degree image; determining, by filtering, at least one motion vector indicating a global rotation of the 360-degree image from among the plurality of motion vectors; performing a three-dimensional transformation on the determined at least one motion vector so as to acquire three-dimensional rotation information of the 360-degree image; and correcting distortion of the 360-degree image caused by shaking on the basis of the acquired three-dimensional rotation information.
PCT/KR2018/005440 2017-05-18 2018-05-11 Procédé et appareil de traitement d'image à 360 degrés Ceased WO2018212514A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880032626.7A CN110622210A (zh) 2017-05-18 2018-05-11 用于处理360度图像的方法和装置
US16/606,004 US20210142452A1 (en) 2017-05-18 2018-05-11 Method and apparatus for processing 360-degree image
DE112018002554.3T DE112018002554T5 (de) 2017-05-18 2018-05-11 Verfahren und Vorrichtung zur Verarbeitung eines 360-Grad-Bildes

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1708001.1 2017-05-18
GB1708001.1A GB2562529B (en) 2017-05-18 2017-05-18 Method and apparatus for stabilising 360 degree video
KR10-2018-0045741 2018-04-19
KR1020180045741A KR102444292B1 (ko) 2017-05-18 2018-04-19 360도 영상을 처리하는 방법 및 장치

Publications (1)

Publication Number Publication Date
WO2018212514A1 true WO2018212514A1 (fr) 2018-11-22

Family

ID=64274188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/005440 Ceased WO2018212514A1 (fr) 2017-05-18 2018-05-11 Procédé et appareil de traitement d'image à 360 degrés

Country Status (1)

Country Link
WO (1) WO2018212514A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005056295A (ja) * 2003-08-07 2005-03-03 Iwane Kenkyusho:Kk 360度画像変換処理装置
US20070286286A1 (en) * 2006-04-21 2007-12-13 Dilithium Holdings, Inc. Method and System for Video Encoding and Transcoding
KR20070119525A (ko) * 2006-06-14 2007-12-20 소니 가부시끼가이샤 화상 처리 장치, 화상 처리 방법, 촬상 장치 및 촬상 방법
KR101137107B1 (ko) * 2003-12-09 2012-07-02 마이크로소프트 코포레이션 그래픽 처리 유닛을 사용해 기계 학습 기술들의 처리를가속화하고 최적화하는 시스템 및 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005056295A (ja) * 2003-08-07 2005-03-03 Iwane Kenkyusho:Kk 360度画像変換処理装置
KR101137107B1 (ko) * 2003-12-09 2012-07-02 마이크로소프트 코포레이션 그래픽 처리 유닛을 사용해 기계 학습 기술들의 처리를가속화하고 최적화하는 시스템 및 방법
US20070286286A1 (en) * 2006-04-21 2007-12-13 Dilithium Holdings, Inc. Method and System for Video Encoding and Transcoding
KR20070119525A (ko) * 2006-06-14 2007-12-20 소니 가부시끼가이샤 화상 처리 장치, 화상 처리 방법, 촬상 장치 및 촬상 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SEO, YEONG GEON ET AL.: "A Generation of ROI Mask and an Automatic Extraction of ROI Using Edge Distribution of JPEG2000 Image", JOURNAL OF DIGITAL CONTENTS SOCIETY, vol. 16, no. 4, 31 August 2015 (2015-08-31), pages 583 - 593, Retrieved from the Internet <URL:http://koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DGTCBD2015_v16n4_583> *

Similar Documents

Publication Publication Date Title
WO2020032354A1 (fr) Method, storage medium, and apparatus for converting a set of 2D images into a 3D model
WO2021045599A1 (fr) Method for applying a bokeh effect to a video image, and recording medium
WO2015194864A1 (fr) Device for updating a map of a mobile robot, and method therefor
WO2015194867A1 (fr) Device for recognizing the position of a mobile robot using direct tracking, and method therefor
WO2020071839A1 (fr) Device and method for monitoring a harbor and ships
WO2015194865A1 (fr) Device and method for recognizing the location of a mobile robot by means of search-based correlation matching
WO2015194866A1 (fr) Device and method for recognizing the location of a mobile robot by means of edge-based readjustment
WO2015194868A1 (fr) Device for controlling the driving of a mobile robot on which wide-angle cameras are mounted, and method therefor
WO2020231153A1 (fr) Electronic device and method for assisting vehicle driving
WO2018093100A1 (fr) Electronic apparatus and image processing method thereof
US20110096143A1 Apparatus for generating a panoramic image, method for generating a panoramic image, and computer-readable medium
WO2020027519A1 (fr) Image processing device and method of operating same
WO2021085757A1 (fr) Video frame interpolation method robust to exceptional motion, and apparatus therefor
KR102444292B1 (ko) Method and apparatus for processing 360-degree image
WO2019022509A1 (fr) Device and method for providing content
WO2020101420A1 (fr) Method and apparatus for measuring optical characteristics of an augmented reality device
WO2021241994A1 (fr) Method and device for generating a 3D model through RGB-D camera tracking
WO2021246822A1 (fr) Method and apparatus for enhancing an object image
WO2020204287A1 (fr) Display apparatus and image processing method thereof
WO2019017720A1 (fr) Camera system for protecting privacy, and corresponding method
WO2024167173A1 (fr) Full-frame media stabilization in electronic devices
WO2022255777A1 (fr) Modeling method, modeling device, and modeling system for modeling a target object
WO2019194544A1 (fr) Method and system for managing 360-degree image content
WO2019112296A1 (fr) Image processing apparatus, and mobile robot equipped with same
WO2018212514A1 (fr) Method and apparatus for processing 360-degree image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18801440

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18801440

Country of ref document: EP

Kind code of ref document: A1