CN102447927B - Method for warping three-dimensional image with camera calibration parameter - Google Patents
Description
Technical Field

The invention belongs to the technical field of Depth-Image-Based Rendering (DIBR) in 3D television systems and, more specifically, relates to a method for warping a three-dimensional image with camera calibration parameters.
Background Art

Depth-image-based rendering generates a new virtual-viewpoint image, the destination image, from a reference image and its corresponding depth image. Compared with the traditional stereoscopic video format, which synthesizes a 3D picture from two separate planar video streams, DIBR needs only a single video stream together with its depth-image sequence, makes switching between 2D and 3D convenient, and avoids the computational complexity of the full 3D space transformation required by traditional view-generation methods. For these reasons DIBR has been widely adopted for stereoscopic synthesis in 3D television and has attracted growing interest. Video that relies on this technique is commonly called depth-image-based 3D video.
The core step of DIBR is 3D image warping: the points of the reference image are projected into 3D space and then re-projected onto the destination image plane, producing the view from the new viewpoint, i.e. the destination image.
However, in traditional 3D image warping methods the computation of the destination-image pixels is complicated and expensive; real-time rendering is currently out of reach, and the methods do not lend themselves to hardware implementation.
Summary of the Invention

The object of the present invention is to overcome the deficiencies of the prior art and to provide a computationally inexpensive method for warping a three-dimensional image with camera calibration parameters.

To achieve this object, the method of the present invention comprises the following steps:
(1) Initialize the disparity map M, setting all of its elements to the hole-point disparity value.
(2) Determine whether the destination image is a left view or a right view. For a left view, set the Boolean variable α to 1 and scan the reference image from right to left, top to bottom, traversing the pixels of the reference image I_ref row by row; for a right view, set α to 0 and scan from left to right, top to bottom, again traversing the pixels of I_ref row by row.
During the traversal: 2.1) For the pixel p_ref in row v_ref, column u_ref of the reference image I_ref, compute its matching pixel p_des in the destination image I_des according to formula (1).
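Formula (1) itself appears only as an image in the published record. A reconstruction consistent with the symbol definitions that follow, under the assumed shift-sensor sign convention (the Boolean α = 1 for a left view, α = 0 for a right view), is:

```latex
u_{des} = u_{ref} + (2\alpha - 1)\left(\frac{f \, s_x \, B}{z_w} - h\right),
\qquad v_{des} = v_{ref} \tag{1}
```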
In formula (1), (u_ref, v_ref) and (u_des, v_des) are the horizontal and vertical (x- and y-axis) coordinates of the pixel p_ref in the reference image I_ref and of its matching pixel p_des in the destination image I_des, respectively; h is the number of pixels of horizontal shift applied when the shift-sensor camera setup places the zero-parallax (ZPS) plane; f is the focal length of the image; s_x is the number of pixels per unit physical length along the x axis in the conversion from the physical image coordinate system to the pixel coordinate system; B is the baseline length; and z_w is the depth value of the pixel p_ref.
The depth value z_w is determined according to formula (2):
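Formula (2), also reproduced only as an image in the published record, can be reconstructed from the definitions below as the usual DIBR depth de-quantization:

```latex
z_w = \cfrac{1}{\cfrac{D(u_{ref},\, v_{ref})}{g - 1}
\left(\cfrac{1}{z_{min}} - \cfrac{1}{z_{max}}\right) + \cfrac{1}{z_{max}}} \tag{2}
```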
In formula (2), D(u_ref, v_ref) is the gray value of the pixel (u_ref, v_ref) in the depth image, g is the number of gray levels of the depth image, z_min is the nearest depth value, and z_max is the farthest depth value. The depth image is usually quantized to 8 bits, i.e. g = 256.
2.2) Determine whether the pixel p_des falls inside the destination image I_des. If its horizontal coordinate u_des satisfies

0 ≤ u_des < W_i,

then p_des lies inside I_des; copy the pixel value of p_ref to p_des and set the element (u_des, v_des) of the disparity map M to u_des − u_ref. Here W_i is the number of horizontal pixels of the destination image.
(3) Once all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M.
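Steps (1)-(3) can be sketched in Python as a check on the flow of the method. This is an illustrative sketch only: the function and variable names are ours, plain Python lists stand in for image buffers, and the sign convention of formula (1) (α = 1 shifting the left view in the positive u direction) is an assumption, since the formula itself appears only as an image in the published record.

```python
HOLE = -128  # disparity value marking a hole point (illustrative constant)

def depth_from_gray(d, z_min, z_max, g=256):
    """Formula (2): metric depth z_w from the 8-bit depth-map gray value d
    (d = 0 is the farthest plane z_max, d = g-1 the nearest plane z_min)."""
    return 1.0 / (d / (g - 1) * (1.0 / z_min - 1.0 / z_max) + 1.0 / z_max)

def warp_3d(I_ref, D, f, s_x, B, h, z_min, z_max, left_view):
    """Steps (1)-(3): warp the reference image I_ref into the destination
    view, scanning in painter's-algorithm order and shifting each pixel
    horizontally by the formula-(1) disparity."""
    Hi, Wi = len(I_ref), len(I_ref[0])
    I_des = [row[:] for row in I_ref]            # pre-copy fills small holes
    M = [[HOLE] * Wi for _ in range(Hi)]         # step (1): all hole points
    sign = 1 if left_view else -1                # Boolean alpha: 1 = left view
    cols = range(Wi - 1, -1, -1) if left_view else range(Wi)
    for v in range(Hi):                          # step (2): row-by-row scan
        for u in cols:
            z_w = depth_from_gray(D[v][u], z_min, z_max)
            u_des = int(round(u + sign * (f * s_x * B / z_w - h)))
            if 0 <= u_des < Wi:                  # step (2.2): inside I_des?
                I_des[v][u_des] = I_ref[v][u]    # copy the pixel value
                M[v][u_des] = u_des - u          # record its disparity
    return I_des, M                              # step (3)
```

Note that with h = f·s_x·B/z_w the whole scene lies on the ZPS plane, so the warp degenerates to an identity copy with zero disparity everywhere; any target pixel never written keeps the HOLE marker.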
The object of the invention is achieved as follows:

In the method of the present invention, the destination image is generated under a shift-sensor camera setup obtained by translating the camera. Using the simplified DIBR warping formula together with the pixel-depth formula, computing the coordinates of a destination pixel requires only a horizontal shift, after which the pixel value of the reference-image point is copied to the corresponding destination point. This reduces the amount of computation and favors hardware implementation and real-time processing.
Brief Description of the Drawings

Figure 1 is a schematic diagram of generating the destination image, i.e. the right view, from the reference image taken as the left view;

Figure 2 is a schematic diagram of generating the destination image, i.e. the left view, from the reference image taken as the right view;

Figure 3 is a flow chart of one embodiment of the method of the present invention for warping a three-dimensional image with camera calibration parameters.
Detailed Description

Specific embodiments of the present invention are described below with reference to the accompanying drawings, so that those skilled in the art can better understand the invention. It should be noted that detailed descriptions of known functions and designs are omitted in what follows wherever they would obscure the main content of the invention.
Figure 1 is a schematic diagram of generating the destination image, i.e. the right view, from the reference image taken as the left view.

In this embodiment, as shown in Figure 1, the painter's algorithm dictates that when rendering the right view the reference image is scanned from left to right, top to bottom, traversing row by row the pixels p_ref of the reference image I_ref, with coordinates (u_ref, v_ref). The warping formula then gives the horizontal and vertical coordinates of the matching pixel p_des in the destination image I_des; the pixel value of p_ref is copied to p_des, and the element (u_des, v_des) of the disparity map M is set to u_des − u_ref.
Figure 2 is a schematic diagram of generating the destination image, i.e. the left view, from the reference image taken as the right view.

In this embodiment, as shown in Figure 2, the painter's algorithm dictates that when rendering the left view the reference image is scanned from right to left, top to bottom, traversing row by row the pixels p_ref of the reference image I_ref, with coordinates (u_ref, v_ref). The warping formula then gives the horizontal and vertical coordinates of the matching pixel p_des in the destination image I_des; the pixel value of p_ref is copied to p_des, and the element (u_des, v_des) of the disparity map M is set to u_des − u_ref.
Figure 3 is a flow chart of one embodiment of the method of the present invention.

In this embodiment, as shown in Figure 3, the method generates a destination image from a reference image and its depth image when the camera calibration parameters are known.

As shown in Figure 3, the specific steps are:

1) Input the reference image I_ref and its depth image D, both of resolution W_i × H_i; the focal length f; the number s_x of pixels per unit physical length along the x axis; the baseline length B; the number h (h ≥ 0) of pixels of horizontal shift applied to the reference image to set the ZPS plane; the nearest depth value z_min, i.e. the depth of the near clipping plane, and the farthest depth value z_max, i.e. the depth of the far clipping plane; and the scan-order flag rend_order of the reference image, whose value is given by the painter's algorithm.
2) Initialize the parameters.

Initialize the disparity map M, setting all of its elements to −128; −128 marks a hole point, so this sets every element of M to the hole-point disparity value. Copy the pixels of the reference image I_ref directly into the destination image I_des, each pixel keeping the same position it had in I_ref, so that I_des starts out identical to I_ref; this pre-copy helps fill some small holes later.
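The initialization of step 2) can be sketched as follows; the function name and the use of list-of-lists buffers are illustrative choices, while the −128 hole marker follows the text.

```python
HOLE = -128  # hole-point marker, per step 2)

def init_warp(I_ref):
    """Step 2): the destination image starts as an exact copy of the
    reference image (pre-filling small holes), and every element of the
    disparity map is marked as a hole."""
    Hi, Wi = len(I_ref), len(I_ref[0])
    I_des = [row[:] for row in I_ref]   # copy each row so I_ref stays intact
    M = [[HOLE] * Wi for _ in range(Hi)]
    return I_des, M
```

The row-wise copy matters: writing warped pixels into I_des must not disturb the reference image that is still being scanned.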
3) Set the vertical coordinate v_ref to 0, i.e. v_ref = 0.
4) Determine from the scan-order flag rend_order how the reference image I_ref is to be scanned, i.e. whether the destination image is a left view or a right view: rend_order = 0 means scanning from left to right, top to bottom, generating the right view; rend_order = 1 means scanning from right to left, top to bottom, generating the left view. The scanning and copying are illustrated in Figures 1 and 2.
5) When generating the right view, set the horizontal coordinate u_ref to 0, i.e. u_ref = 0; when generating the left view, set u_ref to W_i − 1, i.e. u_ref = W_i − 1.
6) Compute the coordinate u_des of the pixel p_des matching the pixel p_ref according to formula (1).
7) Check whether the coordinate u_des of the pixel p_des satisfies 0 ≤ u_des < W_i, i.e. whether p_des falls inside the destination image I_des. If it does, go to step 8); otherwise go directly to step 9).
8) Copy the pixel value of p_ref to the pixel p_des and set the element (u_des, v_des) of the disparity map M to u_des − u_ref, i.e. M(u_des, v_des) = u_des − u_ref.
9) When generating the right view, increment the horizontal coordinate u_ref, i.e. u_ref = u_ref + 1; when generating the left view, decrement it, i.e. u_ref = u_ref − 1.
10) When generating the right view, check whether u_ref is smaller than the number W_i of horizontal pixels of the reference image I_ref; if so, return to step 6), otherwise go to step 11). When generating the left view, check whether u_ref ≥ 0; if so, return to step 6), otherwise go to step 11). Either test determines whether the traversal of the current row is complete.
11) Increment the vertical coordinate v_ref, i.e. v_ref = v_ref + 1.
12) Check whether the vertical coordinate v_ref is smaller than the number H_i of vertical pixels of the reference image I_ref; if so, return to step 5), otherwise go to step 13).
13) Once all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M. Each element of M is an 8-bit signed integer indicating whether the corresponding point of I_des is a hole point: −128 marks a hole point, while any other value means the point is not a hole and gives its disparity.
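The double role of the disparity map described in step 13) — disparity values plus hole markers — suggests a trivial hole detector for the later hole-filling stage; a sketch (function name ours):

```python
def hole_points(M):
    """Return the destination-image coordinates (v, u) that no reference
    pixel was warped onto: their disparity-map entries still hold -128."""
    return [(v, u) for v, row in enumerate(M)
                   for u, d in enumerate(row) if d == -128]
```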
Although illustrative embodiments of the present invention have been described above so that those skilled in the art may understand it, it should be clear that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are obvious as long as they remain within the spirit and scope of the invention as defined and determined by the appended claims, and all inventions and creations making use of the inventive concept are under protection.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110278135 CN102447927B (en) | 2011-09-19 | 2011-09-19 | Method for warping three-dimensional image with camera calibration parameter |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102447927A CN102447927A (en) | 2012-05-09 |
CN102447927B (en) | 2013-11-06
Family
ID=46009947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110278135 Expired - Fee Related CN102447927B (en) | 2011-09-19 | 2011-09-19 | Method for warping three-dimensional image with camera calibration parameter |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102447927B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102724526B (en) * | 2012-06-14 | 2014-09-10 | 清华大学 | Three-dimensional video rendering method and device |
JP5977591B2 (en) * | 2012-06-20 | 2016-08-24 | オリンパス株式会社 | Image processing apparatus, imaging apparatus including the same, image processing method, and computer-readable recording medium recording an image processing program |
CN104683788B (en) * | 2015-03-16 | 2017-01-04 | 四川虹微技术有限公司 | Gap filling method based on image re-projection |
CN109714587A (en) * | 2017-10-25 | 2019-05-03 | 杭州海康威视数字技术股份有限公司 | A kind of multi-view image production method, device, electronic equipment and storage medium |
CN108900825A (en) * | 2018-08-16 | 2018-11-27 | 电子科技大学 | A kind of conversion method of 2D image to 3D rendering |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101404777A (en) * | 2008-11-06 | 2009-04-08 | 四川虹微技术有限公司 | Drafting view synthesizing method based on depth image |
CN101938669A (en) * | 2010-09-13 | 2011-01-05 | 福州瑞芯微电子有限公司 | Self-adaptive video converting system for converting 2D into 3D |
US20110115886A1 (en) * | 2009-11-18 | 2011-05-19 | The Board Of Trustees Of The University Of Illinois | System for executing 3d propagation for depth image-based rendering |
Non-Patent Citations (3)

Title |
---|
Chen Sili et al., "A virtual view synthesis algorithm based on DIBR," Journal of Chengdu Electromechanical College, Vol. 13, No. 1, March 2010, pp. 15-18 * |
Liu Zhanwei et al., "Arbitrary viewpoint rendering based on DIBR and image fusion," Journal of Image and Graphics, Vol. 12, No. 10, October 2007, pp. 1696-1700 * |
Xu Ping, "Research on key techniques of 2D-to-3D video conversion based on depth-image-based rendering," Master's thesis, Nanjing University of Posts and Telecommunications, March 2011, pp. 26-28 * |
Also Published As
Publication number | Publication date |
---|---|
CN102447927A (en) | 2012-05-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20131106; termination date: 20160919 |