CN102867321A - Glasses virtual try-on interactive service system and method - Google Patents
- Publication number: CN102867321A
- Application number: CN201110192119XA
- Authority: CN (China)
- Prior art keywords: feature points, glasses, feature, interactive service, try
- Prior art date: 2011-07-05
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
Description
Technical Field
The present invention relates to a three-dimensional (3D) virtual glasses try-on interactive service system and method that works with live images of real people, and in particular to a 3D glasses try-on interactive service system and method for use on an e-commerce interactive platform.
Background Art
With the rapid growth of e-commerce, more and more consumers rely on e-commerce interactive platforms to select the goods they like. Composite pictures showing products on models, together with electronic product try-on systems and software, increasingly attract consumers' attention and stimulate their desire to buy. Glasses try-on systems that use live images of real people are among the most popular: with nothing more than a photo of themselves, users can find the glasses they like, and that suit them, from tens of thousands of glasses products.
However, most traditional glasses try-on systems are two-dimensional (2D): users can only view a frontal image of themselves wearing the glasses, not side views from the left or right. Nor can traditional systems composite the glasses correctly onto the consumer's face as the face moves and turns, so the synthesized image often looks abrupt or unrealistic.
Given that traditional glasses try-on systems and methods lack an effective and economical mechanism to solve these problems, a novel virtual glasses try-on interactive service system and method is urgently needed, one that can solve them through accurate simulation and computation.
Summary of the Invention
To solve the above problems, the present invention provides a virtual glasses try-on interactive service system and method for live images. Through accurate simulation and computation, it solves the problem that glasses cannot be composited properly onto a consumer's face while the face moves and turns, which otherwise makes the synthesized image abrupt or unrealistic.
According to one embodiment, the present invention provides a virtual glasses try-on interactive service method for live images, comprising: locating a face through a frame shown on the screen and capturing a first face image; obtaining a plurality of first feature points at the eyes of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; judging from a frame-to-frame comparison whether the face is moving, and capturing a second face image from the next frame; obtaining a plurality of second feature points in the second face image by combining a search comparison range with a feature-information tracking method, thereby obtaining positioning information for the second feature points; comparing the relative position information of the first and second face images to determine the position, movement state, and scale of the face, and from that computing the positions of the second feature points; and compositing a preset glasses model at the positions of the second feature points.
According to this embodiment, the present invention provides a virtual glasses try-on interactive service system for live images, comprising: an image capture unit that locates a face through a frame shown on the screen and captures a first face image; a processing unit, coupled to the image capture unit, that obtains a plurality of first feature points at the eyes of the first face image, applies a sampling pattern around each first feature point to obtain a plurality of pieces of useful first feature information and stores them, judges from a frame-to-frame comparison whether the face is moving, captures a second face image from the next frame, and obtains a plurality of second feature points in the second face image by combining a search comparison range with a feature-information tracking method, thereby obtaining positioning information for the second feature points; an analysis unit, coupled to the processing unit, that compares the relative position information of the first and second face images to determine the position, movement state, and scale of the face, and from that computes the positions of the second feature points; and a synthesis unit, coupled to the analysis unit, that composites a preset virtual glasses model at the positions of the second feature points. The positions of dynamic third, fourth, and subsequent feature points are obtained in the same way.
According to another embodiment, the present invention provides a virtual contact-lens try-on interactive service method, comprising: locating a face through a frame shown on the screen and capturing a first face image; obtaining a plurality of first feature points at the eye pupils of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; and compositing a preset contact-lens model at the positions of the corresponding second feature points. The positions of dynamic third, fourth, and subsequent feature points are obtained in the same way.
For a fuller explanation of the present invention, the following drawings, reference-sign descriptions, and detailed description are provided, in the hope of assisting the examination.
Brief Description of the Drawings
FIG. 1A and FIG. 1B show a virtual glasses try-on interactive service method according to an embodiment of the present invention;
FIG. 2A to FIG. 2D show operations of the method of FIG. 1;
FIG. 3A to FIG. 3C show the tilt, rotation, and movement angles of a face;
FIG. 4 is a schematic view of glasses composited onto a face;
FIG. 5A to FIG. 5C show a method of making a three-dimensional glasses model according to an embodiment of the present invention;
FIG. 6A and FIG. 6B show a virtual contact-lens try-on interactive service method according to another embodiment of the present invention;
FIG. 7A and FIG. 7B show operations of the method of FIG. 6;
FIG. 8 shows a virtual glasses try-on interactive service system according to an embodiment of the present invention.
Reference Signs
Steps s101~s107
Steps s601~s607
81 image capture unit
82 processing unit
83 analysis unit
84 synthesis unit
85 glasses database
Detailed Description of the Embodiments
The detailed structure of the invention and the relationships between its parts are described with reference to the following drawings.
FIG. 1 shows a virtual glasses try-on interactive service method for live images according to an embodiment of the present invention. First, a face is located through a frame shown on the screen and a first face image is captured (step s101). For example, as shown in FIG. 2A, this embodiment displays a dashed frame on the screen and asks the user to place the front of the face inside it, matching the frame size and aligning the eyes with the horizontal line, so that the face is located and the first face image can be taken. Next, a plurality of first feature points are obtained at the eye features of the first face image, such as the corners of both eyes (step s102); these first feature points may include, but are not limited to, the inner and outer corners of the left and right eyes and the two corners of the mouth. For example, as marked with crosses in FIG. 2B, the corner points of the left and right eyes and the two mouth-corner points of the first face image may be selected manually to obtain one or more first feature points each. In another embodiment, face recognition may instead be used to let the program capture the feature points automatically. After step s102, this embodiment may further judge, from the first feature points, whether the first face image conforms to face logic; if not, the first face image or the first feature points are reacquired. If it does, the next action is taken: a search operation is applied to judge whether the first feature points lie inside the frame; if not, the first face image or the first feature points are reacquired; if so, the method proceeds. Ways of judging whether the feature-point positions conform to face logic include: whether the corners of the left (right) eye lie in the left (right) half of the dashed frame; whether the inner and outer corners of the same eye are a certain distance apart; and whether the heights of the inner and outer corners of one eye, or of the left and right mouth corners, differ too much. If any one of these conditions is not met, the first feature points or the first face image must be reacquired.
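These face-logic checks can be sketched in code. The following is a minimal, illustrative Python sketch; the point names, coordinate conventions, and thresholds are assumptions for illustration, not values taken from this disclosure.

```python
def passes_face_logic(points, frame_left, frame_right,
                      min_eye_span=20, max_height_diff=15):
    """Illustrative face-logic check; `points` maps assumed names such as
    'l_outer', 'l_inner', 'r_inner', 'r_outer', 'mouth_l', 'mouth_r'
    to (x, y) pixel coordinates. Thresholds are assumed values."""
    cx = (frame_left + frame_right) / 2.0

    # Left-eye corners must lie in the left half of the dashed frame,
    # right-eye corners in the right half.
    if not (points['l_outer'][0] < cx and points['l_inner'][0] < cx):
        return False
    if not (points['r_inner'][0] > cx and points['r_outer'][0] > cx):
        return False

    # Inner and outer corners of the same eye must be a minimum distance apart.
    for a, b in (('l_outer', 'l_inner'), ('r_inner', 'r_outer')):
        if abs(points[a][0] - points[b][0]) < min_eye_span:
            return False

    # Heights of the two corners of one eye, and of the two mouth corners,
    # must not differ by more than a tolerance.
    for a, b in (('l_outer', 'l_inner'), ('r_inner', 'r_outer'),
                 ('mouth_l', 'mouth_r')):
        if abs(points[a][1] - points[b][1]) > max_height_diff:
            return False
    return True
```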
Next, a sampling pattern is applied around each first feature point to obtain a plurality of pieces of useful first feature information, which are stored (step s103); the point spacings between the first feature points are stored at the same time. More specifically, the feature information is pixel color information: from each feature point, m directions radiate outward and n pixels are taken along each direction as color information, where m and n are positive integers. Alternatively, the m directions radiate in a semicircle from each feature point, with n pixels taken along each direction, where m and n are positive integers and the semicircle covers at least one eye corner. As shown in FIG. 2C, this embodiment uses the sampling pattern to capture the color information of pixels adjacent to each feature point while recording the spacing between feature points. For example, eight directions extend outward from a feature point in a semicircular radial pattern, with seven points taken along each, giving 56 pieces of color information as the feature information of that point, the semicircle fully covering at least one eye corner. Alternatively, as shown in FIG. 2D, in another embodiment the sampling pattern extends eight directions radially outward from the feature point, again taking seven points per direction for a total of 56 pieces of color information.
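The sampling pattern of step s103 (m rays radiating from a feature point, n pixels per ray) can be sketched as follows. This is a minimal Python/NumPy sketch; the one-pixel step along each ray and the omitted image-border clamping are assumptions, not details from this disclosure.

```python
import math

import numpy as np

def sample_feature_info(image, cx, cy, m=8, n=7, half_circle=False):
    """Take n pixels along each of m rays radiating from the feature
    point (cx, cy) of an H x W x 3 image; m=8, n=7 gives the 56 color
    samples described above. The one-pixel step size is an assumption."""
    span = math.pi if half_circle else 2.0 * math.pi
    samples = []
    for i in range(m):
        angle = span * i / m
        dx, dy = math.cos(angle), math.sin(angle)
        for r in range(1, n + 1):
            x = int(round(cx + r * dx))   # border clamping omitted
            y = int(round(cy + r * dy))
            samples.append(image[y, x])
    return np.asarray(samples, dtype=float)   # shape (m*n, 3)
```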
Next, the second face image is captured from the next frame, and frame comparison is used to judge whether the face is moving (step s104). Within a time interval, the face in the current frame is compared with the face in the next frame, and the first feature points are tracked dynamically for movement. For example, in this embodiment, the pixels of the current frame and the next frame are subtracted to reveal objects that moved within the interval; if obvious traces of movement appear near the feature points, the face moved during that time, and the subsequent steps and computations are performed. If the face did not move, no further tracking is done. In another embodiment, moving objects within the interval may instead be found by subtracting a background image from the frame. Alternatively, the number of white points (moving points) in the pixel difference between consecutive frames determines how much movement the current image contains: many white points mean the face is moving, so the subsequent steps and computations are performed; if there are no white points near the feature points, the face shows no obvious trace of movement and no further tracking is done.
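Step s104's frame-difference test can be sketched with OpenCV as below. The binarization threshold, search radius, and white-point count are illustrative assumptions.

```python
import cv2
import numpy as np

def face_moved(prev_frame, next_frame, feature_point, radius=30,
               diff_thresh=25, min_white_points=50):
    """Subtract consecutive frames, binarize the difference, and count
    white (moving) points near one feature point; all thresholds here
    are assumed values."""
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    x, y = feature_point
    roi = mask[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
    return int(np.count_nonzero(roi)) >= min_white_points
```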
Next, a noise filter is applied to remove noise from the next frame, so that noise does not interfere with the later comparison and raise the error rate; the noise filter is one of Gaussian blur, the median filter, and the mean filter.
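The three named filters map directly onto standard OpenCV calls; a minimal sketch, with the kernel size an illustrative choice:

```python
import cv2

def denoise(frame, method="gaussian", k=5):
    """Apply one of the three noise filters named above
    (k is an assumed kernel size)."""
    if method == "gaussian":
        return cv2.GaussianBlur(frame, (k, k), 0)
    if method == "median":
        return cv2.medianBlur(frame, k)
    return cv2.blur(frame, (k, k))  # mean (box) filter
```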
When traces of movement are found, the method further combines the search comparison range with feature-information tracking to obtain a plurality of second feature points in the second face image and thus a plurality of pieces of second feature information (step s105), and compares the relative position information of the first and second face images to determine the position, movement state, and scale of the face, from which the positions of the second feature points are computed (step s106). At least one comparison range is preset in which the first feature information is compared with the second feature information; a plurality of error values between them are taken and sorted, and the i smallest error values are kept, from which the positions of the second feature points are obtained. In this embodiment, the decision differs between two states. (1) Search state: entered when detection first starts after the feature points are selected, or when tracking fails. In this state the comparison range is static, restricted to the neighborhood of the points the user first selected; in effect, the user must face the dashed frame for the face to fall inside the comparison range. (2) Tracking state: entered when feature points were successfully matched in the previous frame. Here the comparison region is the neighborhood of the feature points matched in the previous frame, that is, it is dynamic and moves with the points currently being tracked. Within the comparison range, N pixels are taken according to the designed sampling pattern; for each pixel, the sampling pattern yields the 56 points neighboring that pixel, and the RGB and YCbCr color information of those points is compared with the first feature information recorded at the start, giving error values Error 1 to Error N. These N values are sorted, the i pixels with the smallest errors are taken as candidate points (for example, ten of them), the candidates are clustered once more to remove outliers, and finally the coordinates of the remaining concentrated pixels are averaged; that result is the final tracking result. If the error values found at all N points are too large, the user has moved beyond the tracking range, or an occluder has appeared in front of the feature points; tracking is then judged to have failed, and no further computation is performed.
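The candidate ranking of steps s105 and s106 can be sketched as below: sample N candidate pixels, compare each pixel's 56-point RGB and YCbCr patterns against the stored first feature information, sort the errors, keep the best i, drop outliers, and average. The sketch reuses sample_feature_info from the earlier sketch; the absolute-difference error metric, search radius, and failure threshold are assumptions.

```python
import cv2
import numpy as np

def track_point(frame, stored_rgb, stored_ycc, center, radius=15,
                top_i=10, max_error=5000.0):
    """Illustrative tracking step. stored_rgb / stored_ycc are the
    (56, 3) sample arrays recorded at step s103 with sample_feature_info;
    returns the tracked (x, y), or None when tracking is judged to fail."""
    # OpenCV names the color space YCrCb; it carries the YCbCr channels.
    ycc = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    cx, cy = center
    candidates = []
    for y in range(cy - radius, cy + radius + 1):      # the N candidate pixels
        for x in range(cx - radius, cx + radius + 1):
            err = (np.abs(sample_feature_info(frame, x, y) - stored_rgb).sum()
                   + np.abs(sample_feature_info(ycc, x, y) - stored_ycc).sum())
            candidates.append((float(err), x, y))
    candidates.sort(key=lambda c: c[0])                # Error 1 .. Error N
    best = candidates[:top_i]                          # the i smallest errors
    if best[0][0] > max_error:                         # all errors too large:
        return None                                    # tracking failed
    pts = np.array([(x, y) for _, x, y in best], dtype=float)
    # One more grouping pass: drop candidates far from the cluster mean.
    mean, std = pts.mean(axis=0), pts.std(axis=0) + 1e-6
    keep = np.all(np.abs(pts - mean) <= std, axis=1)
    kept = pts[keep] if keep.any() else pts
    return tuple(kept.mean(axis=0))                    # final tracking result
```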
In addition, this example can perform geometric calculations. For example: from the positions of the first and second feature points, the tilt of the face is computed from a slope; from the distance between the first feature points and the distance between the second feature points, the near/far scale of the face is computed from the change in length; and from the ratio between the first and second feature points, the rotation and pitch angles of the face are computed from the change in ratio. If the tilt, scale, rotation angle, or pitch angle of the face exceeds a preset allowance, the second face image is reacquired. As shown in FIG. 3A, the eye-corner coordinates in the first face image lie on the horizontal axis while those in the second face image are tilted upward, so the tilt of the face can be computed from the slope. As shown in FIG. 3B, from the distance D1 between the eye corners in the first face image and the distance D2 between the eye corners in the second face image, the near/far scale of the face is computed from the change in length. As shown in FIG. 3C, the corner-to-corner widths of the two eyes are in a 1:1 ratio in the first face image but become, say, 1.1:1 or 1.2:1 in the second, so the rotation angle of the face is computed from that change in ratio; likewise, the eyes and mouth lie on the horizontal axis in the first face image, and if the eye-to-mouth proportion in the second image becomes 1:1.1 relative to the first, the pitch angle of the face is computed from that change in ratio.
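The three geometric estimates can be written out directly; a minimal Python sketch, in which the dictionary keys and the use of outer eye corners are assumptions:

```python
import math

def head_pose(first_pts, second_pts):
    """Estimate tilt (degrees), near/far scale, and a yaw ratio from
    eye-corner positions in the first and second face images."""
    # Tilt: slope of the line through the two outer eye corners.
    (x1, y1), (x2, y2) = second_pts['l_outer'], second_pts['r_outer']
    tilt_deg = math.degrees(math.atan2(y2 - y1, x2 - x1))

    # Near/far scale: change in the inter-corner distance (D2 / D1).
    d1 = math.dist(first_pts['l_outer'], first_pts['r_outer'])
    d2 = math.dist(second_pts['l_outer'], second_pts['r_outer'])
    scale = d2 / d1

    # Yaw: change in the left-eye / right-eye width ratio (about 1:1
    # for a frontal face, e.g. 1.1:1 or 1.2:1 once the head turns).
    w_l = math.dist(second_pts['l_outer'], second_pts['l_inner'])
    w_r = math.dist(second_pts['r_inner'], second_pts['r_outer'])
    yaw_ratio = w_l / w_r
    return tilt_deg, scale, yaw_ratio
```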
Next, a preset glasses model is fetched from a preset glasses database and composited at the positions of the second feature points (step s107); according to the size of the face, the first feature points, and the error values, the glasses model is scaled and rotated so that it fits the face properly. The glasses database stores glasses-model data. As shown in FIG. 4, the three-dimensional spatial information computed in the preceding steps lets the glasses model be rotated intelligently into a position that matches the user's current movement; and because face sizes differ, the glasses must be scaled and rotated against the first feature-point coordinates before being composited onto the face. After this step the method may end, or continue with the next search-and-track iteration. In this embodiment the glasses model is a three-dimensional (3D) model. It may be made by using a camera (for example, a digital camera) to take plane images of a physical pair of glasses from three directions: the front, the direct left side, and the direct right side (as shown in FIGS. 5A and 5B), although the method is not limited to this. Alternatively, the physical glasses may be rotated through plus and minus 90 degrees to obtain the same three plane images. The three plane images are then combined into the 3D glasses model using image-synthesis software and hardware (as shown in FIG. 5C). Once the 3D model is complete, parameters (for example rotation, color, scale, and transparency) can be applied to adjust its position and color, or view options (for example the front view or the left and right side views) used to inspect it (as shown in FIG. 5C). This embodiment only considers making 3D frames without lenses; those skilled in the art will understand, however, that the invention can also be used to make glasses models with lenses.
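The scale-rotate-composite of step s107 can be sketched as a 2D affine overlay. This is a simplified illustration (a full 3D model pose would need more than one affine warp); the width factor and the RGBA layout of the glasses image are assumptions.

```python
import cv2
import numpy as np

def composite_glasses(face_img, glasses_rgba, left_corner, right_corner,
                      width_factor=2.2):
    """Scale and rotate a glasses image (with alpha channel) to follow
    the line between the outer eye corners, then alpha-blend it onto
    the face. width_factor is an assumed styling choice."""
    (x1, y1), (x2, y2) = left_corner, right_corner
    eye_dist = float(np.hypot(x2 - x1, y2 - y1))
    angle = float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    h, w = glasses_rgba.shape[:2]
    scale = eye_dist * width_factor / w

    M = cv2.getRotationMatrix2D((w / 2, h / 2), -angle, scale)
    M[0, 2] += (x1 + x2) / 2 - w / 2      # move the model centre to the
    M[1, 2] += (y1 + y2) / 2 - h / 2      # midpoint between the eyes
    warped = cv2.warpAffine(glasses_rgba, M,
                            (face_img.shape[1], face_img.shape[0]))

    alpha = warped[:, :, 3:4].astype(float) / 255.0
    out = warped[:, :, :3] * alpha + face_img * (1.0 - alpha)
    return out.astype(np.uint8)
```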
FIG. 6 shows a virtual contact-lens try-on interactive service method according to another embodiment of the present invention. This embodiment differs from the one above in that the eye pupils serve as feature points and are combined with the sampling pattern to obtain feature information, making it suitable for virtually trying on contact lenses; the remaining steps and computations are similar to those above, so only the differences are detailed here, and the similar steps and methods are not repeated. The 3D simulation method of this embodiment comprises: locating a face through a frame on the screen and capturing a first face image (step s601); obtaining a plurality of first feature points at the eye-pupil features of the first face image (step s602), the first feature points including the pupils of the left and right eyes and the two corners of the mouth (for example, as marked with crosses in FIG. 7A, the pupil points of the left and right eyes and the mouth-corner points may be selected manually to obtain one or more first feature points each); and sampling a plurality of pieces of first feature information around each first feature point and storing them (step s603). In this embodiment, ways of judging whether the feature-point positions conform to face logic include: whether the pupil of the left (right) eye lies in the left (right) half of the dashed frame; whether the pupil of an eye is a certain distance from the corners of that eye; and whether the heights of the pupils and the left and right mouth corners differ too much. If any one of these conditions is not met, the first feature points or the first face image must be reacquired.
Next, the second face image is captured from the next frame and the face is checked for movement (step s604); the comparison range is then searched and the feature information tracked to obtain a plurality of second feature points and thus a plurality of pieces of second feature information in the second face image (step s605), the point spacings between the first feature points being stored at the same time. As shown in FIG. 7B, the sampling method of this embodiment captures the color information of pixels adjacent to each feature point while recording the spacing between feature points; for example, eight directions radiate outward from the feature point, with seven points per direction, giving 56 pieces of color information as that point's feature information. The first and second face images are then compared to determine the position, movement state, and scale of the face, from which the positions of the second feature points are computed (step s606). For example, the pupil coordinates in the first face image lie on the horizontal axis while those in the second are tilted upward, so the tilt of the face can be computed from the slope; from the distance between the two pupils in the first face image and that in the second, the near/far scale is computed from the change in length; from the ratio of the two pupils in the first image to that in the second, the rotation angle is computed from the change in ratio; and from the proportion between the pupils and the mouth in the second image relative to that in the first, the rotation and pitch angles are computed from the change in ratio. If the tilt, scale, rotation angle, or pitch angle exceeds a preset allowance, the second face image is reacquired. Finally, a preset contact-lens model is composited at the positions of the second feature points (step s607), the lens model being scaled and rotated according to the size of the face and the first feature points so that it fits the face properly. After this step the method may end, or continue with the next search-and-track iteration.
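For the contact-lens variant, compositing centers a circular lens texture on each pupil instead of spanning the eye corners; a minimal sketch of step s607, with the iris radius and the RGBA layout of the lens texture assumed (apply once per eye):

```python
import cv2
import numpy as np

def composite_lens(face_img, lens_rgba, pupil, iris_radius):
    """Alpha-blend a contact-lens texture centred on one pupil
    (border handling omitted; iris_radius is an assumed input)."""
    d = 2 * iris_radius
    lens = cv2.resize(lens_rgba, (d, d))
    x, y = pupil
    roi = face_img[y - iris_radius:y + iris_radius,
                   x - iris_radius:x + iris_radius]
    alpha = lens[:, :, 3:4].astype(float) / 255.0
    roi[:] = (lens[:, :, :3] * alpha + roi * (1.0 - alpha)).astype(np.uint8)
    return face_img
```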
FIG. 8 shows a virtual glasses try-on interactive service system according to an embodiment of the present invention. The system comprises a capture unit 81, a processing unit 82, an analysis unit 83, and a synthesis unit 84. The capture unit 81 locates the face through a frame on the screen and captures the first face image; it may be a camera device, for example a webcam. The processing unit 82, coupled to the capture unit 81, obtains a plurality of first feature points at the eye features of the first face image, such as the corners of both eyes; samples a plurality of pieces of first feature information around each first feature point according to the sampling pattern and stores them; captures the second face image from the next frame; judges whether the face is moving; and obtains a plurality of second feature points in the second face image, from which a plurality of pieces of second feature information are obtained. The first feature points include, but are not limited to, the corners of the left and right eyes and the two corners of the mouth. The processing unit 82 also judges, from the first feature points, whether the first face image conforms to face logic, reacquiring the first face image if it does not; if it does, the processing unit 82 applies a search operation to judge whether the first feature points lie inside the frame, reacquiring the first face image if not, and proceeding if so. In the sampling pattern, the feature information is pixel color information: the processing unit 82 radiates m directions from each feature point and takes n pixels along each direction as color information, m and n being positive integers; or it radiates the m directions in a semicircle from each feature point, the semicircle covering at least one eye corner.
The analysis unit 83, coupled to the processing unit 82, compares the difference in relative position information between the first and second face images to determine the position, movement state, and scale of the face, and from that computes the positions of the second feature points. Within a time interval, the analysis unit 83 compares the face in the current frame with that in the next frame, tracks the first feature points for movement, and applies a noise filter (one of Gaussian blur, the median filter, and the mean filter) to remove noise from the next frame. The analysis unit also presets at least one comparison range in which the first feature information is compared with the second; it takes a plurality of error values between them, sorts them, and keeps the i smallest (i being a positive integer), from which the positions of the second feature points are obtained. From the positions of the first and second feature points, the analysis unit computes the tilt of the face from a slope; from the distances between the first feature points and between the second feature points, it computes the near/far scale of the face from the change in length; and from the ratio between the first and second feature points, it computes the rotation and pitch angles from the change in ratio. If the tilt, scale, rotation angle, or pitch angle exceeds a preset allowance, the second face image is reacquired. The synthesis unit 84, coupled to the analysis unit 83, fetches a preset glasses model from a preset glasses database 85 and composites it at the positions of the second feature points, scaling and rotating the model according to the size of the face and the first feature points before compositing it onto the face. With suitable modifications to its operations and steps, the virtual glasses try-on simulation system of this example can also serve as a virtual contact-lens try-on interactive service system.
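How the four units and the database of FIG. 8 might hand data to one another per frame can be sketched structurally; the class and method names below are illustrative, not from this disclosure.

```python
class TryOnSystem:
    """Illustrative wiring of capture unit 81, processing unit 82,
    analysis unit 83, synthesis unit 84, and glasses database 85."""

    def __init__(self, capture, processor, analyzer, synthesizer, database):
        self.capture = capture          # 81: webcam or other camera device
        self.processor = processor      # 82: feature points and tracking
        self.analyzer = analyzer        # 83: pose from point differences
        self.synthesizer = synthesizer  # 84: model compositing
        self.database = database        # 85: stored glasses models

    def run_frame(self, state, model_id):
        frame = self.capture.grab()
        second_pts = self.processor.track(frame, state.first_info)
        if second_pts is None:          # tracking failed: back to search state
            return state.reset()
        pose = self.analyzer.compare(state.first_pts, second_pts)
        model = self.database.load(model_id)
        self.synthesizer.blend(frame, model, second_pts, pose)
        return state.advance(second_pts)
```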
The above is only a preferred embodiment of the present invention and does not limit its scope. Without departing from the spirit and essence of the invention, those skilled in the art may make various corresponding changes and modifications, all of which fall within the protection scope of the appended claims.
Claims (41)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110192119XA CN102867321A (en) | 2011-07-05 | 2011-07-05 | Glasses virtual try-on interactive service system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102867321A true CN102867321A (en) | 2013-01-09 |
Family
ID=47446177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110192119XA Pending CN102867321A (en) | 2011-07-05 | 2011-07-05 | Glasses virtual try-on interactive service system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102867321A (en) |
- 2011-07-05: Application CN201110192119XA filed in China; published as CN102867321A (status: Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1264474A (en) * | 1997-05-16 | 2000-08-23 | 保谷株式会社 | System for making spectacles to order |
US20030123026A1 (en) * | 2000-05-18 | 2003-07-03 | Marc Abitbol | Spectacles fitting system and fitting methods useful therein |
EP1231569A1 (en) * | 2001-02-06 | 2002-08-14 | Geometrix, Inc. | Interactive three-dimensional system for trying eyeglasses |
CN101339606A (en) * | 2008-08-14 | 2009-01-07 | 北京中星微电子有限公司 | Human face critical organ contour characteristic points positioning and tracking method and device |
CN101344971A (en) * | 2008-08-26 | 2009-01-14 | 陈玮 | Internet three-dimensional human body head portrait spectacles try-in method |
JP2010072910A (en) * | 2008-09-18 | 2010-04-02 | Nippon Telegr & Teleph Corp <Ntt> | Device, method, and program for generating three-dimensional model of face |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413118A (en) * | 2013-07-18 | 2013-11-27 | 毕胜 | On-line glasses try-on method |
CN103413118B (en) * | 2013-07-18 | 2019-02-22 | 毕胜 | Online glasses try-on method |
CN103400119A (en) * | 2013-07-31 | 2013-11-20 | 南京融图创斯信息科技有限公司 | Face recognition technology-based mixed reality spectacle interactive display method |
CN103400119B (en) * | 2013-07-31 | 2017-02-15 | 徐坚 | Face recognition technology-based mixed reality spectacle interactive display method |
CN105095841A (en) * | 2014-05-22 | 2015-11-25 | 小米科技有限责任公司 | Method and device for generating eyeglasses |
WO2015192733A1 (en) * | 2014-06-17 | 2015-12-23 | 北京京东尚科信息技术有限公司 | Virtual fitting implementation method and device |
CN104217350B (en) * | 2014-06-17 | 2017-03-22 | 北京京东尚科信息技术有限公司 | Virtual try-on realization method and device |
US10360731B2 (en) | 2014-06-17 | 2019-07-23 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method and device for implementing virtual fitting |
CN104217350A (en) * | 2014-06-17 | 2014-12-17 | 北京京东尚科信息技术有限公司 | Virtual try-on realization method and device |
WO2016011792A1 (en) * | 2014-07-25 | 2016-01-28 | 杨国煌 | Method for proportionally synthesizing image of article |
CN104299143A (en) * | 2014-10-20 | 2015-01-21 | 上海电机学院 | Virtual try-in method and device |
CN106203364A (en) * | 2016-07-14 | 2016-12-07 | 广州帕克西软件开发有限公司 | System and method is tried in a kind of 3D glasses interaction on |
CN106203364B (en) * | 2016-07-14 | 2019-05-24 | 广州帕克西软件开发有限公司 | System and method is tried in a kind of interaction of 3D glasses on |
CN106384388B (en) * | 2016-09-20 | 2019-03-12 | 福州大学 | Real-time try-on method and system for Internet glasses based on HTML5 and augmented reality technology |
CN106384388A (en) * | 2016-09-20 | 2017-02-08 | 福州大学 | Method and system for try-on of Internet glasses in real time based on HTML5 and augmented reality technology |
CN106412441A (en) * | 2016-11-04 | 2017-02-15 | 珠海市魅族科技有限公司 | Video anti-shake control method and terminal |
CN106412441B (en) * | 2016-11-04 | 2019-09-27 | 珠海市魅族科技有限公司 | A kind of video stabilization control method and terminal |
CN106530229B (en) * | 2016-11-07 | 2019-05-21 | 成都通甲优博科技有限责任公司 | The production method and display methods of virtualization processing virtual beauty pupil pupil piece afterwards |
CN106530229A (en) * | 2016-11-07 | 2017-03-22 | 成都通甲优博科技有限责任公司 | Manufacturing method and display method of post virtualization processed virtual cosmetic contact lenses |
CN106775535A (en) * | 2016-12-26 | 2017-05-31 | 温州职业技术学院 | A kind of virtual try-in device of eyeglass based on rim detection and method |
CN107330969A (en) * | 2017-06-07 | 2017-11-07 | 深圳市易尚展示股份有限公司 | Glasses virtual three-dimensional try-in method and glasses virtual three-dimensional try system on |
CN112861760A (en) * | 2017-07-25 | 2021-05-28 | 虹软科技股份有限公司 | Method and device for facial expression recognition |
CN107749084A (en) * | 2017-10-24 | 2018-03-02 | 广州增强信息科技有限公司 | A virtual try-on method and system based on image three-dimensional reconstruction technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102867321A (en) | Glasses virtual try-on interactive service system and method | |
US12106495B2 (en) | Three-dimensional stabilized 360-degree composite image capture | |
US10055851B2 (en) | Determining dimension of target object in an image using reference object | |
Park et al. | High-quality depth map upsampling and completion for RGB-D cameras | |
Itoh et al. | Interaction-free calibration for optical see-through head-mounted displays based on 3d eye localization | |
CN107852533B (en) | Three-dimensional content generating device and method for generating three-dimensional content | |
CN105407346B (en) | image segmentation method | |
CN105574921B (en) | Automated texture mapping and animation from images | |
JP2023015989A (en) | Item identification and tracking system | |
US10140513B2 (en) | Reference image slicing | |
US20150009214A1 (en) | Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis | |
US11403781B2 (en) | Methods and systems for intra-capture camera calibration | |
KR20160119176A (en) | 3-d image analyzer for determining viewing direction | |
TWI433049B (en) | Interactive service methods and systems for virtual glasses wearing | |
CN104881526B (en) | Article wearing method based on 3D and glasses try-on method | |
WO2015180659A1 (en) | Image processing method and image processing device | |
US20230052169A1 (en) | System and method for generating virtual pseudo 3d outputs from images | |
Ye et al. | Free-viewpoint video of human actors using multiple handheld kinects | |
CN111666792B (en) | Image recognition method, image acquisition and recognition method, and commodity recognition method | |
EP2946274B1 (en) | Methods and systems for creating swivel views from a handheld device | |
CN109525786A (en) | Method for processing video frequency, device, terminal device and storage medium | |
Chen et al. | Casual 6-dof: free-viewpoint panorama using a handheld 360 camera | |
EP2237227A1 (en) | Video sequence processing method and system | |
JP7326965B2 (en) | Image processing device, image processing program, and image processing method | |
CN112580463A (en) | Three-dimensional human skeleton data identification method and device |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130109 |