
CN102867321A - Glasses virtual try-on interactive service system and method - Google Patents

Glasses virtual try-on interactive service system and method

Info

Publication number
CN102867321A
CN102867321A
Authority
CN
China
Prior art keywords
feature points
glasses
feature
interactive service
try
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110192119XA
Other languages
Chinese (zh)
Inventor
吴念祖
池瑞敏
刘启能
周久善
陈伟铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KOBAYASHI OPTICAL CO Ltd
Claridy Solutions Inc
Original Assignee
KOBAYASHI OPTICAL CO Ltd
Claridy Solutions Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KOBAYASHI OPTICAL CO Ltd, Claridy Solutions Inc filed Critical KOBAYASHI OPTICAL CO Ltd
Priority to CN201110192119XA priority Critical patent/CN102867321A/en
Publication of CN102867321A publication Critical patent/CN102867321A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a glasses virtual try-on interactive service system and method that takes a real person as the service object. The method comprises the following steps: a human face is positioned by means of a frame in the picture and a corresponding first face image is obtained, from which a plurality of first feature point positions are derived; a plurality of pieces of first feature information are sampled around each first feature point; picture comparison is used to judge whether the face moves dynamically, and a second face image is obtained from the next picture; a plurality of second feature points in the second face image are obtained by combining a search-comparison range with feature-information tracking, thereby yielding positioning information for the second feature points; the position, movement state and scaling of the face are then judged from the difference between the corresponding position information of the first and second face images, and the positions of the plurality of second feature points are calculated; finally, a preset glasses model is synthesized at the positions of the second feature points.

Description

Glasses virtual try-on interactive service system and method

Technical Field

The present invention relates to a three-dimensional (3D) glasses virtual try-on interactive service system and method that works on live images of a real person, and in particular to a glasses 3D try-on interactive service system and method for use on an e-commerce interactive system.

Background Art

With the vigorous development of e-commerce, more and more consumers rely on e-commerce interactive platforms to choose the goods they like. Pictures that display products together with models, as well as electronic try-on systems and software, attract increasing consumer attention and stimulate the desire to buy. Among these, glasses try-on systems that work on live images are among the most popular: simply by using their own photograph, users can find glasses they like and that suit them from among tens of thousands of eyewear products.

However, most traditional glasses try-on systems are two-dimensional (2D): users can only view a frontal image of themselves wearing the glasses, and cannot view side images from the left or right. Moreover, traditional try-on systems cannot properly composite the glasses onto the consumer's face as the face moves and rotates, so the synthesized image often looks abrupt or unreal.

Therefore, since traditional glasses try-on systems and methods lack an effective and economical mechanism to solve these problems, a novel glasses virtual try-on interactive service system and method is urgently needed that can solve the above problems through accurate simulation and computation.

Summary of the Invention

To solve the above problems, the present invention provides a glasses virtual try-on interactive service system and method for live images which, through accurate simulation and computation, solves the problem that the glasses cannot be properly composited with the consumer's face when the face moves and rotates, which causes the synthesized image to look abrupt or unreal.

According to one embodiment, the present invention provides a glasses virtual try-on interactive service method for live images, comprising: positioning a human face through a frame in the picture and obtaining a first face image; obtaining a plurality of first feature points at the eyes of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; using picture comparison to judge whether the face moves dynamically, and obtaining a second face image from the next picture; obtaining a plurality of second feature points in the second face image by combining a search-comparison range with feature-information tracking, thereby obtaining positioning information for the second feature points; comparing the difference between the relative position information of the first face image and the second face image to judge the position, movement state and scaling of the face, and calculating therefrom the positions of the plurality of second feature points; and synthesizing a preset glasses model at the positions of the plurality of second feature points.

According to this embodiment, the present invention provides a glasses virtual try-on interactive service system for live images, comprising: an image capture unit that positions a human face through a frame in the picture and obtains a first face image; a processing unit, coupled to the image capture unit, that obtains a plurality of first feature points at the eyes of the first face image, designs a sampling pattern around each first feature point to obtain a plurality of useful pieces of first feature information and stores them, uses picture comparison to judge whether the face moves dynamically, obtains a second face image from the next picture, and obtains a plurality of second feature points in the second face image by combining a search-comparison range with feature-information tracking, thereby obtaining positioning information for the second feature points; an analysis unit, coupled to the processing unit, that compares the difference between the relative position information of the first and second face images to judge the position, movement state and scaling of the face, and calculates therefrom the positions of the plurality of second feature points; and a synthesis unit, coupled to the analysis unit, that synthesizes a preset virtual glasses model at the positions of the plurality of second feature points. The positions of dynamic third, fourth and subsequent feature points are obtained by analogy with this method.

According to another embodiment, the present invention provides a contact lens virtual try-on interactive service method, comprising: positioning a human face through a frame in the picture and obtaining a first face image; obtaining a plurality of first feature points at the eye pupils of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; and synthesizing a preset contact lens model at the positions of a plurality of second feature points. The positions of dynamic third, fourth and subsequent feature points are obtained by analogy with this method.

To explain the present invention in further depth, the following drawings, reference-numeral descriptions and detailed description of the invention are provided, in the hope of assisting the examination work.

Brief Description of the Drawings

Figs. 1A and 1B show a glasses virtual try-on interactive service method according to an embodiment of the present invention;

Figs. 2A to 2D show operations of the method of Fig. 1;

Figs. 3A to 3C show the tilt, rotation and movement angles of a human face;

Fig. 4 is a schematic diagram of glasses composited onto a human face;

Figs. 5A to 5C show a method of making three-dimensional glasses according to an embodiment of the present invention;

Figs. 6A and 6B show a contact lens virtual try-on interactive service method according to another embodiment of the present invention;

Figs. 7A and 7B show operations of the method of Fig. 6;

Fig. 8 shows a glasses virtual try-on interactive service system according to an embodiment of the present invention.

Reference Signs

Steps s101 to s107

Steps s601 to s607

81: image capture unit

82: processing unit

83: analysis unit

84: synthesis unit

85: glasses database

Detailed Description of the Embodiments

The detailed structure of this invention, and the relationships among its parts, are described with reference to the following drawings.

Fig. 1 shows a glasses virtual try-on interactive service method for live images according to an embodiment of the present invention, comprising: positioning a human face through a frame in the picture and obtaining a first face image (step s101). For example, as shown in Fig. 2A, this embodiment displays a dashed frame on the screen and asks the user to bring the front of the face into the frame, matching the size of the dashed frame and aligning both eyes with the horizontal line, so as to position the user's face; the first face image is then captured. Next, a plurality of first feature points are obtained at the eye features of the first face image, such as the eye corners on both sides (step s102). The first feature points may include, but are not limited to, the inner and outer corners of the left and right eyes and the two corners of the mouth. For example, as indicated by the crosses in Fig. 2B, in this embodiment the corner points of the left and right eyes and the two mouth-corner points of the first face image can be selected manually to obtain one or more first feature points. In another embodiment, face recognition can instead be used so that the program captures the feature points automatically. After this step (s102), this embodiment may further judge, from the plurality of first feature points, whether the first face image conforms to face logic; if not, the first face image or the first feature points are reacquired. If so, the next action is performed: a search operation is applied to judge whether the first feature points lie inside the frame; if not, the first face image or the first feature points are reacquired; if so, the next action is performed. The criteria for judging whether the feature-point positions conform to face logic include: whether the corner points of the left (right) eye lie in the left (right) half of the dashed frame; whether the inner and outer corners of the same eye are separated by a certain distance; and whether the height difference between the inner and outer corners of an eye, or between the left and right mouth corners, is too large. If any one of these conditions is not satisfied, the first feature points or the first face image must be reacquired.
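The face-logic checks above can be sketched as follows. This is a minimal illustration only: the point names, frame geometry and thresholds are assumptions, not values taken from the patent.

```python
def conforms_to_face_logic(points, frame_w, min_eye_span=20, max_dy=10):
    """Check manually selected feature points against simple face logic.

    points: dict of (x, y) tuples for 'l_outer', 'l_inner' (left-eye
    corners), 'r_inner', 'r_outer' (right-eye corners), and 'mouth_l',
    'mouth_r' (mouth corners). Thresholds are illustrative.
    """
    mid = frame_w / 2
    # Left-eye corners must lie in the left half of the dashed frame,
    # right-eye corners in the right half.
    if not (points['l_outer'][0] < mid and points['l_inner'][0] < mid):
        return False
    if not (points['r_inner'][0] > mid and points['r_outer'][0] > mid):
        return False
    # Inner and outer corners of the same eye must be a minimum distance apart.
    for a, b in (('l_outer', 'l_inner'), ('r_inner', 'r_outer')):
        if abs(points[a][0] - points[b][0]) < min_eye_span:
            return False
    # Height differences within one eye, and between the mouth corners,
    # must not be too large.
    for a, b in (('l_outer', 'l_inner'), ('r_inner', 'r_outer'),
                 ('mouth_l', 'mouth_r')):
        if abs(points[a][1] - points[b][1]) > max_dy:
            return False
    return True
```

A point set failing any single check triggers reacquisition, matching the "if any one condition is not satisfied" rule.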

Next, a sampling pattern is designed around each first feature point to obtain a plurality of useful pieces of first feature information, which are stored (step s103); the point spacings between the first feature points are stored at the same time. More specifically, the feature information is pixel color information: from each feature point, m directions radiate outward, and n pixels are taken in each direction as the color information, where m and n are positive integers. Alternatively, the pixel color information radiates from each feature point in m directions over a semicircle, with n pixels taken in each direction, where m and n are positive integers and the semicircle covers at least one eye corner. As shown in Fig. 2C, this embodiment uses the sampling pattern to capture the color information of pixels adjacent to each feature point, while recording the point spacing between feature points. For example, each feature point extends outward in 8 directions in a semicircular radial pattern, and 7 points are taken in each direction, giving 56 pieces of color information as the feature information of that point, with the semicircle completely covering at least one eye corner. Alternatively, as shown in Fig. 2D, in another embodiment the sampling pattern extends outward from the feature point in 8 full radial directions, again taking 7 points per direction for a total of 56 pieces of color information.
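The radial sampling pattern (m = 8 directions, n = 7 points per direction, 56 samples) can be sketched as follows. The step size between samples and the orientation of the semicircle are assumptions for illustration; the patent does not fix them.

```python
import math

def sampling_offsets(m=8, n=7, step=1.0, semicircle=False):
    """Return integer pixel offsets radiating from a feature point.

    m directions, n points per direction (m*n samples total). If
    semicircle is True, the m directions span only 180 degrees (e.g.
    the half-plane covering an eye corner); otherwise 360 degrees.
    """
    span = math.pi if semicircle else 2 * math.pi
    offsets = []
    for d in range(m):
        angle = span * d / m
        for k in range(1, n + 1):
            dx = round(k * step * math.cos(angle))
            dy = round(k * step * math.sin(angle))
            offsets.append((dx, dy))
    return offsets

def sample_feature_info(image, cx, cy, offsets):
    """Collect the color values at the offset positions around (cx, cy);
    positions outside the image are clamped to the border."""
    h, w = len(image), len(image[0])
    return [image[min(max(cy + dy, 0), h - 1)][min(max(cx + dx, 0), w - 1)]
            for dx, dy in offsets]
```

With the defaults, `sampling_offsets()` yields the 8 × 7 = 56 sample positions described in the text; storing their values per feature point gives the first feature information.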

Next, the second face image is obtained from the next picture, and picture comparison is used to judge whether the face moves dynamically (step s104): within a time interval, the face in the current picture is compared with that in the next picture, and the first feature points are dynamically tracked for movement trails. For example, in this embodiment the pixels of the picture and the next picture are subtracted to find objects that moved within the time interval; if there are obvious movement traces near the feature points, the face moved during this period, and the subsequent steps and computations are performed. Conversely, if the face did not move, no further tracking is performed. In another embodiment, moving objects within the time interval can instead be found by subtracting a background image from the picture. Alternatively, the number of white points (moving points) in the pixel difference between successive pictures determines the degree of movement in the current image: if there are many white points, the face is moving and the subsequent steps are performed; if there are no white points near the feature points, the face shows no obvious movement trace and no further tracking is performed.
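The frame-subtraction test, counting white (changed) points near a feature point, can be sketched like this. The difference threshold, neighborhood radius and white-point count are assumed values for illustration.

```python
def moved_near_feature(prev, curr, fx, fy, radius=3, threshold=30, min_white=4):
    """Subtract two grayscale frames (lists of lists of ints) and count
    changed pixels within `radius` of the feature point (fx, fy).

    A pixel whose absolute difference exceeds `threshold` counts as a
    white (moving) point; the face is judged to be moving near the
    feature point when at least `min_white` such points are found.
    """
    h, w = len(prev), len(prev[0])
    white = 0
    for y in range(max(0, fy - radius), min(h, fy + radius + 1)):
        for x in range(max(0, fx - radius), min(w, fx + radius + 1)):
            if abs(curr[y][x] - prev[y][x]) > threshold:
                white += 1
    return white >= min_white
```

When this returns False for every feature point, the later tracking steps are skipped, as the text specifies.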

Next, a noise filtering method is applied to filter out noise in the next picture, to avoid noise interference that would raise the error rate during the subsequent comparison; the noise filtering method is one of the Gaussian blur, median and mean methods.
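Of the three filters named (Gaussian blur, median, mean), the mean filter is the simplest to sketch. A 3x3 pure-Python version, for illustration only, with border pixels clamped:

```python
def mean_filter(image):
    """3x3 mean (box) filter over a grayscale image given as a list of
    lists of ints; out-of-bounds neighbors are clamped to the border."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += image[yy][xx]
            out[y][x] = total // 9
    return out
```

A median filter would replace the averaging with `sorted(window)[4]`; a Gaussian blur would weight the window by a Gaussian kernel instead of uniformly.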

When movement traces are present as described above, the search-comparison range is further combined with feature-information tracking to obtain a plurality of second feature points in the second face image, and thereby a plurality of pieces of second feature information (step s105); the difference between the relative position information of the first and second face images is then compared to judge the position, movement state and scaling of the face, and the positions of the second feature points are calculated (step s106). At least one comparison range is preset for comparing the first feature information with the second feature information; a plurality of error values between them are taken and sorted, the i smallest error values are kept, and from these the positions of the second feature points are obtained. In this embodiment there are two states with different decision rules. (1) Search state: this state is entered when detection first starts after the feature points have just been selected, or when tracking fails. Here the comparison range is static, restricted to the neighborhood of the feature points the user originally selected; in other words, the user must face the dashed frame for the face to fall within the comparison range. (2) Tracking state: when the feature points were successfully matched in the previous picture, the system is in the tracking state. Here the comparison region is the neighborhood of the feature points matched in the previous picture, i.e. the region is dynamic and moves with the currently tracked feature points. Within the comparison range, N pixels are taken according to the designed sampling pattern; for each pixel, its 56 neighboring points are obtained by the sampling pattern, and the RGB and YCbCr color information of those points is compared with the first feature information recorded at the start, giving error values Error 1 to Error N. These N values are sorted; the i pixels with the smallest errors, for example 10, are taken as candidate points; a further clustering step removes outliers; and finally the coordinates of the remaining concentrated pixels are averaged, the result being the final tracking result. If all the error values found among the N points are too large, the user has moved beyond the tracking range, or an occluder has appeared in front of the feature points; tracking is then judged to have failed and no further computation is performed.
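The candidate-matching step (compute an error per candidate pixel, sort, keep the i best, discard outliers, then average) can be sketched as follows. The error metric and the outlier rule here are simplified assumptions; the patent compares RGB and YCbCr samples, and does not specify the clustering method.

```python
def track_feature(stored_info, candidates, i=10, fail_threshold=1000.0):
    """stored_info: the first feature information (list of color values).
    candidates: list of (x, y, info) tuples, where info is the sample
    taken around (x, y) with the same sampling pattern.

    Returns the averaged (x, y) of the best candidates, or None when
    every error exceeds fail_threshold (tracking failure: the user
    moved out of range or the point is occluded).
    """
    def error(info):
        # Sum of absolute differences against the stored feature info.
        return sum(abs(a - b) for a, b in zip(stored_info, info))

    scored = sorted((error(info), x, y) for x, y, info in candidates)
    best = scored[:i]
    if all(e > fail_threshold for e, _, _ in best):
        return None  # tracking failed; skip further computation
    # Crude outlier removal: keep candidates near the median position.
    xs = sorted(x for _, x, _ in best)
    ys = sorted(y for _, _, y in best)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]
    kept = [(x, y) for _, x, y in best if abs(x - mx) <= 5 and abs(y - my) <= 5]
    return (sum(x for x, _ in kept) / len(kept),
            sum(y for _, y in kept) / len(kept))
```

In the search state the candidate pixels come from the static neighborhood of the originally selected points; in the tracking state they come from the neighborhood of the previous frame's result.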

In addition, this embodiment can perform geometric calculations. For example, the tilt of the face is calculated from the slope between the positions of the first and second feature points; the near/far ratio of the face is calculated from the change in length between the distances among the first feature points and among the second feature points; and the rotation angle and pitch angle of the face are calculated from the change in the ratios between the first and second feature points. If the tilt, near/far ratio, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired. As shown in Fig. 3A, the coordinates of the eye-corner points of both eyes lie on the horizontal axis in the first face image, while in the second face image they are tilted upward, so the tilt of the face can be calculated from the slope. As shown in Fig. 3B, from the distance D1 between the eye corners in the first face image and the distance D2 between the eye corners in the second face image, the near/far ratio of the face is calculated from the change in length. As shown in Fig. 3C, if the ratio between the two sides of the eyes is 1:1 in the first face image but 1.1:1 or 1.2:1 in the second face image, the rotation angle of the face is calculated from the change in ratio; and if the eyes and mouth lie on the horizontal axis in the first face image while the eye-to-mouth ratio in the second face image becomes 1:1.1 relative to the first, the pitch angle of the face is calculated from the change in ratio.
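The geometric quantities of Figs. 3A to 3C (tilt from the slope of the eye-corner line, near/far scale from the change in inter-corner distance, rotation from the change in eye-span ratio) can be sketched as below. The formulas are direct readings of the text; since the patent does not specify how a ratio maps to an angle, the rotation sketch returns the raw ratio change rather than an angle.

```python
import math

def face_tilt_deg(left_corner, right_corner):
    """Tilt of the face from the slope of the line joining the outer
    eye corners (Fig. 3A); 0 when the corners are level."""
    dx = right_corner[0] - left_corner[0]
    dy = right_corner[1] - left_corner[1]
    return math.degrees(math.atan2(dy, dx))

def face_scale(d1, d2):
    """Near/far scale from the change in eye-corner distance (Fig. 3B):
    d1 from the first image, d2 from the second. > 1 means the face
    moved closer, < 1 farther away."""
    return d2 / d1

def rotation_ratio(left_span_1, right_span_1, left_span_2, right_span_2):
    """Change in the left-eye-span : right-eye-span ratio (Fig. 3C).
    1.0 means no rotation; values such as 1.1 or 1.2 indicate the face
    has turned so that one eye appears foreshortened."""
    return (left_span_2 / right_span_2) / (left_span_1 / right_span_1)
```

Each quantity is checked against its preset allowable value; exceeding any of them triggers reacquisition of the second face image.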

Next, a preset glasses model is extracted from a preset glasses database and composited at the positions of the second feature points (step s107); the glasses model is scaled and rotated according to the size of the face, the first feature points and the error values, so as to be composited onto the face appropriately; the glasses database stores the data of the glasses models. As shown in Fig. 4, with the three-dimensional spatial information calculated in the above steps, the glasses model can be intelligently rotated to a position matching the user's current movement; and since faces differ in size, the glasses must be scaled and rotated according to the first feature point coordinates before being composited onto the face. After this step the method can end, or continue with the next search-and-track step. In this embodiment the glasses model is a three-dimensional (3D) model. It can be made by capturing, with a camera device (for example, a digital camera), planar images of a physical pair of glasses from three directions: the front, the direct left side and the direct right side (as shown in Figs. 5A and 5B), though the method is not limited to this. Alternatively, the physical glasses can be rotated by plus and minus 90 degrees to obtain the three planar images of the front, left side and right side. The three planar images are then combined into the 3D glasses model using image-synthesis software and hardware (as shown in Fig. 5C). Once the 3D glasses model is complete, various parameters (for example rotation, color, scale and transparency parameters) can be applied to adjust the position and color of the 3D glasses, or options (for example selecting the front view or the left and right side views) can be applied to view the 3D glasses model (as shown in Fig. 5C). This embodiment considers only the making of a 3D frame without lenses; however, those skilled in the art will understand that the invention can also be used to make glasses models with lenses.

Fig. 6 shows a contact lens virtual try-on interactive service method according to another embodiment of the present invention. This embodiment differs from the previous one in that the eye pupils are used as the feature points, combined with the sampling pattern, to obtain the feature information, and it is suitable for virtual try-on of contact lenses; the remaining methods and computations are similar to the previous embodiment, so only the differences are detailed here, and the similar steps and methods are not repeated. The 3D simulation method of this embodiment comprises: positioning a human face through a frame in the picture and obtaining a first face image (step s601). A plurality of first feature points are obtained at the eye-pupil features of the first face image (step s602); they include the pupils of the left and right eyes and the two mouth corners. For example, as indicated by the crosses in Fig. 7A, in this embodiment the pupil points of the left and right eyes and the two mouth-corner points of the first face image are selected manually to obtain one or more first feature points. A plurality of pieces of first feature information are sampled around each first feature point and stored (step s603). In this embodiment, the criteria for judging whether the feature-point positions conform to face logic include: whether the pupil of the left (right) eye lies in the left (right) half of the dashed frame; whether the pupil of an eye is separated from the eye corners on either side by a certain distance; and whether the height difference between the pupils and the left and right mouth corners is too large. If any one of these conditions is not satisfied, the first feature points or the first face image must be reacquired.

接着,取得下一个画面中该第二人脸影像,并判断该人脸是否有动态移动(步骤s604),然后,搜寻对比范围并追踪特征信息,进而取得该第二人脸影像内多个第二特征点,进而取得多个第二特征信息(步骤s605),且特征包括同时储存该多个第一特征点之间的点间距。如图7B所示,在本具体实施例取样的方法抓取特征点的邻近像素色彩信息,同时记录各特征点之间的点间距,例如,由特征点来辐射扩散的方式向外延伸8个方向,并分别取7个点共56个色彩信息当作此特征点的特征信息。对比该第一人脸影像与该第二人脸影像,来判断该人脸的位置、移动状态与缩放比例,进而计算出该多个第二特征点的位置(步骤s606),例如,根据该第一人脸影像中瞳孔点的坐标位于水平轴,而该第二人脸影像瞳孔点坐标则为上仰角度,因此,可以用斜率计算该人脸的倾斜度。例如,根据该第一人脸影像中两眼瞳孔间的距离与该第二人脸影像中两眼瞳孔间的距离,以长度变化计算该人脸的远近比例。例如,根据该第一人脸影像中两眼瞳孔与该第二人脸影像中两眼瞳孔的比例,来比例变化计算该人脸的旋转角度以及根据该第一人脸影像中两眼瞳孔与嘴巴与该第二人脸影像中两眼瞳孔与嘴巴之间的比例,相比较该第一人脸影像中两眼与嘴巴的比例,来比例变化计算该人脸的旋转角度与俯仰角度。如果该人脸的倾斜度、远近比例、旋转角度或俯仰角度超过一预设允许值,则重新取得该第二人脸影像。以及将预设的隐形眼镜模型合成在该多个第二特征点的位置(步骤607),且根据该人脸的尺寸与该多个第一特征点,来缩放与旋转该眼镜模型,进而合适地合成到该人脸。在此步骤之后,可结束该方法,或者是继续下一个搜寻与追踪的步骤。Next, obtain the second human face image in the next frame, and judge whether the human face has dynamic movement (step s604), then search for a comparison range and track feature information, and then obtain multiple first human face images in the second human face image two feature points, and then obtain a plurality of second feature information (step s605), and the feature includes storing the point distance between the plurality of first feature points at the same time. As shown in Figure 7B, the sampling method in this specific embodiment captures the color information of adjacent pixels of the feature points, and records the point spacing between each feature point at the same time, for example, the way of radial diffusion from the feature points extends outward to 8 direction, and take 7 points and a total of 56 color information as the feature information of this feature point. 
The first face image is then compared with the second face image to judge the position, movement state and scaling of the face, from which the positions of the second feature points are calculated (step s606). For example, if the pupil coordinates in the first face image lie on the horizontal axis while those in the second face image are angled upward, the tilt of the face can be calculated from the slope. Similarly, the ratio between the inter-pupil distance in the first face image and that in the second gives a change in length from which the near-far scale of the face is calculated. The rotation angle of the face is calculated from the change in the ratio between the two pupils across the two images, and the rotation and pitch angles are calculated from the change in the ratios between the pupils and the mouth across the two images, compared with the eye-to-mouth ratio of the first face image. If the tilt, near-far scale, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired. A preset contact lens model is then synthesized at the positions of the second feature points (step s607), the model being scaled and rotated according to the size of the face and the first feature points so that it fits the face properly. After this step the method may end, or continue with the next search-and-track step.
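A sketch of the slope and distance-ratio computations of step s606, under assumed threshold values for deciding when the second face image should be reacquired:

```python
import math

def estimate_pose_change(p1_left, p1_right, p2_left, p2_right,
                         max_tilt_deg=25.0, max_scale=1.5):
    """Estimate face tilt and near-far scale between two frames from the
    pupil positions. The thresholds are illustrative assumptions.

    p1_*/p2_*: (x, y) pupil coordinates in the first/second face image.
    Returns (tilt_degrees, scale), or None when a preset limit is
    exceeded (signalling that the second face image should be reacquired).
    """
    # Tilt: slope of the line through the two pupils in the second frame.
    dx = p2_right[0] - p2_left[0]
    dy = p2_right[1] - p2_left[1]
    tilt = math.degrees(math.atan2(dy, dx))
    # Near-far scale: change in inter-pupil distance between the frames.
    d1 = math.dist(p1_left, p1_right)
    d2 = math.dist(p2_left, p2_right)
    scale = d2 / d1
    if abs(tilt) > max_tilt_deg or not (1 / max_scale <= scale <= max_scale):
        return None
    return tilt, scale
```

The rotation and pitch angles described above would be derived analogously from the changes in the pupil-to-pupil and pupil-to-mouth ratios.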

Fig. 8 shows a glasses virtual try-on interactive service system according to a specific embodiment of the present invention. The system comprises a capture unit 81, a processing unit 82, an analysis unit 83 and a synthesis unit 84. The capture unit 81 locates the face through the frame in the picture and obtains the first face image; it may be a photographing device such as a webcam. The processing unit 82, coupled to the capture unit 81, obtains a plurality of first feature points at the eye features of the first face image, such as the eye corners on both sides, samples a plurality of pieces of first feature information around each first feature point according to the sampling mode, and stores them. It then obtains the second face image from the next frame, judges whether the face has moved dynamically, and obtains a plurality of second feature points in the second face image, thereby obtaining a plurality of pieces of second feature information. The first feature points include, but are not limited to, the corners on both sides of the left and right eyes and both corners of the mouth.
The processing unit 82 also judges, according to the first feature points, whether the first face image conforms to face logic; if not, the first face image is reacquired. If it does, the next step proceeds: the processing unit 82 applies a search operation to judge whether the first feature points lie within the frame; if not, the first face image is reacquired, otherwise the next step proceeds. In the sampling mode above, the feature information is pixel color information: the processing unit 82 radiates m directions from each feature point and takes n pixels in each direction as color information, where m and n are positive integers. Alternatively, the processing unit 82 radiates m directions in a semicircle from each feature point and takes n pixels in each direction as color information, the semicircle covering at least one eye corner, where m and n are positive integers.
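The semicircular sampling mode can be sketched like the full-circle one, with the fan of m rays spanning half a circle from an orientation angle chosen so that it covers an eye corner; the base_angle and step parameters are assumptions:

```python
import math

def sample_semicircle(image, cx, cy, base_angle, m_dirs=8, n_points=7, step=2):
    """Semicircular variant of the sampling mode: m_dirs rays are spread
    over half a circle starting at base_angle, so the fan can be oriented
    to cover at least one eye corner.

    image: 2-D grid indexed as image[y][x] (grayscale for simplicity).
    """
    info = []
    for d in range(m_dirs):
        # Spread the rays over exactly pi radians from base_angle.
        angle = base_angle + math.pi * d / (m_dirs - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        for k in range(1, n_points + 1):
            x = int(round(cx + dx * k * step))
            y = int(round(cy + dy * k * step))
            info.append(image[y][x])
    return info
```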

The analysis unit 83, coupled to the processing unit 82, compares the difference in relative position information between the first and second face images to judge the position, movement state and scaling of the face, and thereby calculates the positions of the second feature points. Within a time interval, the analysis unit 83 compares the face in the current picture with that in the next picture, tracks whether the first feature points have moving trajectories, and applies a noise filtering method to filter out noise in the next picture; the noise filtering method is one of the Gaussian blur method, the median method and the mean method. The analysis unit also presets at least one comparison range in which to compare the first feature information with the second feature information, takes a plurality of error values between them, sorts these error values, and keeps the i smallest, from which the positions of the second feature points are obtained, i being a positive integer.
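The error-sorting comparison performed by the analysis unit can be sketched as follows; the sum-of-absolute-differences error metric is an assumption (the text above only speaks of error values):

```python
def best_matches(ref_info, candidates, i=3):
    """Compare stored first feature information against candidate samples
    taken inside the comparison range: compute one error value per
    candidate, sort the errors, and keep the i smallest, whose positions
    become the second feature point candidates.

    candidates: list of ((x, y), info) pairs sampled in the next frame.
    Returns the i positions with the smallest error, best first.
    """
    scored = []
    for pos, info in candidates:
        # Sum of absolute differences as an assumed error metric.
        err = sum(abs(a - b) for a, b in zip(ref_info, info))
        scored.append((err, pos))
    scored.sort(key=lambda t: t[0])
    return [pos for _, pos in scored[:i]]
```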
From the positions of the first and second feature points, the analysis unit calculates the tilt of the face by slope; from the distances between the first feature points and between the second feature points, it calculates the near-far scale of the face by the change in length; and from the ratio between the first and second feature points, it calculates the rotation and pitch angles of the face by the change in proportion. If the tilt, near-far scale, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired. The synthesis unit 84, coupled to the analysis unit 83, retrieves a preset glasses model from the glasses database 85 and synthesizes it at the positions of the second feature points, scaling and rotating the model according to the size of the face and the first feature points before compositing it onto the face. With appropriate modifications to its operations and steps, the glasses try-on virtual simulation system of this example is also applicable to a contact lens virtual try-on interactive service system.
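The scaling and rotation applied by the synthesis unit can be sketched as a similarity placement between two anchor points; the model_width value and the two-anchor parameterization are assumptions:

```python
import math

def place_glasses(anchor_left, anchor_right, model_width=100.0):
    """Compute the scale, rotation and position used to composite a
    glasses model between two anchor feature points (e.g. the outer eye
    corners), as the synthesis unit does. model_width is the assumed
    native width of the glasses model in its own coordinates.

    Returns (scale, angle_degrees, center).
    """
    dx = anchor_right[0] - anchor_left[0]
    dy = anchor_right[1] - anchor_left[1]
    span = math.hypot(dx, dy)
    scale = span / model_width                 # scale model to the face size
    angle = math.degrees(math.atan2(dy, dx))   # rotate model to the face tilt
    center = ((anchor_left[0] + anchor_right[0]) / 2,
              (anchor_left[1] + anchor_right[1]) / 2)
    return scale, angle, center
```

A renderer would then apply this similarity transform to the model before drawing it over the face image.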

The above is only a preferred embodiment of the present invention and does not limit its scope. Without departing from the spirit and essence of the invention, those skilled in the art may make various corresponding changes and modifications, all of which fall within the protection scope of the appended claims.

Claims (41)

1. A glasses virtual try-on interactive service method, characterized by: locating a face through a frame in a picture and obtaining a first face image; obtaining, at eye feature locations and according to a sampling mode, a plurality of first feature points and feature information, and storing the plurality of pieces of first feature information together with the point spacing between the plurality of first feature points; obtaining a second face image in the next picture by dynamic-image judgment and by searching for and tracking the feature information, and obtaining a plurality of second feature points in the second face image, thereby obtaining a plurality of pieces of second feature information; comparing the difference in relative position information between the first face image and the second face image to judge the position, movement state and scaling of the face, and thereby calculating the positions of the plurality of second feature points; and synthesizing a preset glasses model at the positions of the plurality of second feature points.

2. The glasses virtual try-on interactive service method of claim 1, wherein the plurality of first feature points include the eye corners on both sides of the left and right eyes.

3. The glasses virtual try-on interactive service method of claim 2, wherein the plurality of first feature points further include both corners of the mouth.

4. The glasses virtual try-on interactive service method of claim 1, wherein whether the first face image conforms to face logic is judged according to the plurality of first feature points, and if not, the first face image or the plurality of first feature points are reacquired.

5. The glasses virtual try-on interactive service method of claim 1, wherein a search operation is applied to judge whether the plurality of first feature points lie within the frame, and if not, the first face image or the plurality of first feature points are reacquired.

6. The glasses virtual try-on interactive service method of claim 1, wherein the feature information is pixel color information.

7. The glasses virtual try-on interactive service method of claim 6, wherein the pixel color information is obtained by radiating m directions from each feature point and taking n pixels in each direction as color information, m and n being positive integers.

8. The glasses virtual try-on interactive service method of claim 6, wherein the pixel color information is obtained by radiating m directions in a semicircle from each feature point and taking n pixels in each direction as color information, m and n being positive integers and the semicircle covering at least one eye corner.

9. The glasses virtual try-on interactive service method of claim 1, wherein, within a time interval, the face in the picture is compared with that in the next picture, and the plurality of first feature points are dynamically tracked for moving trajectories.

10. The glasses virtual try-on interactive service method of claim 1, wherein a noise filtering method is applied to filter out noise in the next picture.

11. The glasses virtual try-on interactive service method of claim 10, wherein the noise filtering method is one of the Gaussian blur method, the median method and the mean method.

12. The glasses virtual try-on interactive service method of claim 1, wherein a comparison range is preset to compare the plurality of pieces of first feature information with the plurality of pieces of second feature information, a plurality of error values between them are taken and sorted, and the i smallest error values are taken, thereby obtaining the positions of the plurality of second feature points.

13. The glasses virtual try-on interactive service method of claim 1, wherein the tilt of the face is calculated by slope from the positions of the plurality of first feature points and the plurality of second feature points.

14. The glasses virtual try-on interactive service method of claim 1, wherein the near-far scale of the face is calculated from the change in length between the distances among the plurality of first feature points and the distances among the plurality of second feature points.

15. The glasses virtual try-on interactive service method of claim 1, wherein the rotation angle and pitch angle of the face are calculated from the change in proportion between the plurality of first feature points and the plurality of second feature points.

16. The glasses virtual try-on interactive service method of claim 13, 14 or 15, wherein if the tilt, near-far scale, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired.

17. The glasses virtual try-on interactive service method of claim 1, wherein the glasses model is scaled and rotated according to the size of the face and the plurality of first feature points, and is then composited onto the face.

18. The glasses virtual try-on interactive service method of claim 1, wherein the glasses model is a three-dimensional glasses model.

19. The glasses virtual try-on interactive service method of claim 18, wherein a physical pair of glasses is photographed from at least three directions by a camera device to obtain plane images in the three directions, and the plane images in the three directions are combined to obtain the three-dimensional glasses model.

20. A glasses virtual try-on interactive service system, characterized by: a capture unit that locates a face through a frame in a picture and obtains a first face image; a processing unit, coupled to the capture unit, that obtains, at eye features and according to a sampling mode, a plurality of first feature points and feature information, stores the plurality of pieces of first feature information together with the point spacing between the plurality of first feature points, obtains a second face image in the next picture by dynamic-image judgment and by searching for and tracking the feature information, and obtains a plurality of second feature points in the second face image, thereby obtaining a plurality of pieces of second feature information; an analysis unit, coupled to the processing unit, that compares the difference in relative position information between the first face image and the second face image to judge the position, movement state and scaling of the face, and thereby calculates the positions of the plurality of second feature points; a synthesis unit, coupled to the analysis unit, that synthesizes a preset glasses model at the positions of the plurality of second feature points; and a glasses database that stores glasses model data.

21. The glasses virtual try-on interactive service system of claim 20, wherein the capture unit may be a photographing device.

22. The glasses virtual try-on interactive service system of claim 20, wherein the plurality of first feature points include the eye corners on both sides of the left and right eyes.

23. The glasses virtual try-on interactive service system of claim 22, wherein the plurality of first feature points further include both corners of the mouth.

24. The glasses virtual try-on interactive service system of claim 20, wherein the processing unit judges, according to the plurality of first feature points, whether the first face image conforms to face logic, and if not, reacquires the first face image or the plurality of first feature points.

25. The glasses virtual try-on interactive service system of claim 20, wherein the processing unit applies a search operation to judge whether the plurality of first feature points lie within the frame, and if not, reacquires the first face image or the plurality of first feature points.

26. The glasses virtual try-on interactive service system of claim 20, wherein the feature information is pixel color information, and the processing unit radiates m directions from each feature point and takes n pixels in each direction as color information, m and n being positive integers.

27. The glasses virtual try-on interactive service system of claim 20, wherein the feature information is pixel color information, and the processing unit radiates m directions in a semicircle from each feature point and takes n pixels in each direction as color information, the semicircle covering at least one eye corner, m and n being positive integers.

28. The glasses virtual try-on interactive service system of claim 20, wherein the analysis unit, within a time interval, compares the face in the picture with that in the next picture, tracks whether the plurality of first feature points have moving trajectories, and applies a noise filtering method to filter out noise in the next picture.

29. The glasses virtual try-on interactive service system of claim 28, wherein the noise filtering method is one of the Gaussian blur method, the median method and the mean method.

30. The glasses virtual try-on interactive service system of claim 20, wherein the analysis unit presets a comparison range to compare the plurality of pieces of first feature information with the plurality of pieces of second feature information, takes a plurality of error values between them, sorts the error values, and takes the i smallest, thereby obtaining the positions of the plurality of second feature points.

31. The glasses virtual try-on interactive service system of claim 20, wherein the analysis unit calculates the tilt of the face by slope from the positions of the plurality of first feature points and the plurality of second feature points, calculates the near-far scale of the face from the change in length between the distances among the plurality of first feature points and the distances among the plurality of second feature points, and calculates the rotation angle and pitch angle of the face from the change in proportion between the plurality of first feature points and the plurality of second feature points; and if the tilt, near-far scale, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired.

32. The glasses virtual try-on interactive service system of claim 20, wherein the synthesis unit scales and rotates the glasses model according to the size of the face and the plurality of first feature points, and then composites it onto the face.

33. A glasses virtual try-on interactive service method, characterized by: locating a face through a frame in a picture and obtaining a first face image; obtaining, at pupil features and according to a sampling mode, a plurality of first feature points and feature information, and storing the plurality of pieces of first feature information together with the point spacing between the plurality of first feature points; obtaining a second face image in the next picture by dynamic-image judgment and by searching for and tracking the feature information, and obtaining a plurality of second feature points in the second face image, thereby obtaining a plurality of pieces of second feature information; comparing the difference in relative position information between the first face image and the second face image to judge the position, movement state and scaling of the face, and thereby calculating the positions of the plurality of second feature points; and synthesizing a preset contact lens model at the positions of the plurality of second feature points.

34. The glasses virtual try-on interactive service method of claim 33, wherein the plurality of first feature points include both corners of the mouth.

35. The glasses virtual try-on interactive service method of claim 33, wherein whether the first face image conforms to face logic is judged according to the plurality of first feature points, and if not, the first face image or the plurality of first feature points are reacquired.

36. The glasses virtual try-on interactive service method of claim 33, wherein a search operation is applied to judge whether the plurality of first feature points lie within the frame, and if not, the first face image or the plurality of first feature points are reacquired.

37. The glasses virtual try-on interactive service method of claim 33, wherein the feature information is pixel color information obtained by radiating m directions from each feature point and taking n pixels in each direction as color information, m and n being positive integers.

38. The glasses virtual try-on interactive service method of claim 33, wherein, within a time interval, the face in the picture is compared with that in the next picture, the plurality of first feature points are dynamically tracked for moving trajectories, and a noise filtering method is applied to filter out noise in the next picture.

39. The glasses virtual try-on interactive service method of claim 33, wherein a comparison range is preset to compare the plurality of pieces of first feature information with the plurality of pieces of second feature information, a plurality of error values between them are taken and sorted, and the i smallest error values are taken, thereby obtaining the positions of the plurality of second feature points.

40. The glasses virtual try-on interactive service method of claim 33, wherein the tilt of the face is calculated by slope from the positions of the plurality of first feature points and the plurality of second feature points, the near-far scale of the face is calculated from the change in length between the distances among them, and the rotation angle and pitch angle of the face are calculated from the change in proportion between them; and if the tilt, near-far scale, rotation angle or pitch angle of the face exceeds a preset allowable value, the second face image is reacquired.

41. The glasses virtual try-on interactive service method of claim 33, wherein the glasses model is scaled and rotated according to the size of the face and the plurality of first feature points, and is then composited onto the face.
CN201110192119XA 2011-07-05 2011-07-05 Glasses virtual try-on interactive service system and method Pending CN102867321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110192119XA CN102867321A (en) 2011-07-05 2011-07-05 Glasses virtual try-on interactive service system and method


Publications (1)

Publication Number Publication Date
CN102867321A true CN102867321A (en) 2013-01-09

Family

ID=47446177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110192119XA Pending CN102867321A (en) 2011-07-05 2011-07-05 Glasses virtual try-on interactive service system and method

Country Status (1)

Country Link
CN (1) CN102867321A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN103413118A (en) * 2013-07-18 2013-11-27 毕胜 On-line glasses try-on method
CN104217350A (en) * 2014-06-17 2014-12-17 北京京东尚科信息技术有限公司 Virtual try-on realization method and device
CN104299143A (en) * 2014-10-20 2015-01-21 上海电机学院 Virtual try-in method and device
CN105095841A (en) * 2014-05-22 2015-11-25 小米科技有限责任公司 Method and device for generating eyeglasses
WO2016011792A1 (en) * 2014-07-25 2016-01-28 杨国煌 Method for proportionally synthesizing image of article
CN106203364A (en) * 2016-07-14 2016-12-07 广州帕克西软件开发有限公司 System and method is tried in a kind of 3D glasses interaction on
CN106384388A (en) * 2016-09-20 2017-02-08 福州大学 Method and system for try-on of Internet glasses in real time based on HTML5 and augmented reality technology
CN106412441A (en) * 2016-11-04 2017-02-15 珠海市魅族科技有限公司 Video anti-shake control method and terminal
CN106530229A (en) * 2016-11-07 2017-03-22 成都通甲优博科技有限责任公司 Manufacturing method and display method of post virtualization processed virtual cosmetic contact lenses
CN106775535A (en) * 2016-12-26 2017-05-31 温州职业技术学院 A kind of virtual try-in device of eyeglass based on rim detection and method
CN107330969A (en) * 2017-06-07 2017-11-07 深圳市易尚展示股份有限公司 Glasses virtual three-dimensional try-in method and glasses virtual three-dimensional try system on
CN107749084A (en) * 2017-10-24 2018-03-02 广州增强信息科技有限公司 A virtual try-on method and system based on image three-dimensional reconstruction technology
CN112861760A (en) * 2017-07-25 2021-05-28 虹软科技股份有限公司 Method and device for facial expression recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1264474A (en) * 1997-05-16 2000-08-23 保谷株式会社 System for making spectacles to order
EP1231569A1 (en) * 2001-02-06 2002-08-14 Geometrix, Inc. Interactive three-dimensional system for trying eyeglasses
US20030123026A1 (en) * 2000-05-18 2003-07-03 Marc Abitbol Spectacles fitting system and fitting methods useful therein
CN101339606A (en) * 2008-08-14 2009-01-07 北京中星微电子有限公司 Human face critical organ contour characteristic points positioning and tracking method and device
CN101344971A (en) * 2008-08-26 2009-01-14 陈玮 Internet three-dimensional human body head portrait spectacles try-in method
JP2010072910A (en) * 2008-09-18 2010-04-02 Nippon Telegr & Teleph Corp <Ntt> Device, method, and program for generating three-dimensional model of face


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413118A (en) * 2013-07-18 2013-11-27 毕胜 Online glasses try-on method
CN103413118B (en) * 2013-07-18 2019-02-22 毕胜 Online glasses try-on method
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN103400119B (en) * 2013-07-31 2017-02-15 徐坚 Face recognition technology-based mixed reality spectacle interactive display method
CN105095841A (en) * 2014-05-22 2015-11-25 小米科技有限责任公司 Method and device for generating eyeglasses
WO2015192733A1 (en) * 2014-06-17 2015-12-23 北京京东尚科信息技术有限公司 Virtual fitting implementation method and device
CN104217350B (en) * 2014-06-17 2017-03-22 北京京东尚科信息技术有限公司 Virtual try-on implementation method and device
US10360731B2 (en) 2014-06-17 2019-07-23 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for implementing virtual fitting
CN104217350A (en) * 2014-06-17 2014-12-17 北京京东尚科信息技术有限公司 Virtual try-on implementation method and device
WO2016011792A1 (en) * 2014-07-25 2016-01-28 杨国煌 Method for proportionally synthesizing image of article
CN104299143A (en) * 2014-10-20 2015-01-21 上海电机学院 Virtual try-in method and device
CN106203364A (en) * 2016-07-14 2016-12-07 广州帕克西软件开发有限公司 Interactive 3D glasses try-on system and method
CN106203364B (en) * 2016-07-14 2019-05-24 广州帕克西软件开发有限公司 Interactive 3D glasses try-on system and method
CN106384388B (en) * 2016-09-20 2019-03-12 福州大学 Real-time try-on method and system for Internet glasses based on HTML5 and augmented reality technology
CN106384388A (en) * 2016-09-20 2017-02-08 福州大学 Real-time Internet glasses try-on method and system based on HTML5 and augmented reality technology
CN106412441A (en) * 2016-11-04 2017-02-15 珠海市魅族科技有限公司 Video anti-shake control method and terminal
CN106412441B (en) * 2016-11-04 2019-09-27 珠海市魅族科技有限公司 Video stabilization control method and terminal
CN106530229B (en) * 2016-11-07 2019-05-21 成都通甲优博科技有限责任公司 Manufacturing method and display method of post-blur-processed virtual cosmetic contact lenses
CN106530229A (en) * 2016-11-07 2017-03-22 成都通甲优博科技有限责任公司 Manufacturing method and display method of post-blur-processed virtual cosmetic contact lenses
CN106775535A (en) * 2016-12-26 2017-05-31 温州职业技术学院 Eyeglass virtual try-on device and method based on edge detection
CN107330969A (en) * 2017-06-07 2017-11-07 深圳市易尚展示股份有限公司 Virtual three-dimensional glasses try-on method and system
CN112861760A (en) * 2017-07-25 2021-05-28 虹软科技股份有限公司 Method and device for facial expression recognition
CN107749084A (en) * 2017-10-24 2018-03-02 广州增强信息科技有限公司 A virtual try-on method and system based on image three-dimensional reconstruction technology

Similar Documents

Publication Publication Date Title
CN102867321A (en) Glasses virtual try-on interactive service system and method
US12106495B2 (en) Three-dimensional stabilized 360-degree composite image capture
US10055851B2 (en) Determining dimension of target object in an image using reference object
Park et al. High-quality depth map upsampling and completion for RGB-D cameras
Itoh et al. Interaction-free calibration for optical see-through head-mounted displays based on 3d eye localization
CN107852533B (en) Three-dimensional content generating device and method for generating three-dimensional content
CN105407346B (en) Image segmentation method
CN105574921B (en) Automated texture mapping and animation from images
JP2023015989A (en) Item identification and tracking system
US10140513B2 (en) Reference image slicing
US20150009214A1 (en) Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis
US11403781B2 (en) Methods and systems for intra-capture camera calibration
KR20160119176A (en) 3-d image analyzer for determining viewing direction
TWI433049B (en) Interactive service methods and systems for virtual glasses wearing
CN104881526B (en) 3D-based article wearing method and glasses try-on method
WO2015180659A1 (en) Image processing method and image processing device
US20230052169A1 (en) System and method for generating virtual pseudo 3d outputs from images
Ye et al. Free-viewpoint video of human actors using multiple handheld kinects
CN111666792B (en) Image recognition method, image acquisition and recognition method, and commodity recognition method
EP2946274B1 (en) Methods and systems for creating swivel views from a handheld device
CN109525786A (en) Method for processing video frequency, device, terminal device and storage medium
Chen et al. Casual 6-dof: free-viewpoint panorama using a handheld 360 camera
EP2237227A1 (en) Video sequence processing method and system
JP7326965B2 (en) Image processing device, image processing program, and image processing method
CN112580463A (en) Three-dimensional human skeleton data identification method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130109