
CN101751118B - Object endpoint positioning method and application system - Google Patents


Info

Publication number: CN101751118B (application CN2008101856396A)
Authority: CN (China)
Prior art keywords: points, those selected, raw video, end points, concave
Legal status: Expired - Fee Related
Application number: CN2008101856396A
Other languages: Chinese (zh)
Other versions: CN101751118A
Inventors: 王科翔, 陈柏戎, 李家昶, 郭建春
Current assignee: Industrial Technology Research Institute (ITRI)
Original assignee: Industrial Technology Research Institute (ITRI)
Events: application filed by Industrial Technology Research Institute (ITRI); priority to CN2008101856396A; publication of CN101751118A; application granted; publication of CN101751118B; anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract


An object endpoint positioning method is provided for locating the end positions of two limbs of an object. In this method, foreground processing is performed on an obtained original image to produce a foreground image corresponding to the object's contour in the original image. A plurality of turning points are obtained from the foreground image; connecting the turning points forms a polygonal curve close to the object's contour. Each turning point is classified as a convex point or a concave point according to its angle, and several selected convex points and selected concave points are chosen. Two of the selected convex points are chosen as two tentative endpoints, such that the lines connecting the two tentative endpoints and one selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Two positioning endpoints are then determined from the two tentative endpoints to locate the end positions of the two limbs of the object.


Description

Object endpoint positioning method and application system

Technical Field

The present invention relates to an object endpoint positioning method and system, and more particularly to an object endpoint positioning method and system for locating the end positions of two limbs of an object.

Background Art

A human-computer interaction interface allows a person to interact with a computer. Generally, such an interface includes a keyboard, a mouse, or a touch panel; the person operating it is called the user. Through the interface, the user can control or interact with the computer.

U.S. Patent No. 5,524,637 discloses an "interactive system for measuring physiological exertion," in which an accelerometer is worn on each of the user's feet and the user stands on a pressure-sensor board. The system thereby determines the force exerted by the user's feet and their acceleration, so the user can interact with the system through foot force and movement speed.

More recently, researchers have applied image-processing techniques to increase interactivity between user and computer. U.S. Patent No. 6,308,565 discloses a "system and method for tracking and assessing movement skills in multidimensional space," in which active or passive markers are attached to the user's feet and image processing detects the markers' motion to determine the coordinates and movement state of the user's feet in space. The user thus interacts with the system by moving the feet.

Although many human-computer interaction interfaces have been proposed, in the embodiments above the interface can determine the end positions of the user's feet only if the user wears special devices or clothing (such as the accelerometer and markers mentioned above). This is inconvenient and may reduce the user's willingness to use the system. How to locate the end positions of the user's feet without inconveniencing the user therefore remains an open problem in the industry.

Summary of the Invention

The object of the present invention is to provide an object endpoint positioning method and system that can locate the end positions of two limbs of an object, such as the two feet or two fingers of a human body. With this method and system, the user need not wear any special device or clothing, which improves convenience of use.

To achieve the above object, according to a first aspect of the present invention, an object endpoint positioning method is provided for locating the end positions of two limbs of an object. The method includes the following steps. First, an original image containing image information of the object is obtained. Next, foreground processing is performed on the original image to obtain a foreground image corresponding to the object's contour. Then, a plurality of turning points are obtained from the foreground image; connecting the turning points forms a polygonal curve substantially similar to the object's contour in the original image. Afterwards, according to the angle formed at each turning point with its two adjacent turning points, a number of convex points and concave points are determined among the turning points, and several selected convex points and selected concave points are chosen along a predetermined direction. Next, two of the selected convex points are chosen as two tentative endpoints, such that the lines connecting the two tentative endpoints and one selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Finally, two positioning endpoints are determined from the two tentative endpoints to locate the end positions of the two limbs of the object.

According to a second aspect of the present invention, an object endpoint positioning system is provided for locating the end positions of two limbs of an object. The system includes a capture unit, a processing unit, a matching unit, and a positioning unit. The capture unit obtains an original image containing image information of the object. The processing unit performs foreground processing on the original image to obtain a foreground image corresponding to the object's contour, and further obtains a plurality of turning points from the foreground image, whose connections form a polygonal curve substantially similar to the object's contour in the original image. According to the angle formed at each turning point with its two adjacent turning points, the processing unit determines a number of convex points and concave points among the turning points and chooses several selected convex points and selected concave points along a predetermined direction. The matching unit chooses two of the selected convex points as two tentative endpoints, such that the lines connecting the two tentative endpoints and one selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. The positioning unit determines two positioning endpoints from the two tentative endpoints to locate the end positions of the two limbs of the object.

Brief Description of the Drawings

FIG. 1 is a flowchart of an object endpoint positioning method according to an embodiment of the present invention.

FIG. 2 is a block diagram of an object endpoint positioning system applying the object endpoint positioning method of FIG. 1.

FIGS. 3-7 each illustrate an example of the various images produced by the object endpoint positioning system while executing the object endpoint positioning method.

Description of Reference Numerals in the Drawings

200: object endpoint positioning system; 210: capture unit; 220: processing unit; 230: matching unit; 240: positioning unit; 250: tracking unit; a1-a4: selected convex points; b1, b2: selected concave points; c1-cn: turning points; D1: predetermined direction; F2: contour; F3: polygonal curve; Im1: original image; Im2: foreground image; Px, Py: positioning endpoints; S110-S160: process steps; t1, t2: tentative endpoints; t1', t2': tracking endpoints.

Detailed Description of the Embodiments

To make the above content of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

Please refer to FIG. 1, a flowchart of an object endpoint positioning method according to an embodiment of the present invention. The method locates the end positions of two limbs of an object and includes the following steps.

First, in step S110, an original image containing image information of the object is obtained. Next, in step S120, foreground processing is performed on the original image to obtain a foreground image corresponding to the object's contour.

Then, in step S130, a plurality of turning points are obtained from the foreground image; connecting them forms a polygonal curve substantially similar to the object's contour in the original image. In step S140, according to the angle formed at each turning point with its two adjacent turning points, a number of convex points and concave points are determined among the turning points, and several selected convex points and selected concave points are chosen along a predetermined direction.

Next, in step S150, two of the selected convex points are chosen as two tentative endpoints, such that the lines connecting the two tentative endpoints and one selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Finally, in step S160, two positioning endpoints are determined from the two tentative endpoints to locate the end positions of the two limbs.

The following describes in detail an object endpoint positioning system that applies the method of FIG. 1. Please refer to FIG. 2 together with FIGS. 3-7. FIG. 2 is a block diagram of an object endpoint positioning system 200 applying the method of FIG. 1; FIGS. 3-7 illustrate examples of the various images the system 200 produces while executing the method.

The object endpoint positioning system 200 can locate the end positions Ft of a human body's feet F, as shown in FIG. 3. The system 200 includes a capture unit 210, a processing unit 220, a matching unit 230, a positioning unit 240, and a tracking unit 250.

The capture unit 210 obtains an original image Im1. As shown in FIG. 3, the original image Im1 contains image information corresponding to the feet F of the human body.

The processing unit 220 performs foreground processing on the original image Im1 to obtain a foreground image, for example by performing edge detection on Im1. The processing unit 220 thus obtains a foreground image Im2 carrying the edge information of Im1, and Im2 contains the contour F2 of the feet, as shown in FIG. 4.

The foreground image Im2 obtained this way usually contains the contours of several scene objects, such as the contour F2 of the feet and the contours A and B of other objects. The processing unit 220 therefore filters Im2 to keep only the feet's contour F2. In practice, since the feet's contour F2 is usually the region with the largest area, the processing unit 220 can find and keep F2 by comparing the areas of the regions enclosed by F2, A, and B.
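The patent does not give an implementation for this filtering step. As a minimal sketch, assuming each contour is available as a list of (x, y) vertices, the largest-area region can be found with the shoelace formula:

```python
def polygon_area(points):
    """Magnitude of the signed (shoelace) area of a closed polygon."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def keep_largest_contour(contours):
    """Return the contour enclosing the largest area (assumed to be the feet F2)."""
    return max(contours, key=polygon_area)
```

In an OpenCV-based implementation, `cv2.findContours` and `cv2.contourArea` would serve the same role.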

Next, as shown in FIG. 5, the processing unit 220 obtains a plurality of turning points c1-cn from the foreground image Im2. Connecting the turning points c1-cn forms a polygonal curve F3 whose shape is substantially similar to the feet's contour F2.
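The patent does not name an algorithm for reducing the contour to turning points; one common choice for approximating a curve by a polygon is Ramer-Douglas-Peucker, sketched here under that assumption:

```python
def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, eps):
    """Ramer-Douglas-Peucker: keep only vertices deviating more than eps."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], eps)
    right = simplify(points[idx:], eps)
    return left[:-1] + right
```

The surviving vertices play the role of the turning points c1-cn; `cv2.approxPolyDP` implements the same idea.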

The processing unit 220 then determines a number of convex points and concave points among the turning points c1-cn according to the angle formed at each turning point with its two adjacent turning points.

Convex and concave points may, for example, be defined as follows: a turning point whose angle is between 0 and 120 degrees is a convex point, and a turning point whose angle exceeds 240 degrees is a concave point, where the angle is the interior angle of the polygonal curve F3. As shown in FIG. 5, the angle formed at turning point c2 with its adjacent turning points c1 and c3 meets the definition of a convex point, while the angle formed at c3 with its adjacent turning points c2 and c4 meets the definition of a concave point.
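Using the thresholds above (0-120 degrees convex, over 240 degrees concave), the classification of a turning point can be sketched as follows, assuming the polygon vertices are listed counter-clockwise so the interior angle is well defined:

```python
import math

def interior_angle(prev, cur, nxt):
    """Interior angle at `cur` in degrees, for a counter-clockwise polygon."""
    ax, ay = prev[0] - cur[0], prev[1] - cur[1]
    bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
    ang = math.degrees(math.atan2(ay, ax) - math.atan2(by, bx))
    return ang % 360.0

def classify(prev, cur, nxt):
    """'convex' for 0-120 degrees, 'concave' for >240 degrees, else None."""
    ang = interior_angle(prev, cur, nxt)
    if 0 < ang < 120:
        return "convex"
    if ang > 240:
        return "concave"
    return None
```

Turning points whose angle falls between 120 and 240 degrees match neither definition and are simply not selected.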

After determining the convex and concave points, the processing unit 220, for example, selects four selected convex points a1-a4 and two selected concave points b1 and b2 along a predetermined direction. As shown in FIG. 6, the predetermined direction is, for example, the direction D1 of the ends of the feet F relative to the tops of the feet F.

Next, the matching unit 230 chooses two of the selected convex points a1-a4 as two tentative endpoints t1 and t2, such that the lines connecting t1 and t2 with one selected concave point lying between them form a triangle corresponding to the human feet F in the original image Im1. As shown in FIG. 7, the two selected convex points a1 and a2 form a triangle with the selected concave point b1 between them, and this triangle essentially locates the positions of the feet F.

More specifically, when choosing two of the selected convex points a1-a4 as the tentative endpoints t1 and t2, the matching unit 230 determines whether the selected convex points a1-a4 and the selected concave points b1 and b2 satisfy the triangular feature match.

In the triangular feature match, the matching unit 230 determines, with respect to a vector perpendicular to the predetermined direction D1, whether the slope of the line connecting two of the selected convex points a1-a4 is smaller than a predetermined slope. The matching unit 230 also determines whether the projection of one of the selected concave points b1 and b2 onto that vector lies between the projections of the two selected convex points onto the same vector.

For example, in FIG. 7, the predetermined slope corresponds to 45 degrees. The slope S1 of the line connecting the selected convex points a1 and a2 is smaller than this predetermined slope, and the projection d1 of the selected concave point b1 onto the vector D2 lies between the projections d2 and d3 of a1 and a2 onto D2. The matching unit 230 therefore determines that a1 and a2 together with b1 satisfy the triangular feature match.

Furthermore, as shown in FIG. 6, if the side near the end positions Ft of the feet F is defined as "down," then the predetermined direction D1 runs from bottom to top. After determining that a1, a2, and b1 satisfy the triangular feature match, the matching unit 230 further checks whether a1 and a2 lie closer to the bottom than the selected concave point b1. In the example of FIG. 7, the matching unit 230 determines that the two selected convex points a1 and a2 are indeed below b1.
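The three conditions just described can be sketched in Python under some assumptions the patent leaves implicit: D1 is vertical, so the perpendicular vector D2 is the horizontal x axis and projection onto D2 is simply the x coordinate; image coordinates grow downward, so "below" means a larger y; and the 45-degree example slope corresponds to max_slope = 1.0:

```python
def triangle_feature_match(tip_a, tip_b, valley, max_slope=1.0):
    """Check whether two convex candidates (tips) and one concave candidate
    (valley) satisfy the triangular feature match."""
    (ax, ay), (bx, by), (vx, vy) = tip_a, tip_b, valley
    if ax == bx:                               # vertical pair: slope is infinite
        return False
    if abs(by - ay) / abs(bx - ax) >= max_slope:
        return False                           # tip line must be flatter than 45 degrees
    if not (min(ax, bx) < vx < max(ax, bx)):
        return False                           # valley must project between the tips
    return ay > vy and by > vy                 # both tips must lie below the valley
```

Pairs failing any condition are rejected before the tentative endpoints are chosen.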

Further, while checking whether the selected convex points a1-a4 and selected concave points b1 and b2 satisfy the triangular feature match, the matching unit 230 may find that, besides a1 and a2 with b1, the other two selected convex points together with a corresponding selected concave point also satisfy the match. In that case, the matching unit 230 also determines whether the area formed by a1, a2, and b1 is larger than the area formed by the other two selected convex points and their corresponding selected concave point.

In the example of FIG. 7, suppose the matching unit 230 determines that a1 and a2 with b1 satisfy the triangular feature match, that a1 and a2 lie below b1, and that the triangle formed by a1, a2, and b1 has the largest area. The matching unit 230 then selects the two selected convex points a1 and a2 as the two tentative endpoints t1 and t2.
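Selecting the largest-area triangle among the candidates that passed the match can be sketched as follows, using the shoelace formula for the triangle area; the candidate list and its triple layout are illustrative assumptions:

```python
def triangle_area(a, b, c):
    """Half the absolute cross product of two edge vectors (shoelace for a triangle)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def pick_tentative_endpoints(candidates):
    """candidates: (tip_a, tip_b, valley) triples that already passed the
    triangular feature match; return the tips of the largest triangle."""
    best = max(candidates, key=lambda t: triangle_area(*t))
    return best[0], best[1]
```

The returned pair corresponds to the tentative endpoints t1 and t2.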

Referring again to FIG. 2, after the matching unit 230 takes the selected convex points a1 and a2 as the tentative endpoints t1 and t2, the positioning unit 240 determines two positioning endpoints Px and Py from t1 and t2. As FIG. 7 shows, the positions of a1 and a2 determined by the matching unit 230 are essentially the end positions Ft of the feet F, so the positioning endpoints Px and Py determined by the positioning unit 240 locate the end positions Ft of the human feet F.

As described above, the object endpoint positioning system 200 locates the end positions of the human feet by analyzing and interpreting image information through image processing. Unlike systems that require the user to wear special devices or clothing to locate the feet, the system 200 of the present invention requires no such equipment. This improves convenience of use and avoids reducing the user's willingness to use the system.

In addition, when the matching unit 230 determines that the selected convex points a1-a4 and selected concave points b1 and b2 do not satisfy the triangular feature match, it enables the tracking unit 250. The enabled tracking unit 250 obtains two tracking endpoints t1' and t2' from two previous endpoints, namely the end positions of the object's two limbs located by the positioning unit 240 in a previous original image.

More specifically, the tracking unit 250 can track the end positions of the feet located in the previous original image to produce two tracking endpoints t1' and t2' close to the actual positioning endpoints. Thus, even when the positioning unit 240 cannot correctly determine the positioning endpoints Px and Py through the matching unit 230, it can still determine two positioning endpoints Px and Py close to the actual ones from the tracking endpoints t1' and t2' provided by the tracking unit 250. This improves the operational stability of the object endpoint positioning system 200.

Among the original images sequentially obtained by the capture unit 210, the object (e.g., the human feet F) moves only a limited distance between frames, so the end positions located by the positioning unit 240 (e.g., the end positions Ft of the feet F) likewise change only within a limited range. Therefore, if the matching unit 230 fails to locate the end positions Ft of the feet F in a given original image, the tracking unit 250 can track the end positions Ft located in the previous original image to find two tracking endpoints t1' and t2' close to the actual positioning endpoints. Several examples of how the tracking unit 250 obtains t1' and t2' follow.

In the first example, the tracking unit 250 determines the two tracking endpoints t1' and t2' from the brightness changes of the pixels around the two previous endpoints between the previous original image and the original image Im.

In the second example, the tracking unit 250 determines the two tracking endpoints t1' and t2' from the color changes of the pixels around the two previous endpoints between the previous original image and the original image Im.

In the third example, the tracking unit 250 determines t1' and t2' by prediction or probability from the positions represented by the two previous endpoints and another two previous endpoints. The latter are the end positions of the object's two limbs located by the positioning unit 240 in yet another, earlier original image; the original image Im, the previous original image, and that earlier original image are obtained consecutively by the capture unit 210.
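The patent does not specify the prediction scheme. One minimal realization of the third example, assuming a constant-velocity model (more elaborate predictors such as a Kalman filter would also fit), extrapolates each endpoint linearly from its positions in the two earlier frames:

```python
def predict_endpoint(older, prev):
    """Constant-velocity extrapolation: next = prev + (prev - older)."""
    return (2 * prev[0] - older[0], 2 * prev[1] - older[1])

def tracking_endpoints(older_pair, prev_pair):
    """Predict the tracking endpoints t1', t2' from the endpoint pairs located
    in the two preceding original images."""
    return tuple(predict_endpoint(o, p) for o, p in zip(older_pair, prev_pair))
```

The predicted pair stands in for (t1', t2') when the triangular feature match fails in the current frame.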

In the first and second examples above, the tracking endpoints t1' and t2' are determined from the brightness and color changes, respectively, between the original image Im and the previous original image; in the third example, they are determined by prediction or probability. These examples illustrate rather than limit the present invention: any approach in which the tracking unit 250 produces two tracking endpoints t1' and t2' close to the actual positioning endpoints, when the matching unit 230 determines that the selected convex points a1-a4 and selected concave points b1 and b2 do not satisfy the triangular feature match, falls within the scope of the present invention.

After obtaining a single original image Im1, the object endpoint positioning system 200 of the present invention can determine the two positioning endpoints Px and Py to locate the end positions Ft of the human feet F. Further, if the system 200 sequentially obtains multiple original images over a period of time, it can locate the end positions of the feet in each image in turn; by further analyzing this position information, it can determine the moving direction, moving speed, or motion pattern of the feet F during that period.
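As a sketch of the motion analysis mentioned above, per-frame speed and heading can be derived from the sequence of located endpoint positions; the frame rate `fps` and the pixel-coordinate convention are assumptions, not values the patent specifies:

```python
import math

def motion(positions, fps):
    """positions: one (x, y) endpoint location per frame.
    Returns (speed in pixels/second, heading in degrees) per frame transition."""
    out = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = x1 - x0, y1 - y0
        speed = math.hypot(dx, dy) * fps
        heading = math.degrees(math.atan2(dy, dx))
        out.append((speed, heading))
    return out
```

Thresholding or classifying these (speed, heading) sequences is one way the system could recognize motion patterns.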

Moreover, the above description takes as its example the case where the object endpoint positioning system 200 locates the end positions Ft of the human feet F, but the invention is not limited to this. The object endpoint positioning system 200 can also locate the end positions of two fingers of a human hand. In that case, the predetermined direction is the direction of the ends of the fingers relative to the tops of the fingers; that is, the system 200 selects the selected convex points and selected concave points along that direction. In this way, the system can likewise select two selected convex points as two tentative endpoints such that the connecting lines of these two tentative endpoints and one selected concave point located between them form a triangle corresponding to the two fingers, thereby locating the end positions of the two fingers.
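The convex/concave distinction that underlies both the foot and the finger cases can be sketched as follows. The patent defines it via the angle each turning point forms with its two neighbors; this cross-product test is an equivalent, assumed formulation for a counter-clockwise contour:

```python
def classify_turning_points(points):
    """Label each turning point of a closed polygonal contour as convex or
    concave from the sign of the cross product of its adjacent edges.
    points: list of (x, y) vertices in counter-clockwise order."""
    labels = []
    n = len(points)
    for i in range(n):
        (px, py), (cx, cy), (nx, ny) = points[i - 1], points[i], points[(i + 1) % n]
        cross = (cx - px) * (ny - cy) - (cy - py) * (nx - cx)
        labels.append("convex" if cross > 0 else "concave")
    return labels

# A contour with two tips and a notch between them, like two limbs:
# the notch vertex (2, 1) is the only concave point.
labels = classify_turning_points([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)])
```

Two convex tips flanking one concave point is exactly the triangle configuration that the matching unit then tests.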

The object endpoint positioning method described in the above embodiments, and the system applying it, can locate the end positions of two limbs of an object. The two limbs may be the two feet or two fingers of a human body. With this method and system, the user need not wear any special device or clothing, which improves convenience of use and avoids discouraging users from using the system.

In summary, although the invention has been described above by way of preferred embodiments, they are not intended to limit the invention. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention. Accordingly, the protection scope of the invention shall be defined by the appended claims.

Claims (20)

1. An object endpoint positioning method for locating the end positions of two limbs of an object, the method comprising:
obtaining an original image, the original image having image information corresponding to the object;
performing foreground processing on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object;
obtaining a plurality of turning points according to the foreground image, the connecting lines of the turning points forming a polygonal curve substantially close to the contour of the object in the original image;
determining a plurality of convex points and a plurality of concave points from the turning points according to the angle each turning point forms with its two adjacent turning points, and selecting a plurality of selected convex points and a plurality of selected concave points along a predetermined direction;
selecting two of the selected convex points as two tentative endpoints, the connecting lines of the two tentative endpoints and a selected concave point located between the two tentative endpoints forming a triangle corresponding to the two limbs of the object in the original image; and
determining two positioning endpoints according to the two tentative endpoints, to locate the end positions of the two limbs of the object.
2. The method of claim 1, wherein the predetermined direction relates to the direction, in the original image, of the ends of the two limbs of the object relative to the tops of the two limbs.
3. the method for claim 1, wherein select before this its two steps as these two tentative end points of those selected salient points, this method comprises:
Judge whether those selected salient points and those selected concave points meet the triangle characteristic matching; And
If those selected salient points this its two and this one of which of those selected concave points meet the triangle characteristic matching, then determine with those selected salient points this its two as this two tentative end points.
4. The method of claim 3, wherein the step of deciding to take two of the selected convex points as the two tentative endpoints comprises:
if two of the selected convex points and one of the selected concave points meet the triangle feature match, and another two of the selected convex points and another one of the selected concave points also meet the triangle feature match, judging whether the area formed by the former two selected convex points and the former selected concave point is greater than the area formed by the latter two selected convex points and the latter selected concave point; and, if the area formed by the former two selected convex points and the former selected concave point is greater, performing the step of deciding to take those two selected convex points as the two tentative endpoints.
5. The method of claim 3, wherein the step of judging whether the selected convex points and the selected concave points meet the triangle feature match comprises:
according to a vector perpendicular to the predetermined direction, judging whether the slope of the connecting line of the two selected convex points is less than a predetermined slope, and judging whether the position at which the one selected concave point is projected onto the vector lies between the positions at which the two selected convex points are projected onto the vector.
6. method as claimed in claim 3; Wherein, If the side near the terminal position of this two limbs of this object is defined as the below, then this predetermined direction is the direction by below to top, and decision comprises this its two steps as these two tentative end points of those selected salient points:
Judge those selected salient points this its two whether than this one of which of those selected concave points near the below, if then carry out this its two the steps as this two tentative end points of this decision with those selected salient points.
7. The method of claim 3, comprising:
obtaining two tracking endpoints according to two previous endpoints, the two previous endpoints being the end positions of the two limbs of the object located in a previous original image;
wherein, when the selected convex points and the selected concave points do not meet the triangle feature match, the step of determining the two positioning endpoints according to the two tentative endpoints is replaced with:
determining the two positioning endpoints according to the two tracking endpoints.
8. The method of claim 7, wherein the step of obtaining the two tracking endpoints comprises:
in the previous original image and the original image, determining the two tracking endpoints according to brightness changes of the pixels surrounding the two previous endpoints.
9. The method of claim 7, wherein the step of obtaining the two tracking endpoints comprises:
in the previous original image and the original image, determining the two tracking endpoints according to color changes of the pixels surrounding the two previous endpoints.
10. The method of claim 7, wherein the step of obtaining the two tracking endpoints comprises:
determining the two tracking endpoints in a predictive or probabilistic manner, according to the positions respectively represented by the two previous endpoints and by another two previous endpoints;
wherein the other two previous endpoints are the end positions of the two limbs of the object located in another previous original image, and the original image, the previous original image, and the other previous original image are obtained consecutively.
11. An object endpoint positioning system for locating the end positions of two limbs of an object, the system comprising:
a capture unit for obtaining an original image, the original image having image information corresponding to the object;
a processing unit for performing foreground processing on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object, the processing unit further obtaining a plurality of turning points according to the foreground image, the connecting lines of the turning points forming a polygonal curve substantially close to the contour of the object in the original image, and the processing unit determining a plurality of convex points and a plurality of concave points from the turning points according to the angle each turning point forms with its two adjacent turning points, and selecting a plurality of selected convex points and a plurality of selected concave points along a predetermined direction;
a matching unit for selecting two of the selected convex points as two tentative endpoints, the connecting lines of the two tentative endpoints and a selected concave point located between the two tentative endpoints forming a triangle corresponding to the two limbs of the object in the original image; and
a positioning unit for determining two positioning endpoints according to the two tentative endpoints, to locate the end positions of the two limbs of the object.
12. The system of claim 11, wherein the predetermined direction relates to the direction, in the original image, of the ends of the two limbs of the object relative to the tops of the two limbs.
13. The system of claim 11, wherein, when the matching unit selects two of the selected convex points as the two tentative endpoints, the matching unit judges whether the selected convex points and the selected concave points meet a triangle feature match, and, if the matching unit judges that two of the selected convex points and one of the selected concave points meet the triangle feature match, the matching unit decides to take those two selected convex points as the two tentative endpoints.
14. The system of claim 13, wherein, if the matching unit judges that two of the selected convex points and one of the selected concave points meet the triangle feature match, and judges that another two of the selected convex points and another one of the selected concave points also meet the triangle feature match, the matching unit judges whether the area formed by the former two selected convex points and the former selected concave point is greater than the area formed by the latter two selected convex points and the latter selected concave point, and, if the area formed by the former two selected convex points and the former selected concave point is greater, the matching unit decides to take those two selected convex points as the two tentative endpoints.
15. The system of claim 13, wherein, when the matching unit judges whether the selected convex points and the selected concave points meet the triangle feature match, it judges, according to a vector perpendicular to the predetermined direction, whether the slope of the connecting line of the two selected convex points is less than a predetermined slope, and judges whether the position at which the one selected concave point is projected onto the vector lies between the positions at which the two selected convex points are projected onto the vector.
16. The system of claim 13, wherein, if the side near the end positions of the two limbs of the object is defined as the lower side, the predetermined direction is the direction from bottom to top, and the two selected convex points are nearer the lower side than the one selected concave point.
17. The system of claim 13, wherein the matching unit comprises:
a tracking unit for obtaining two tracking endpoints according to two previous endpoints, the two previous endpoints being the end positions of the two limbs of the object located by the positioning unit in a previous original image;
wherein, when the matching unit judges that the selected convex points and the selected concave points do not meet the triangle feature match, the positioning unit, instead of determining the two positioning endpoints according to the two tentative endpoints, determines the two positioning endpoints according to the two tracking endpoints.
18. The system of claim 17, wherein, when the tracking unit obtains the two tracking endpoints, the tracking unit determines the two tracking endpoints in the previous original image and the original image according to brightness changes of the pixels surrounding the two previous endpoints.
19. The system of claim 17, wherein, when the tracking unit obtains the two tracking endpoints, the tracking unit determines the two tracking endpoints in the previous original image and the original image according to color changes of the pixels surrounding the two previous endpoints.
20. The system of claim 17, wherein, when the tracking unit obtains the two tracking endpoints, the tracking unit determines the two tracking endpoints in a predictive or probabilistic manner, according to the positions respectively represented by the two previous endpoints and by another two previous endpoints;
wherein the other two previous endpoints are the end positions of the two limbs of the object located by the positioning unit in another previous original image, and the original image, the previous original image, and the other previous original image are obtained consecutively by the capture unit.
CN2008101856396A 2008-12-17 2008-12-17 Object endpoint positioning method and application system Expired - Fee Related CN101751118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101856396A CN101751118B (en) 2008-12-17 2008-12-17 Object endpoint positioning method and application system

Publications (2)

Publication Number Publication Date
CN101751118A CN101751118A (en) 2010-06-23
CN101751118B (en) 2012-02-22

Family

ID=42478166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101856396A Expired - Fee Related CN101751118B (en) 2008-12-17 2008-12-17 Object endpoint positioning method and application system

Country Status (1)

Country Link
CN (1) CN101751118B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524637A (en) * 1994-06-29 1996-06-11 Erickson; Jon W. Interactive system for measuring physiological exertion
US6308565B1 (en) * 1995-11-06 2001-10-30 Impulse Technology Ltd. System and method for tracking and assessing movement skills in multidimensional space
CN1991691A (en) * 2005-12-30 2007-07-04 财团法人工业技术研究院 Interactive control platform system
CN101140491A (en) * 2006-09-07 2008-03-12 王舜清 Digital image cursor moving and positioning device system


Similar Documents

Publication Publication Date Title
US10747988B2 (en) Method and device for face tracking and smart terminal
US8830312B2 (en) Systems and methods for tracking human hands using parts based template matching within bounded regions
CN103164022B (en) Many fingers touch method and device, portable terminal
CN103677270B (en) A kind of man-machine interaction method based on eye-tracking
CN102096471B (en) Human-computer interaction method based on machine vision
US20130343610A1 (en) Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
Rossol et al. A multisensor technique for gesture recognition through intelligent skeletal pose analysis
CN114792443B (en) Intelligent device gesture recognition control method based on image recognition
CN102831439A (en) Gesture tracking method and gesture tracking system
CN101251784A (en) Laser Pointer Pointing and Spot Track Recognition Method
CN104778460B (en) A kind of monocular gesture identification method under complex background and illumination
KR101559502B1 (en) Method and recording medium for contactless input interface with real-time hand pose recognition
US9218060B2 (en) Virtual mouse driving apparatus and virtual mouse simulation method
CN105247461A (en) Determining Pitch and Yaw for Touchscreen Interaction
EP2528035A2 (en) Apparatus and method for detecting a vertex of an image
JP6651388B2 (en) Gesture modeling device, gesture modeling method, program for gesture modeling system, and gesture modeling system
TW201019241A (en) Method for identifying and tracing gesture
TWI431538B (en) Image based motion gesture recognition method and system thereof
CN101739122A (en) Gesture Recognition and Tracking Method
Ukita et al. Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera
CN101751118B (en) Object endpoint positioning method and application system
TWI405149B (en) Method and system for positioning terminals of object
Lee et al. An effective method for detecting facial features and face in human–robot interaction
CN117593763A (en) Bad sitting posture detection method and related equipment
CN114596582B (en) An augmented reality interaction method and system with visual and force feedback

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120222
