CN101751118B - Object endpoint positioning method and application system - Google Patents
- Publication number
- CN101751118B (application CN2008101856396A)
- Authority
- CN
- China
- Prior art keywords
- points
- those selected
- raw video
- end points
- concave
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
An object endpoint positioning method is provided for locating the end positions of two limbs of an object. In this method, an acquired original image is subjected to foreground processing to obtain a foreground image, which corresponds to the contour of the object in the original image. A plurality of turning points is obtained from the foreground image; the lines connecting them form a polygonal curve approximating the object contour. Each turning point is classified as a convex point or a concave point according to its included angle, and a number of convex points and concave points are selected. Two of the selected convex points are chosen as two tentative endpoints. The lines connecting the two tentative endpoints and a selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Two positioning endpoints are then determined from the two tentative endpoints to locate the end positions of the two limbs of the object.
Description
Technical Field
The present invention relates to an object endpoint positioning method and system, and more particularly to an object endpoint positioning method and system for locating the end positions of two limbs of an object.
Background
A human-computer interaction interface allows a person to interact with a computer. Generally speaking, such an interface includes a keyboard, a mouse, or a touch panel, and the person operating it is called the user. Through the interface, the user can control the computer or interact with it.
U.S. Patent No. 5,524,637 discloses an "interactive system for measuring physiological exertion," in which an accelerometer is worn on each of the user's feet and the user stands on a pressure sensor board, so that the force exerted by the feet and their acceleration can be determined. The user can thus interact with the system through the force and movement speed of the feet.
Furthermore, in recent years researchers have used image processing technology to increase the interactivity between users and computers. U.S. Patent No. 6,308,565 discloses a "system and method for tracking and assessing movement skills in multidimensional space," in which active or passive markers are attached to the user's feet and image processing is used to detect the movement of the markers, thereby determining the coordinate positions and movement states of the feet in space. The user can thus interact with the system by moving the feet.
Although a variety of human-computer interaction interfaces have been proposed, in the above approaches the interface can determine the end positions of the user's feet only if the user wears a special device or clothing (such as the accelerometer or markers mentioned above). This inconveniences the user and may reduce the user's willingness to use the interface. How to locate the end positions of the user's feet without inconveniencing the user therefore remains an open problem in the industry.
Summary of the Invention
An object of the present invention is to provide an object endpoint positioning method and system that can locate the end positions of two limbs of an object, where the two limbs may be a person's two feet or two fingers. With this method and system, the user does not need to wear any special device or clothing, which improves convenience of use.
To achieve the above object, according to a first aspect of the present invention, an object endpoint positioning method is provided for locating the end positions of two limbs of an object. The method includes the following steps. First, an original image having image information corresponding to the object is acquired. Next, foreground processing is performed on the original image to obtain a foreground image corresponding to the contour of the object. Then, a plurality of turning points is obtained from the foreground image; the lines connecting the turning points form a polygonal curve substantially similar to the contour of the object in the original image. Afterwards, according to the angle formed at each turning point with its two adjacent turning points, a plurality of convex points and a plurality of concave points are determined from the turning points, and a number of selected convex points and selected concave points are chosen along a predetermined direction. Next, two of the selected convex points are chosen as two tentative endpoints; the lines connecting the two tentative endpoints and a selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Finally, two positioning endpoints are determined from the two tentative endpoints to locate the end positions of the two limbs of the object.
According to a second aspect of the present invention, an object endpoint positioning system is provided for locating the end positions of two limbs of an object. The system includes a capturing unit, a processing unit, a matching unit, and a positioning unit. The capturing unit acquires an original image having image information corresponding to the object. The processing unit performs foreground processing on the original image to obtain a foreground image corresponding to the contour of the object, and further obtains a plurality of turning points from the foreground image; the lines connecting these turning points form a polygonal curve substantially similar to the contour of the object in the original image. The processing unit determines a plurality of convex points and a plurality of concave points from the turning points according to the angle formed at each turning point with its two adjacent turning points, and selects a number of convex points and concave points along a predetermined direction. The matching unit selects two of the selected convex points as two tentative endpoints; the lines connecting the two tentative endpoints and a selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. The positioning unit determines two positioning endpoints from the two tentative endpoints to locate the end positions of the two limbs of the object.
Brief Description of the Drawings
FIG. 1 is a flowchart of an object endpoint positioning method according to an embodiment of the present invention.
FIG. 2 is a block diagram of an object endpoint positioning system applying the object endpoint positioning method of FIG. 1.
FIGS. 3-7 each illustrate an example of the various images produced by the object endpoint positioning system when executing the object endpoint positioning method.
Description of Reference Numerals in the Drawings
200: object endpoint positioning system; 210: capturing unit; 220: processing unit; 230: matching unit; 240: positioning unit; 250: tracking unit; a1-a4: selected convex points; b1, b2: selected concave points; c1-cn: turning points; D1: predetermined direction; F2: contour; F3: polygonal curve; Im1: original image; Im2: foreground image; Px, Py: positioning endpoints; S110-S160: process steps; t1, t2: tentative endpoints; t1', t2': tracking endpoints.
Detailed Description
To make the above content of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Please refer to FIG. 1, a flowchart of an object endpoint positioning method according to an embodiment of the present invention. The method is used to locate the end positions of two limbs of an object and includes the following steps.
First, as shown in step S110, an original image having image information corresponding to the object is acquired. Next, as shown in step S120, foreground processing is performed on the original image to obtain a foreground image corresponding to the contour of the object.
Then, in step S130, a plurality of turning points is obtained from the foreground image; the lines connecting these turning points form a polygonal curve substantially similar to the contour of the object in the original image. Afterwards, in step S140, according to the angle formed at each turning point with its two adjacent turning points, a plurality of convex points and a plurality of concave points are determined from the turning points, and a number of selected convex points and selected concave points are chosen along a predetermined direction.
Next, in step S150, two of the selected convex points are chosen as two tentative endpoints. The lines connecting the two tentative endpoints and a selected concave point lying between them form a triangle corresponding to the two limbs of the object in the original image. Finally, in step S160, two positioning endpoints are determined from the two tentative endpoints to locate the end positions of the two limbs of the object.
An object endpoint positioning system applying the object endpoint positioning method of FIG. 1 is described in detail below as an example. Please refer to FIG. 2 together with FIGS. 3-7. FIG. 2 is a block diagram of an object endpoint positioning system 200 applying the object endpoint positioning method of FIG. 1. FIGS. 3-7 illustrate examples of the various images produced by the object endpoint positioning system 200 when executing the method.
The object endpoint positioning system 200 can locate the end positions Ft of a person's two feet F, as shown in FIG. 3. The system 200 includes a capturing unit 210, a processing unit 220, a matching unit 230, a positioning unit 240, and a tracking unit 250.
The capturing unit 210 acquires an original image Im1. As shown in FIG. 3, the original image Im1 has image information corresponding to the person's two feet F.
The processing unit 220 performs foreground processing on the original image Im1 to obtain a foreground image, for example by performing edge detection on the original image Im1. The processing unit 220 thereby obtains a foreground image Im2 carrying the edge information of the original image Im1, and the foreground image Im2 contains the contour F2 of the two feet, as shown in FIG. 4.
When foreground processing is performed on the original image Im1, the resulting foreground image Im2 usually contains the contours of all scene objects, such as the contour F2 of the feet and the contours A and B of other objects. The processing unit 220 can therefore filter the foreground image Im2 to retain only the contour F2 of the feet. In practice, since the contour F2 of the feet is usually the region with the largest area, the processing unit 220 can identify and retain the contour F2 by comparing the region areas of the contours F2, A, and B.
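As a minimal sketch of this area-based filtering, assuming each contour is given as a list of (x, y) vertices (the helper names below are illustrative, not from the patent):

```python
def polygon_area(pts):
    """Unsigned area of a closed contour given as a vertex list
    (shoelace formula)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def keep_largest_contour(contours):
    """Keep only the contour enclosing the largest area, on the
    assumption (as in the text) that the feet form the biggest region."""
    return max(contours, key=polygon_area)
```

A contour-extraction library would typically supply the vertex lists; only the selection step is shown here.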
Next, as shown in FIG. 5, the processing unit 220 obtains a plurality of turning points c1-cn from the foreground image Im2. The lines connecting the turning points c1-cn form a polygonal curve F3 whose shape is substantially similar to the contour F2 of the feet.
Afterwards, the processing unit 220 determines a plurality of convex points and a plurality of concave points from the turning points c1-cn according to the angle formed at each turning point with its two adjacent turning points.
Convex and concave points can, for example, be defined as follows: a turning point whose included angle is between 0 and 120 degrees is a convex point, while a turning point whose included angle is greater than 240 degrees is a concave point. In these definitions, the included angle is the interior angle of the polygonal curve F3. Thus, as shown in FIG. 5, the angle formed at turning point c2 with its two adjacent turning points c1 and c3 meets the definition of a convex point, while the angle formed at turning point c3 with its adjacent turning points c2 and c4 meets the definition of a concave point.
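The angle-based classification can be sketched as follows. The 0-120° and >240° thresholds come from the text; the winding-order handling is an assumption needed to tell an interior angle from its exterior complement, and the function names are illustrative:

```python
import math

def interior_angle(prev_pt, pt, next_pt, ccw=True):
    """Interior angle (degrees) of the polygon at `pt`, given its two
    neighbouring turning points and the polygon's winding order."""
    ax, ay = prev_pt[0] - pt[0], prev_pt[1] - pt[1]   # vector pt -> prev
    bx, by = next_pt[0] - pt[0], next_pt[1] - pt[1]   # vector pt -> next
    # Unsigned angle between the two edge vectors, in [0, 180].
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx
    ang = math.degrees(math.atan2(abs(cross), dot))
    # The turn direction at `pt` distinguishes convex from reflex: for a
    # counter-clockwise polygon, a left turn means the small angle is interior.
    turn = (pt[0] - prev_pt[0]) * (next_pt[1] - pt[1]) \
         - (pt[1] - prev_pt[1]) * (next_pt[0] - pt[0])
    reflex = turn < 0 if ccw else turn > 0
    return 360.0 - ang if reflex else ang

def classify(prev_pt, pt, next_pt, ccw=True):
    """'convex' for an interior angle in (0, 120), 'concave' for > 240,
    otherwise None (the patent leaves intermediate angles unclassified)."""
    ang = interior_angle(prev_pt, pt, next_pt, ccw)
    if 0 < ang < 120:
        return "convex"
    if ang > 240:
        return "concave"
    return None
```

For example, a right-angle corner of a counter-clockwise square classifies as convex, while a notch vertex with a 270° interior angle classifies as concave.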
For example, after determining the convex and concave points, the processing unit 220 selects four convex points a1-a4 and two concave points b1 and b2 along a predetermined direction. As shown in FIG. 6, the predetermined direction is, for example, the direction D1 of the ends of the feet F relative to the tops of the feet F.
Next, the matching unit 230 selects two of the convex points a1-a4 as two tentative endpoints t1 and t2; the lines connecting the two tentative endpoints t1 and t2 and a selected concave point lying between them form a triangle corresponding to the person's two feet F in the original image Im1. As shown in FIG. 7, the two selected convex points a1 and a2 and the selected concave point b1 lying between them form a triangle, and this triangle substantially locates the position of the person's two feet F.
More specifically, when the matching unit 230 selects two of the convex points a1-a4 as the two tentative endpoints t1 and t2, it determines whether the selected convex points a1-a4 and the selected concave points b1 and b2 satisfy triangle feature matching.
Triangle feature matching means that, with respect to a vector perpendicular to the predetermined direction D1, the matching unit 230 determines whether the slope of the line connecting two of the convex points a1-a4 is smaller than a predetermined slope, and further determines whether the projection of one of the concave points b1 and b2 onto the vector lies between the projections of those two convex points onto the vector.
For example, in FIG. 7, the predetermined slope is a slope of 45°. The slope S1 of the line connecting the selected convex points a1 and a2 is smaller than this predetermined slope, and the projection d1 of the selected concave point b1 onto the vector D2 lies between the projections d2 and d3 of the two convex points a1 and a2 onto the vector D2. The matching unit 230 therefore determines that the convex points a1 and a2 and the concave point b1 satisfy triangle feature matching.
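The two checks of triangle feature matching described above can be sketched as follows, measuring the slope of the p1-p2 line against the axis perpendicular to the predetermined direction. The 45° threshold comes from the example; the function and parameter names are illustrative assumptions:

```python
import math

def triangle_feature_match(p1, p2, q, direction, max_slope_deg=45.0):
    """p1, p2: candidate convex points; q: candidate concave point;
    `direction` is the predetermined direction D1 as an (x, y) vector."""
    dx, dy = direction
    n = math.hypot(dx, dy)
    # Vector perpendicular to D1 (the projection axis D2).
    px, py = dy / n, -dx / n
    # Condition 1: the line p1-p2 stays within max_slope_deg of D2,
    # i.e. its rise along D1 per unit run along D2 is small enough.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    along = vx * px + vy * py                # component along D2
    across = vx * (dx / n) + vy * (dy / n)   # component along D1
    if along == 0:
        return False
    if abs(across / along) >= math.tan(math.radians(max_slope_deg)):
        return False
    # Condition 2: q's projection onto D2 lies between those of p1 and p2.
    d1 = q[0] * px + q[1] * py
    d2 = p1[0] * px + p1[1] * py
    d3 = p2[0] * px + p2[1] * py
    return min(d2, d3) < d1 < max(d2, d3)
```

With D1 pointing along the image's vertical axis, a nearly horizontal pair of convex points with the concave point projected between them passes both checks.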
Furthermore, as shown in FIG. 6, if the side near the end positions Ft of the feet F is defined as "below," the predetermined direction D1 runs from below to above. After determining that the convex points a1 and a2 and the concave point b1 satisfy triangle feature matching, the matching unit 230 further determines whether the convex points a1 and a2 are lower than the concave point b1. In the example of FIG. 7, the matching unit 230 determines that the two convex points a1 and a2 are indeed lower than the concave point b1.
Further, while determining whether the convex points a1-a4 and the concave points b1 and b2 satisfy triangle feature matching, the matching unit 230 may find that, in addition to the convex points a1 and a2 and the concave point b1, another two convex points and a corresponding concave point also satisfy triangle feature matching. In that case, the matching unit 230 further determines whether the area formed by the convex points a1 and a2 and the concave point b1 is larger than the area formed by the other two convex points and the corresponding concave point.
In the example of FIG. 7, suppose the matching unit 230 determines that the convex points a1 and a2 and the concave point b1 satisfy triangle feature matching, that the convex points a1 and a2 are lower than the concave point b1, and that the triangle formed by the convex points a1 and a2 and the concave point b1 has the largest area. The matching unit 230 then selects the two convex points a1 and a2 as the two tentative endpoints t1 and t2.
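The area comparison used to break ties between candidate triangles can be sketched with the shoelace formula (the helper names are illustrative, not from the patent):

```python
def triangle_area(p1, p2, q):
    """Area of the triangle formed by two convex points and a concave
    point, via the shoelace (cross-product) formula."""
    return abs((p2[0] - p1[0]) * (q[1] - p1[1])
               - (q[0] - p1[0]) * (p2[1] - p1[1])) / 2.0

def pick_largest(candidates):
    """candidates: iterable of (p1, p2, q) triples that already passed
    triangle feature matching; return the triple with the largest area."""
    return max(candidates, key=lambda t: triangle_area(*t))
```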
Referring again to FIG. 2, after the matching unit 230 takes the convex points a1 and a2 as the two tentative endpoints t1 and t2, the positioning unit 240 determines two positioning endpoints Px and Py from the two tentative endpoints t1 and t2. As can be seen in FIG. 7, the positions of the convex points a1 and a2 determined by the matching unit 230 are substantially the end positions Ft of the feet F, so the two positioning endpoints Px and Py determined by the positioning unit 240 locate the end positions Ft of the person's two feet F.
In the above description, the object endpoint positioning system 200 analyzes image information by image processing to locate the end positions of a person's two feet. Compared with systems that require the user to wear special devices or clothing to locate the feet, the object endpoint positioning system 200 of the present invention requires no such devices or clothing. This improves convenience of use and avoids reducing the user's willingness to use the system.
In addition, when the matching unit 230 determines that the convex points a1-a4 and the concave points b1 and b2 do not satisfy triangle feature matching, it enables the tracking unit 250. The enabled tracking unit 250 obtains two tracking endpoints t1' and t2' from two previous endpoints, which are the end positions of the two limbs of the object located by the positioning unit 240 in a previous original image.
More specifically, the tracking unit 250 can track the end positions of the feet located in the previous original image to produce two tracking endpoints t1' and t2' close to the actual positioning endpoints. Thus, even when the positioning unit 240 cannot correctly determine the two positioning endpoints Px and Py through the matching unit 230, it can still determine positioning endpoints Px and Py close to the actual ones from the tracking endpoints t1' and t2' provided by the tracking unit 250. This improves the operational stability of the object endpoint positioning system 200.
Among the original images sequentially acquired by the capturing unit 210, the moving distance of the object (for example, the person's feet F) between images is within a certain range, so the variation in the end positions of the two limbs located by the positioning unit 240 (for example, the variation in the end positions Ft of the feet F) is also within a certain range. Therefore, if the end positions Ft of the feet F cannot be located through the matching unit 230 in a given original image, the tracking unit 250 can track the end positions Ft located in the previous original image to find two tracking endpoints t1' and t2' close to the actual positioning endpoints. Several examples of how the tracking unit 250 obtains the tracking endpoints t1' and t2' follow.
In the first example, the tracking unit 250 determines the two tracking endpoints t1' and t2' from the brightness changes of the pixels surrounding the two previous endpoints between the previous original image and the original image Im1.
In the second example, the tracking unit 250 determines the two tracking endpoints t1' and t2' from the color changes of the pixels surrounding the two previous endpoints between the previous original image and the original image Im1.
In the third example, the tracking unit 250 determines the two tracking endpoints t1' and t2' by prediction or probability, based on the positions represented by the two previous endpoints and another two previous endpoints. The other two previous endpoints are the end positions of the two limbs located by the positioning unit 240 in yet another previous original image, and the original image Im1, the previous original image, and the other previous original image are acquired consecutively by the capturing unit 210.
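As one illustration of the prediction variant, a constant-velocity extrapolation from the endpoints of the two most recent frames can be used. The patent does not prescribe a specific predictor, so this linear scheme and its function name are assumptions:

```python
def predict_tracking_endpoints(prev_pts, prev2_pts):
    """Hypothetical constant-velocity predictor: from the two endpoints
    in the previous frame (prev_pts) and the frame before it
    (prev2_pts), extrapolate tracking endpoints t1', t2' for the
    current frame as p_prev + (p_prev - p_prev2)."""
    return [
        (2 * p1[0] - p2[0], 2 * p1[1] - p2[1])
        for p1, p2 in zip(prev_pts, prev2_pts)
    ]
```

A probabilistic filter (e.g. a Kalman filter) would be a natural heavier-weight alternative for the same role.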
In the first and second examples above, the two tracking endpoints t1' and t2' are determined from brightness and color changes, respectively, between the original image Im1 and the previous original image; in the third example they are determined by prediction or probability. These examples illustrate the invention and do not limit it. Any approach in which the tracking unit 250 produces two tracking endpoints t1' and t2' close to the actual positioning endpoints, when the matching unit 230 determines that the convex points a1-a4 and the concave points b1 and b2 do not satisfy triangle feature matching, falls within the scope of the invention.
After acquiring an original image Im1, the object endpoint positioning system 200 of the present invention can determine the two positioning endpoints Px and Py to locate the end positions Ft of the person's feet F. Further, if the system 200 sequentially acquires multiple original images over a period of time, it can locate the end positions of the feet in each of those images in turn. The system 200 can then further analyze this position information to determine the movement direction, movement speed, or motion pattern of the feet F during that period.
The above description takes as its example the system 200 locating the end positions Ft of a person's feet F, but the invention is not limited thereto. The object endpoint positioning system 200 can also locate the end positions of two fingers of a human hand. In that case, the predetermined direction is the direction of the ends of the fingers relative to the tops of the fingers; that is, the system 200 selects the convex and concave points along that direction. The system 200 can likewise select two convex points as two tentative endpoints, and those two tentative endpoints together with a concave point lying between them form a triangle corresponding to the two fingers, thereby locating the end positions of the two fingers.
With the object endpoint positioning method and system described in the above embodiments, the present invention can locate the end positions of two limbs of an object, which may be a person's two feet or two fingers. The user does not need to wear any special device or clothing, which improves convenience of use and avoids reducing the user's willingness to use the system.
In summary, although the present invention has been described above with reference to preferred embodiments, they are not intended to limit the invention. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention. The scope of protection of the invention is therefore defined by the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008101856396A CN101751118B (en) | 2008-12-17 | 2008-12-17 | Object endpoint positioning method and application system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101751118A CN101751118A (en) | 2010-06-23 |
CN101751118B true CN101751118B (en) | 2012-02-22 |
Family
ID=42478166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008101856396A Expired - Fee Related CN101751118B (en) | 2008-12-17 | 2008-12-17 | Object endpoint positioning method and application system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101751118B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5524637A (en) * | 1994-06-29 | 1996-06-11 | Erickson; Jon W. | Interactive system for measuring physiological exertion |
US6308565B1 (en) * | 1995-11-06 | 2001-10-30 | Impulse Technology Ltd. | System and method for tracking and assessing movement skills in multidimensional space |
CN1991691A (en) * | 2005-12-30 | 2007-07-04 | 财团法人工业技术研究院 | Interactive control platform system |
CN101140491A (en) * | 2006-09-07 | 2008-03-12 | 王舜清 | Digital image cursor moving and positioning device system |
Also Published As
Publication number | Publication date |
---|---|
CN101751118A (en) | 2010-06-23 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20120222 |