CN1246793C - Method of sign language translation through an intermediate mode language - Google Patents
Method of sign language translation through an intermediate mode language
- Publication number: CN1246793C
- Authority: CN (China)
- Legal status: Expired - Fee Related
Abstract
A method of sign language translation through an intermediate mode language, comprising: collecting sign language word data; extracting feature information from that data; performing continuous sign language sentence recognition based on the feature information and recording the recognition result as intermediate mode language data; and, according to the correspondence between the intermediate mode language data and a given non-sign language, converting the intermediate mode language data into words of that non-sign language for output. In the reverse direction: collecting non-sign-language word data; converting the words into intermediate mode language data according to the same correspondence and recording them; then, based on the intermediate mode language data, finding the corresponding sign language word data in a sign language vocabulary database and synthesizing that data into sign language image information for output. Because both the sign language and the non-sign-language modes are mapped to the intermediate mode language, the translation system is easy to extend, and conversion in either direction between non-sign languages and sign language is simplified.
Description
Technical field
The present invention relates to a method of sign language translation through an intermediate mode language, and in particular to a method of translating sign language into a non-sign language, and a non-sign language into sign language, by way of an intermediate mode language data form.
Background art
Language is an indispensable tool for communication. Hundreds of languages are in use in the world today, and the number becomes hard to count once local dialects and the sign languages of the deaf are included. This variety makes communication between groups using different languages very difficult, not only among physically healthy people but, even more acutely, among people with disabilities. Translation between languages has therefore long been a problem for all of humanity.
With the continuing progress of science and technology, and especially the rapid development of computer technology over the past twenty years, using computers to translate one perceptual language into others has become a reality. At present, however, computer translation usually translates a language of one mode directly into a language of another mode. Such translation methods and systems have the following shortcoming:
In automatic computer translation, there is usually only a single fixed vocabulary correspondence between a source language and a target language. An existing translation system therefore cannot translate one source language into many target languages; to do so, it would need a separate fixed vocabulary correspondence between the source language and every target language. Designing and implementing such a system involves an enormous workload, and extending it with new languages is difficult.
Summary of the invention
The main object of the present invention is to provide a method of sign language translation through an intermediate mode language: sign language is translated into an intermediate mode language, which is then further translated into the required non-sign-language form; or a non-sign language is converted into the intermediate mode language, which is then translated into sign language.
Another object of the present invention is to provide a method of sign language translation through an intermediate mode language in which both the sign language and the non-sign-language modes correspond to the intermediate mode language, which facilitates extension of the sign language translation system and conversion in either direction between non-sign languages and sign language.
These objects of the present invention are achieved as follows.
The present invention provides a method of sign language translation through an intermediate mode language, in which sign language is translated into a non-sign language by way of an intermediate mode language data form. The specific steps include:
Step 101: collect sign language word data;
Step 102: extract the feature information from the sign language word data;
Step 103: perform continuous sign language sentence recognition based on that feature information, then record the recognition result as intermediate mode language data;
Step 104: according to the correspondence between the intermediate mode language data and the target non-sign language, convert the intermediate mode language data into words of that non-sign language and output them.
The present invention also provides a method of sign language translation through an intermediate mode language, in which a non-sign language is translated into sign language by way of an intermediate mode language data form. The specific steps include:
Step 201: collect non-sign-language word data;
Step 202: according to the correspondence between the intermediate mode language data and that non-sign language, convert the non-sign-language words into intermediate mode language data and record them;
Step 203: based on the intermediate mode language data, find the corresponding sign language word data in the sign language vocabulary database, then synthesize that data into sign language image information for output.
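Steps 101-104 and 201-203 share one structural idea: every supported language maps only to and from the intermediate mode language, so adding a language adds a single correspondence table rather than one table per language pair. A minimal sketch of this routing (all vocabulary entries and intermediate codes below are invented placeholders, not taken from the patent):

```python
# Each language is described only by its correspondence with the
# intermediate mode language, never directly with another language.
# The entries below are invented placeholders for illustration.
sign_to_mid = {"SIGN_HELLO": "M001", "SIGN_THANKS": "M002"}
mid_to_chinese = {"M001": "你好", "M002": "谢谢"}
mid_to_english = {"M001": "hello", "M002": "thanks"}

def translate(sign_words, mid_to_target):
    """Sign words -> intermediate codes -> target-language words."""
    mids = [sign_to_mid[w] for w in sign_words]        # steps 101-103
    return [mid_to_target[m] for m in mids]            # step 104

print(translate(["SIGN_HELLO", "SIGN_THANKS"], mid_to_english))
# -> ['hello', 'thanks']
```

Supporting a new output language here means adding one more `mid_to_*` table; the sign-recognition side is untouched, which is the extensibility argument the patent makes.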
Both methods may further include: while collecting the sign language word data or non-sign-language data, also collecting the corresponding face information; then extracting feature data from that face information; and finally using the feature data to synthesize the output face image when the translation is output.
The sign language word data is collected as follows: data gloves collect the sensor data of each joint of the hands, and a position tracker inputs the position and orientation data of the sign gestures. The data gloves are worn on the left and right hands. The position tracker consists of one transmitter and one or more receivers: the transmitter emits an electromagnetic wave, receivers are worn on the left and right wrists, and each receiver receives the wave and computes its position and orientation relative to the transmitter.
The feature information is extracted from the sign language word data as follows: compute the position and orientation of the left and right hands relative to a reference, normalize each component of the joint sensor data, and use the processed data as training samples for Hidden Markov Models (HMMs) to build a library of sign language sample models.
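A minimal sketch of the per-component normalization step, assuming the glove data arrives as one row of sensor components per time step (the exact feature layout and value ranges are not fixed by the patent):

```python
import numpy as np

def normalize_features(frames):
    """Scale each component of the joint-sensor data to [0, 1].

    frames: (T, D) array, one row per time step, one column per sensor
    component (finger-joint readings plus relative hand position and
    orientation).  The normalized frames would then serve as HMM
    training samples.
    """
    frames = np.asarray(frames, dtype=float)
    lo = frames.min(axis=0)
    span = frames.max(axis=0) - lo
    span[span == 0] = 1.0            # constant components map to 0
    return (frames - lo) / span

demo = np.array([[0.0, 10.0], [5.0, 10.0], [10.0, 10.0]])
print(normalize_features(demo))      # each column scaled independently
```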
An HMM is specified by the parameter set λ = (A, B, π), where:

A = {a_ij} is the state-transition probability matrix, with

a_ij = P[q_{t+1} = S_j | q_t = S_i], 1 ≤ i, j ≤ N,

subject to the constraints a_ij ≥ 0 and Σ_{j=1}^{N} a_ij = 1, where N is the number of states of the model.

π = {π_i} is the initial-state distribution; π_i is the probability of starting from the i-th state node:

π_i = P[q_1 = S_i], 1 ≤ i ≤ N,

subject to π_i ≥ 0 and Σ_{i=1}^{N} π_i = 1.

B = {b_j(k)} is the probability density of the observed signal. When the observations are continuous vectors, b_j is a continuous probability density function:

b_j(O_k) = Σ_{m=1}^{M} c_jm · G(O_k; μ_jm, Σ_jm),

where M is the number of mixture components, O_k is the observation vector at time k, and c_jm is the mixing proportion, satisfying c_jm ≥ 0 and Σ_{m=1}^{M} c_jm = 1. G is the Gaussian probability density function, and μ_jm and Σ_jm are respectively the mean vector and covariance matrix of the m-th component of the Gaussian mixture density of state j.

Given K groups of training data O = [O^(1), O^(2), …, O^(K)] corresponding to the same sign word, and writing γ_t(i) for the expected probability of occupying state node i at time t, ξ_t(i, j) for the expected probability of transitioning from state node i to state node j at time t, and γ_t(j, m) for the expected probability of occupying the m-th branch of state node j at time t (each summed over all K sequences), the re-estimation formulas for π_i, a_ij, c_jm, μ_jm and Σ_jm are:

π̄_i = γ_1(i), the expected probability that state node i occurs at time t = 1;

ā_ij = Σ_t ξ_t(i, j) / Σ_t γ_t(i); the numerator is the expected probability of transitioning from state node i to state node j, the denominator the expected probability of transitioning out of state node i;

c̄_jm = Σ_t γ_t(j, m) / Σ_t γ_t(j); the numerator is the expected probability that the m-th branch of state node j occurs, the denominator the expected probability that state node j occurs;

μ̄_jm = Σ_t γ_t(j, m) O_t / Σ_t γ_t(j, m); the numerator weights the observation sequence O by the occupancy of the m-th branch of state node j, the denominator is the expected probability that the m-th branch of state node j occurs;

Σ̄_jm = Σ_t γ_t(j, m) (O_t − μ̄_jm)(O_t − μ̄_jm)^T / Σ_t γ_t(j, m); the numerator is the expected squared deviation of the observation sequence O in the m-th branch of state node j, the denominator the expected probability that the m-th branch of state node j occurs.
Continuous sign language sentences are recognized as follows: once the model library has been built, the Viterbi decoding method is used to compute the likelihood of the test sample under every possible model sequence; the word sequence corresponding to the model sequence with the largest probability is the recognition result.
Let the vocabulary size be V, with word models numbered k = 1, …, V and corresponding model parameters (π_k, A_k, c_k, μ_k, U_k); let every word have L states, and number the input sign language frames i = 1, 2, …, N.

When a frame transition stays within the same model (i > 1):
T(i, j, k) = k.

When the transition crosses a model boundary (j = 1), the path may enter word model k from the final state of any word model.

At the start (i = 1): Pr(1, k) = p(1, 1, k), Pr(j, k) = 0 for j > 1,
T(1, j, k) = -1,
F(1, j, k) = -1.

Each Pr(L, k) can be computed recursively from the formulas above, giving the global maximum probability. The best path (T_i, F_i) is then traced back (in reverse order) from T(i, j, k) and F(i, j, k):
F_N = L,
T_i = T(i+1, T_{i+1}, F_{i+1}), N-1 ≥ i ≥ 1,
F_i = F(i+1, T_{i+1}, F_{i+1}), N-1 ≥ i ≥ 1,
yielding the recognition result. Here:
p(i, j, k): the probability that frame i is emitted by the j-th state of word k;
Pr(j, k): the maximum probability, up to the current input frame i, of any state sequence from the start to the j-th state of word k;
T(i, j, k): records the index of the model containing the previous frame;
F(i, j, k): records the state of the previous frame within model T(i, j, k).
To improve recognition accuracy, a bigram (second-order Markov chain over words) is embedded in the Viterbi search; the prior probability of a sentence is computed as

P(W) = P(w_1) · ∏_{i=2}^{n} P(w_i | w_{i-1}),

where W is the sentence being recognized, w_1, w_2, …, w_n are the words of W, and P(w_i | w_{i-1}) is the occurrence frequency of the word pair.
The sign language word data is synthesized into sign language image information as follows:
build a virtual human using a VRML human-body representation model;
determine the angle value of each degree of freedom of the virtual human;
compute the position and orientation of each limb of the virtual human, determining one posture;
ignore the non-upper-limb joint angles of that posture;
display every posture of a sign language movement in sequence at a fixed time interval, generating the corresponding sign language motion image.
When generating the sign language motion image, smooth interpolation is further performed between adjacent frames. The inserted frame is computed by linear interpolation on each degree-of-freedom curve:

G_i(t_f) = G_i(t_f1) + (t_f − t_f1) / (t_f2 − t_f1) · (G_i(t_f2) − G_i(t_f1)),

where
f_1 and f_2 are two adjacent image frames of one sign language movement;
t_f1 and t_f2 are the times of f_1 and f_2 measured from the starting point;
t_f is the time of the inserted frame measured from the starting point;
G_i(t_f) is the interpolated value of the i-th degree-of-freedom curve;
G_i(t_f1) and G_i(t_f2) are the values of that curve at f_1 and f_2.
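The per-degree-of-freedom linear interpolation between two key frames can be sketched as follows (function and parameter names are illustrative):

```python
def interp_dof(g1, g2, t1, t2, t):
    """Linear interpolation of one joint degree-of-freedom value.

    g1, g2 : DOF values at the two adjacent key frames f1, f2
    t1, t2 : times of f1 and f2 measured from the start (t1 < t2)
    t      : time of the frame to insert, with t1 <= t <= t2
    """
    u = (t - t1) / (t2 - t1)
    return g1 + u * (g2 - g1)

# Insert a frame midway between a 10-degree and a 30-degree key pose:
print(interp_dof(10.0, 30.0, 0.0, 1.0, 0.5))   # -> 20.0
```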
When generating the sign language motion image, a quaternion-based motion interpolation method is further used to smooth the transitions of complex joints across discontinuous sign language frames. With u = (t_f − t_f1) / (t_f2 − t_f1), the interpolated joint orientation is

q(u) = [sin((1 − u)θ) · q_f1 + sin(uθ) · q_f2] / sin θ,

where
f_1 and f_2 are the image frames of two adjacent gestures in one sign language movement;
t_f1 and t_f2 are the times of f_1 and f_2 measured from the starting point;
q_f1 and q_f2 are the orientations of the joint at times t_f1 and t_f2;
t_f is the time of the inserted frame measured from the starting point;
θ is the angle between the two orientations, determined by q_f1 · q_f2 = cos θ.
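A sketch of this quaternion interpolation (spherical linear interpolation), assuming unit quaternions stored as 4-vectors; the shorter-arc check and the near-parallel fallback are standard practical additions not spelled out in the patent:

```python
import numpy as np

def slerp(q1, q2, u):
    """Spherical linear interpolation between unit quaternions.

    q1, q2 : unit quaternions (4-vectors) at times t_f1 and t_f2
    u      : normalized time (t_f - t_f1) / (t_f2 - t_f1), in [0, 1]
    """
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    dot = np.dot(q1, q2)
    if dot < 0.0:                # take the shorter arc
        q2, dot = -q2, -dot
    if dot > 0.9995:             # nearly parallel: fall back to lerp
        q = q1 + u * (q2 - q1)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)       # q1 . q2 = cos(theta)
    return (np.sin((1 - u) * theta) * q1 + np.sin(u * theta) * q2) / np.sin(theta)

# Halfway between the identity and a 90-degree rotation about z
# gives a 45-degree rotation about z:
q_mid = slerp([1, 0, 0, 0], [np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)], 0.5)
print(q_mid)
```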
Extracting the feature data from the face information includes at least detection of frontal face features and detection of profile (side-view) feature points.
Frontal face feature detection includes at least coarse localization of the facial features, detection of key feature points, and detection of feature shapes based on deformable templates.
Profile feature point detection includes at least extraction of the face profile contour and detection of the feature points on it.
Coarse localization of the facial features proceeds as follows: first locate the irises, then obtain the positions of the other facial organs from the iris center positions, statistical prior data on the structure of the facial organs, and the gray-level distribution of the face.
Detection of the key facial feature points obtains the main feature points at the eye corners, the mouth corners and on the chin curve, which serve as initial values for the corresponding organ template parameters. It comprises detection of the eye key points, the mouth key points and the chin key points, where: the eye key points are the left and right eye corners and the boundary points of the upper and lower eyelids; the mouth key points are the two mouth corners, the highest point of the upper lip and the lowest point of the lower lip; and the chin key points are the intersections with the chin curve of (a) the extended line through the mouth corners, (b) the vertical line through the mid-lip point, (c) the vertical lines through the left and right mouth corners, (d) the line through the left mouth corner at 45 degrees down to the left, and (e) the line through the right mouth corner at 45 degrees down to the right.
Feature shape detection based on deformable templates includes: detecting the feature shape of the eye region to obtain the eye template parameters; detecting the mouth shape to obtain the mouth template parameters; and detecting the chin shape to obtain the chin template parameters.
The face profile contour is extracted as follows: segment the face region using the skin-color characteristics of the face, then apply edge detection and locate the contour line using prior data on face contours.
The profile feature points are detected as follows: taking the nose tip as the dividing point, split the face contour line into an upper and a lower segment; obtain an approximate function for each segment by curve fitting; and take the points where the first derivative of that function is zero as the profile feature points.
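A sketch of the derivative-zero feature-point search, assuming the contour segment is fitted with a polynomial (the patent does not fix the family of fitting curves; a quartic is used here purely for illustration):

```python
import numpy as np

def profile_feature_points(xs, ys, degree=4):
    """Fit a polynomial to one segment of the face profile curve and
    return the x positions where its first derivative vanishes
    (candidate feature points such as nose tip or chin point).
    """
    xs = np.asarray(xs, float)
    coeffs = np.polyfit(xs, ys, degree)          # approximate contour
    roots = np.roots(np.polyder(coeffs))         # zeros of derivative
    real = roots[np.isreal(roots)].real
    return sorted(r for r in real if xs.min() <= r <= xs.max())

# Synthetic contour with flat points at x = -1, 0, 1:
xs = np.linspace(-2, 2, 50)
ys = (xs**2 - 1)**2
print(profile_feature_points(xs, ys))
```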
The output face image is synthesized from the feature data as follows. A number of feature points are defined on the face model; they can be extracted from frontal and profile images of a specific person, and their automatic extraction belongs to the field of face image detection and analysis. It is assumed that analysis and recognition techniques have already been applied to extract the required features or deformation curves from the specific face images; these are then used as deformation parameters for a generic face model. The generic neutral face model is a three-dimensional mesh in which the 3-D coordinates of every feature point are known. Modifying the generic neutral face model into the neutral model of a specific face involves two kinds of transformation. First, a global transformation of the generic neutral model adjusts the overall facial contour to match the face shape and the approximate positions of the facial organs of the specific person. Let a point on the face model have coordinates (x, y, z) before the transformation and (x′, y′, z′) after it, and let the face center be o(x_0, y_0, z_0) before and o(x′_0, y′_0, z′_0) after, where the face center o is defined as the intersection of the line through the two eye corners with the vertical midline of the face.
Parameters p, q1 and q2 are defined as the distances from the center point to the sideburn, to the midpoint of the forehead, and to the chin, respectively; parameter u is the distance from the mouth center to the lower edge of the ear; and, in profile, parameters r1, r2 and r3 are the distances from the top of the forehead to the hairline, from the center point to the upper edge of the ear, and from the mouth corner to the lower edge of the ear.
For the upper part of the face (above the eye line) the modification formulas are:
x′ = x′_0 + (p′/p)(x − x_0)
y′ = y′_0 + (q1′/q1)(y − y_0)
z′ = z′_0 + (r1′/r1)(z − z_0)
The middle and lower parts of the face are modified analogously.
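The global upper-face warp defined by the three formulas above can be sketched as follows, with the generic-model and specific-person measurements passed in as plain numbers (function and parameter names are illustrative):

```python
import numpy as np

def warp_upper_face(pts, o, o_new, p, p_new, q1, q1_new, r1, r1_new):
    """Global warp of upper-face vertices (above the eye line).

    pts          : (N, 3) model vertices (x, y, z)
    o, o_new     : face center of the generic model / of the target face
    p, q1, r1    : generic-model distances (center->sideburn,
                   center->forehead midpoint, forehead->hairline)
    *_new        : the same distances measured on the specific person
    """
    pts = np.asarray(pts, float)
    scale = np.array([p_new / p, q1_new / q1, r1_new / r1])
    return np.asarray(o_new, float) + scale * (pts - np.asarray(o, float))

# One vertex, center moved from the origin to (1, 1, 1),
# x stretched by 2 and z by 3:
print(warp_upper_face([[1, 2, 3]], [0, 0, 0], [1, 1, 1], 1, 2, 1, 1, 1, 3))
```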
In the local transformation, points in the eye region are modified as follows. Let (x, y, z) be the coordinates of an eye-region point before the transformation and (x′, y′, z′) the coordinates after it; then:
x′ = ax + by + cz
y′ = dx + ey + fz
where a, b, c, d, e, f are obtained by substituting three pairs of corresponding feature points from before and after the transformation and solving the resulting system of six linear equations. This transformation can produce small displacements of the eyes and changes to their shape. As the formulas show, the modification acts only in the x and y directions, with no change in depth, so it cannot fully capture the profile information of the specific face. The eyebrows and mouth are modified in the same way.
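Solving for the six coefficients from three corresponding feature points can be sketched as two 3x3 linear systems, one for (a, b, c) from the x′ values and one for (d, e, f) from the y′ values (the sample points below are invented for illustration):

```python
import numpy as np

def fit_eye_transform(src, dst):
    """Solve for a..f in x' = ax + by + cz, y' = dx + ey + fz.

    src : three feature points (x, y, z) before the transformation
    dst : the same three points (x', y') after the transformation
    Returns ((a, b, c), (d, e, f)); z is left unchanged.
    """
    src = np.asarray(src, float)        # (3, 3)
    dst = np.asarray(dst, float)        # (3, 2)
    abc = np.linalg.solve(src, dst[:, 0])
    def_ = np.linalg.solve(src, dst[:, 1])
    return abc, def_

# With unit-axis source points the solution can be read off directly:
abc, def_ = fit_eye_transform([[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                              [[2, 0], [0, 3], [1, 1]])
print(abc, def_)
```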
Let the coordinates of a nose-region point before and after the transformation be (x, y, z) and (x′, y′, z′), and the nose center be (x_0, y_0, z_0) and (x′_0, y′_0, z′_0); the nose is transformed according to:
x′ = x′_0 + (p′/p)(x − x_0)
y′ = y′_0 + (q′/q)(y − y_0)
z′ = z′_0 + (r′/r)(z − z_0)
After the global and local transformations, a neutral three-dimensional face mesh carrying the essential features of the specific face is obtained.
Synthesizing the output face image further includes a lip model that fits the upper lip line with two parabolas and the lower lip line with one. The selected points are the two mouth corners, the highest points of the two upper-lip parabolas, the lowest point of the lower-lip parabola, and the intersection of the two upper-lip parabolas; additional points are placed on the lower-lip parabola, on the upper-lip parabolas and on the line joining the two mouth corners, becoming the points of the upper and lower inner-lip parabolas. Each parabola of the outer lip contour satisfies

y = a(x − b)² + c,

where the coefficients a, b and c are solved by substituting the coordinates of the known points into the equation.
The dynamic lip movement model builds on this mouth-opening model and is described by five interrelated parabolas: two for the upper lip, one for the lower lip, and one each for the upper and lower inner lips. In parameter-driven lip movement synthesis, the corresponding lip opening state is determined by giving the vertical and horizontal opening distances, or by giving the mouth corner points together with the highest point of the upper lip and the lowest point of the lower lip.
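Fitting one outer-contour parabola y = a(x − b)² + c through three known lip points (for example the two mouth corners and the lowest lower-lip point) can be sketched as follows; the sample coordinates are invented for illustration:

```python
import numpy as np

def fit_lip_parabola(pts):
    """Fit y = a*(x - b)**2 + c through three lip contour points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    # Expand to y = A*x**2 + B*x + C and solve the 3x3 linear system.
    M = np.array([[x1**2, x1, 1], [x2**2, x2, 1], [x3**2, x3, 1]], float)
    A, B, C = np.linalg.solve(M, [y1, y2, y3])
    a = A
    b = -B / (2 * A)            # vertex x-coordinate
    c = C - A * b**2            # vertex y-coordinate
    return a, b, c

# Points sampled from y = 2*(x - 1)**2 + 3:
print(fit_lip_parabola([(0, 5), (1, 3), (2, 5)]))   # -> (2.0, 1.0, 3.0)
```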
The method of sign language translation through an intermediate mode language provided by the present invention translates sign language and non-sign languages into an intermediate mode language, then further translates that intermediate mode language into the required language form. Since both the sign language and the non-sign-language modes correspond to the intermediate mode language, the sign language translation system is easy to extend, and conversion in either direction between non-sign languages and sign language is simplified.
以下结合附图和具体的实施例对本发明做详细的进一步说明:Below in conjunction with accompanying drawing and specific embodiment the present invention is described in detail further:
附图说明: Description of drawings :
图1为本发明的基本原理示意图。Fig. 1 is a schematic diagram of the basic principle of the present invention.
图2为本发明中将手语翻译为中间模式语言的流程示意图。Fig. 2 is a schematic flow chart of translating sign language into intermediate mode language in the present invention.
图3为本发明中将中间模式语言翻译为手语的流程示意图。Fig. 3 is a schematic flow chart of translating intermediate pattern language into sign language in the present invention.
图4为本发明一实施例的整体流程结构示意图。Fig. 4 is a schematic diagram of the overall process structure of an embodiment of the present invention.
具体实施方式: Specific implementation methods :
参见图1,本发明的基本原理是:以中间模式语言M作为手语MA和非手语语言MB之间转换的必要途径;即:将手语MA翻译为非手语语言MB或将非手语语言MB翻译为手语MA,均通过一中间模式语言M进行。Referring to Fig. 1, the basic principle of the present invention is to use an intermediate mode language M as the necessary route for conversion between sign language MA and non-sign language MB; that is, translating sign language MA into non-sign language MB, or translating non-sign language MB into sign language MA, is always carried out through the intermediate mode language M.
实施例1:手语转变为语音输出Embodiment 1: Sign language is converted into voice output
参见图2和图4,将手语通过一中间模式语言数据形式翻译为非手语语言的具体方法为:Referring to Fig. 2 and Fig. 4, the specific method of translating sign language into non-sign language language through an intermediate mode language data form is:
首先,采集手语词语数据;本发明的一实施例中,采用两只具有18个传感器的数据手套及其配套设备-位置跟踪器作为手势输入设备,该位置跟踪器由一个发射器和若干个接收器构成,发射器发出电磁波,每个接收器接收该电磁波,然后根据接收到的电磁波计算该接收器相对于发射器的位置和方向数据。在作出手语动作的人体左右手腕上各配一个接收器,因为发射器的位置不固定,测试时采集的手语数据的坐标经常会发生变化,所以还要把第三个接收器放在人身体上,用该接收器的位置作为参照点和参照坐标系,通过参照该第三接收器的位置和方向数据,获取左、右手上的接收器相对参照坐标系的位置和方向为不变量特征。First, sign language word data is collected. In one embodiment of the present invention, two data gloves with 18 sensors each and their companion device, a position tracker, serve as the gesture input device. The position tracker consists of a transmitter and several receivers: the transmitter emits electromagnetic waves, each receiver receives them, and the position and orientation of that receiver relative to the transmitter are calculated from the received waves. One receiver is attached to each wrist of the person performing the sign language. Because the transmitter's position is not fixed, the coordinates of the sign language data collected during testing often change, so a third receiver is placed on the person's body, and its position is used as a reference point and reference coordinate system; by referring to the position and orientation data of this third receiver, the positions and orientations of the receivers on the left and right hands relative to the reference coordinate system are obtained as invariant features.
提取该手语词语数据中的特征信息的具体方法为:计算左右两手相对于参照的位置和方向,对手的各关节传感数据的每个分量进行归一化处理,并将处理后的数据作为HMM训练样本,建立手语样本模型库。The specific method of extracting the feature information from the sign language word data is to compute the position and orientation of the left and right hands relative to the reference, normalize each component of the sensor data of every hand joint, use the processed data as HMM training samples, and build a sign language sample model library.
如上所述的一个HMM可用参数:λ=(A,B,π)表示,As mentioned above, an HMM can be expressed with parameters: λ=(A, B, π),
其中,A={aij}为状态转移概率矩阵,Among them, A={a ij } is the state transition probability matrix,
并且满足公式:aij = P[qt+1 = Sj | qt = Si],1≤i,j≤N;And satisfies the formula: aij = P[q(t+1) = Sj | q(t) = Si], 1 ≤ i, j ≤ N;
并满足约束条件:aij≥0,Σj aij = 1,1≤i,j≤N;And satisfies the constraints: aij ≥ 0 and Σ_{j=1}^{N} aij = 1, 1 ≤ i, j ≤ N;
上式中,N为模型的状态数;In the above formula, N is the state number of the model;
π={πi},πi表示从第i个状态结点开始的概率,π={π i }, π i represents the probability of starting from the i-th state node,
并且满足公式:πi=P[q1=Si],1≤i≤N;And satisfy the formula: π i =P[q 1 =S i ], 1≤i≤N;
并满足约束条件:πi≥0,Σi πi = 1;And satisfies the constraints: πi ≥ 0 and Σ_{i=1}^{N} πi = 1;
B={bj(k)}为观测信号的概率密度,当观察符号为连续矢量时,bj(k)是连续概率密度函数,并且:bj(Ok) = Σm cjm·G(Ok, μjm, ∑jm),1≤j≤N;B = {bj(k)} is the probability density of the observed signal; when the observation symbols are continuous vectors, bj(k) is a continuous probability density function, and bj(Ok) = Σ_{m=1}^{M} cjm · G(Ok, μjm, ∑jm), 1 ≤ j ≤ N;
其中,N为模型的状态数,M为混合项数,Ok为k时刻的观察向量;cjm为混合比(Mixing proportion),并且满足:cjm≥0,Σm cjm = 1;Among them, N is the number of states of the model, M is the number of mixture components, Ok is the observation vector at time k, and cjm is the mixing proportion, satisfying cjm ≥ 0 and Σ_{m=1}^{M} cjm = 1;
其中:G取为高斯概率密度函数,μjm和∑jm分别为高斯混合概率密度中第m个分量的均值向量和协方差矩阵;Wherein: G is taken as the Gaussian probability density function, and μ jm and ∑ jm are respectively the mean vector and the covariance matrix of the mth component in the Gaussian mixture probability density;
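The mixture emission density bj(Ok) = Σm cjm · G(Ok, μjm, ∑jm) described above can be sketched in Python as follows. This is a minimal illustration only: it assumes diagonal covariance matrices (so ∑jm reduces to per-dimension variances), and the function names are hypothetical, not part of the patent.

```python
import math

def gaussian_pdf(o, mean, var):
    """Diagonal-covariance Gaussian density G(o, mu, Sigma)."""
    p = 1.0
    for x, m, v in zip(o, mean, var):
        p *= math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return p

def emission_density(o, weights, means, variances):
    """b_j(o) = sum over m of c_jm * G(o, mu_jm, Sigma_jm)."""
    assert abs(sum(weights) - 1.0) < 1e-9  # mixing proportions sum to 1
    return sum(c * gaussian_pdf(o, mu, var)
               for c, mu, var in zip(weights, means, variances))
```

For a single standard-normal component evaluated at its mean, this returns 1/√(2π), as expected of a unit-variance Gaussian.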
对应于同一个手势词的K组训练数据O=[O(1),O(2),...,O(K)],其中O(k)为第k组观察序列;K groups of training data O = [O(1), O(2), ..., O(K)] correspond to the same gesture word, where O(k) is the k-th observation sequence;
πi,aij和cjm,μm,∑m的重估公式为:The revaluation formulas of π i , a ij and c jm , μ m , ∑ m are:
即在时刻t=1时,状态结点i出现的期望概率;That is, at time t=1, the expected probability of state node i appearing;
分子部分为从状态结点i转移到状态结点j的期望概率;The numerator part is the expected probability of transferring from state node i to state node j;
分母部分为从状态结点i转移的期望概率;The denominator part is the expected probability of transferring from state node i;
分子部分为在状态结点j的第m个分支出现的期望概率;The numerator part is the expected probability of appearing in the mth branch of the state node j;
分母部分为在状态结点j出现的期望概率;The denominator part is the expected probability of appearing at state node j;
分子部分为在状态结点j第m个分支出现观察序列O的期望概率;The numerator part is the expected probability of observation sequence O appearing in the m-th branch of state node j;
分母部分为在状态结点j第m个分支出现的期望概率;The denominator part is the expected probability that the mth branch of the state node j appears;
分子部分为在状态结点j第m个分支出现观察序列O的均方差的期望概率;The numerator part is the expected probability of the variance of observation sequence O appearing in the m-th branch of state node j;
分母部分为在状态结点j第m个分支出现的期望概率。The denominator part is the expected probability that the mth branch of state node j appears.
在提取手语词语数据的特征信息后,再根据该特征信息进行手语连续语句识别,然后记录中间模式语言数据的识别结果;具体的方法为:After extracting the feature information of the sign language word data, the sign language continuous sentence recognition is performed according to the feature information, and then the recognition result of the intermediate mode language data is recorded; the specific method is:
采用半连续隐马尔可夫模型(Semi-Continuous Hidden Markov Model,简称SCHMM)分别处理手形,位置和方向,以便减少码本的数目,然后通过建立位置,方向和手形的多维字母串来描述手语。The semi-continuous hidden Markov model (Semi-Continuous Hidden Markov Model, referred to as SCHMM) is used to process the hand shape, position and direction separately, so as to reduce the number of codebooks, and then describe sign language by establishing a multi-dimensional letter string of position, direction and hand shape.
首先,用单数据流对所有词建立连续的模型,对所有状态结点上的均值向量的左右手形、右手相对左手位置、右手相对左手方向和三个接收器之间的距离等六部分数据分别聚类,聚类的步骤如下:First, a continuous model is built for every word using a single data stream, and six parts of the data in the mean vectors at all state nodes — the left and right hand shapes, the position of the right hand relative to the left hand, the orientation of the right hand relative to the left hand, and the distances between the three receivers — are clustered separately. The clustering steps are as follows:
初始化:从训练向量集合中任意选出多个向量作为初始码本;Initialization: Randomly select multiple vectors from the training vector set as the initial codebook;
寻找最接近的码字:对每个训练向量,在当前码本中寻找与之最接近的码字向量,并把这个向量分配到该码字所对应的集合中;Find the closest codeword: For each training vector, find the closest codeword vector in the current codebook, and assign this vector to the set corresponding to the codeword;
码字的修正:将码字修正为其所对应的集合中所有训练向量的均值;Correction of the codeword: Correct the codeword to the mean value of all training vectors in the corresponding set;
重复上述的两步骤,直至各类中向量的均方差低于给定阈值。Repeat the above two steps until the mean square error of the vectors in each cluster falls below a given threshold.
得到聚类中心后,再对所有模型的每个状态结点的均值向量进行量化。每个状态结点只记录与之距离最近的码本的序号。在识别时,对每帧识别数据只需计算它与各码本的距离,用与该结点所纪录的码本的距离来代替与该结点的均值向量的距离。After the cluster centers are obtained, the mean vectors of each state node of all models are quantized. Each state node only records the sequence number of the codebook closest to it. During identification, for each frame of identification data, it is only necessary to calculate the distance between it and each codebook, and use the distance to the codebook recorded by the node to replace the distance to the mean vector of the node.
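The codebook training and state-node quantization steps above can be sketched as follows. This is a hedged, pure-Python illustration under simplifying assumptions: it runs a fixed number of assignment/update iterations instead of the patent's MSE-threshold stopping rule, and the function names are illustrative.

```python
def nearest(codebook, v):
    """Index of the codeword closest (squared Euclidean) to vector v."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def train_codebook(vectors, codebook, iters=20):
    """Alternate nearest-codeword assignment and mean update."""
    for _ in range(iters):
        buckets = [[] for _ in codebook]
        for v in vectors:                     # assign each training vector
            buckets[nearest(codebook, v)].append(v)
        for i, bucket in enumerate(buckets):  # move codeword to bucket mean
            if bucket:
                codebook[i] = [sum(c) / len(bucket) for c in zip(*bucket)]
    return codebook

def quantize(codebook, v):
    """A state node stores only the index of its nearest codeword."""
    return nearest(codebook, v)
```

With vectors {0, 1, 10, 11} and initial codewords {0, 10}, the codebook converges to the cluster means 0.5 and 10.5.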
上述的模型库建好以后,用韦特比(Viterbi)译码方法计算测试样本与各种可能的模型序列的似然概率,概率值最大的模型序列所对应的词序列即为识别结果。After the above model library is built, use the Viterbi decoding method to calculate the likelihood probability of the test sample and various possible model sequences, and the word sequence corresponding to the model sequence with the largest probability value is the recognition result.
设词汇表的容量为V,词的模型编号为k=1,...,V,对应的模型参数为(πk,Ak,ck,μk,Uk),每个词的状态数均为L,输入手语帧序列编号为i=1,2,...,N;Suppose the capacity of the vocabulary is V, the word models are numbered k = 1, ..., V with corresponding model parameters (πk, Ak, ck, μk, Uk), each word has L states, and the input sign language frames are numbered i = 1, 2, ..., N;
当词汇在同一模型内转移时(j>1):When vocabulary is transferred within the same model (j > 1):
T(i,j,k)=kT(i,j,k)=k
当词汇在模型的边界(j=1)发生转移时:When the vocabulary is transferred at the boundary of the model (j=1):
开始时(i=1),Pr(1,k)=p(1,1,k),Pr(j,k)=0,j>1At the beginning (i=1), Pr(1,k)=p(1,1,k), Pr(j,k)=0, j>1
T(1,j,k)=-1T(1,j,k)=-1
F(1,i,k)=-1F(1,i,k)=-1
可用上述公式递归地求出各个Pr(L,k),从而可得到全局最大概率:The above formula can be used to recursively calculate each Pr(L, k), so that the global maximum probability can be obtained:
由T(i,j,k)和F(i,j,k)回溯出最佳路径(Ti,Fi)(倒序):The optimal path (T i , F i ) is backtracked from T(i, j, k) and F( i , j, k ) (in reverse order):
FN = L
Ti=T(i+1,Ti+1,Fi+1), N-1≥i≥1T i =T(i+1, T i+1 , F i+1 ), N-1≥i≥1
Fi=F(i+1,Ti+1,Fi+1), N-1≥i≥1F i =F(i+1, T i+1 , F i+1 ), N-1≥i≥1
得到识别结果;其中,Get the recognition result; among them,
p(i,j,k):为在词k的第j个状态出现第i帧的概率;p(i, j, k): the probability of the i-th frame appearing in the j-th state of word k;
Pr(j,k):为到当前输入帧i为止,从开始经状态转移到词k的第j个状态的最大概率:Pr(j, k): until the current input frame i, the maximum probability of transitioning from the start state to the jth state of the word k:
T(i,j,k):记录前一帧所在的模型的序号;T(i, j, k): record the serial number of the model where the previous frame is located;
F(i,j,k):记录前一帧在模型T(i,j,k)中所处的状态。F(i, j, k): Record the state of the previous frame in the model T(i, j, k).
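A heavily simplified sketch of the Viterbi scoring and word selection described above is given below. It uses toy single-stream models and omits the T(i,j,k)/F(i,j,k) backtracking bookkeeping; the model structure and names are illustrative assumptions, not the patent's full connected-word decoder.

```python
def viterbi_score(frames, init, trans, emit):
    """prob[j] after each frame = max-path probability ending in state j,
    i.e. the Pr(j, k) recursion for one word model k."""
    L = len(init)
    prob = [init[j] * emit(frames[0], j) for j in range(L)]
    for f in frames[1:]:
        prob = [max(prob[k] * trans[k][j] for k in range(L)) * emit(f, j)
                for j in range(L)]
    return max(prob)

def recognize(frames, models):
    """Pick the word whose model yields the largest likelihood."""
    return max(models, key=lambda w: viterbi_score(frames, *models[w]))
```

With two one-state models whose emission probabilities favor frames 'a' and 'b' respectively, the frame sequence "aab" is recognized as the 'a'-favoring word.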
为了提高识别精度,在Viterbi搜索过程中嵌入二阶马尔可夫链(Bigram),即:语句的先验概率可用下式计算:P(W) = P(w1)·P(w2|w1)·…·P(wn|wn-1);In order to improve recognition accuracy, a bigram (second-order Markov chain) is embedded in the Viterbi search process; that is, the prior probability of a sentence can be calculated as P(W) = P(w1) · P(w2|w1) · … · P(wn|wn-1), where:
其中,in,
W为被识别的语句;W is the sentence being recognized;
w1,w2,...wn,为被识别的语句W中的各字;w 1 , w 2 ,...w n are each word in the recognized sentence W;
P(wi|wi-1)为字对的出现频度。P(w i |wi -1 ) is the occurrence frequency of word pairs.
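The bigram sentence prior described above — the product of word-pair frequencies P(wi | wi-1) — can be sketched as follows; the dictionary-based representation of the frequencies is an assumption for illustration.

```python
def sentence_prior(words, unigram, bigram):
    """P(W) = P(w1) * product over i of P(w_i | w_{i-1}).
    unigram: {word: P(word)}; bigram: {(prev, cur): P(cur | prev)}."""
    p = unigram.get(words[0], 0.0)
    for prev, cur in zip(words, words[1:]):
        p *= bigram.get((prev, cur), 0.0)  # unseen pairs get probability 0
    return p
```

In a real decoder these probabilities would be combined with the acoustic (here, gesture) likelihood inside the Viterbi search rather than computed separately.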
由此即可实现将手语的动作信息转换为中间模式语言的语句。The motion information of the sign language is thereby converted into sentences in the intermediate mode language.
当需要将输入的手语翻译为非手语语句时(例如:语音语句),则根据中间模式语言数据与相应的非手语语言的对应关系,将中间模式语言数据转换为该非手语语言词语并输出。在本实施例中,作为手语到非手语之间的中间模式语言为文字文本;以语音输出为例:当需要语音输出时,可利用语音合成将文本中的文字表达出来。其中,语音合成可通过将孤立词的音频数据作简单的平滑处理后进行连接播出。具体的语音合成方法则采用现有的以文本文件合成语音的系统。When the input sign language needs to be translated into a non-sign-language sentence (for example, a spoken sentence), the intermediate mode language data is converted into words of that non-sign language and output, according to the correspondence between the intermediate mode language data and the non-sign language. In this embodiment, the intermediate mode language between sign language and non-sign language is written text. Taking speech output as an example: when speech output is required, the words in the text can be rendered by speech synthesis, where the audio data of isolated words are simply smoothed and then concatenated for playback; the concrete speech synthesis method uses an existing system for synthesizing speech from text files.
实施例2:语音转变为手语输出Embodiment 2: Speech is converted into sign language output
参见图3和图4,将非手语语言通过一中间模式语言数据形式翻译为手语的具体方法为:Referring to Fig. 3 and Fig. 4, the specific method of translating non-sign language language into sign language through an intermediate mode language data form is:
首先,采集非手语语言词语数据,利用现有的语音识别技术将口语转换为中间模式语言数据;本实施例中,该语音识别可利用IBM公司开发的语音数据开发工具实现。First, collect non-sign language word data, and use the existing speech recognition technology to convert spoken language into intermediate mode language data; in this embodiment, the speech recognition can be realized using the speech data development tool developed by IBM.
然后,根据中间模式语言数据与该非手语语言的对应关系,将该非手语语言词语转换为中间模式语言数据并记录;本实施例中,所采用的中间模式语言存储为文字文本。Then, according to the corresponding relationship between the intermediate pattern language data and the non-sign language language, the non-sign language words are converted into intermediate pattern language data and recorded; in this embodiment, the adopted intermediate pattern language is stored as text.
最后,根据该文字文本,在手语词语库中找到相应的手语词语数据,再将该手语词语数据合成为手语图像信息输出;具体的实现包括:文本的输入、分析、切分、以文本表示的自然语言到手语码的转换以及手语词语数据合成为手语图像信息等步骤。Finally, according to the text, the corresponding sign language word data is found in the sign language vocabulary database and synthesized into sign language image information for output. The concrete implementation includes the steps of text input, analysis and segmentation, conversion of the natural language represented by the text into sign language codes, and synthesis of the sign language word data into sign language image information.
其中,文本的输入、分析、切分可利用现有的自然语言识别方法实现,然后将切分后的自然语言词语与手语词库中相应的手语词语相对应,获得该手语词语的手语特征数据。该手语特征数据用于最后合成相应的手语图像。Among them, the input, analysis, and segmentation of the text can be realized by using the existing natural language recognition method, and then the segmented natural language words correspond to the corresponding sign language words in the sign language lexicon, and the sign language feature data of the sign language words are obtained . The sign language feature data is used to finally synthesize corresponding sign language images.
上述手语图像合成的具体方法为:The concrete method of above-mentioned sign language image synthesis is:
首先,采用虚拟现实建模语言(Virtual Reality Modeling Language,简称VRML)的人体表示模型建立虚拟人;确定该虚拟人各自由度的角度值,并计算虚拟人每个肢体的位置和方向,确定出虚拟人的一个姿态;First, a virtual human is built with the human-body representation model of the Virtual Reality Modeling Language (VRML); the angle values of each degree of freedom of the virtual human are determined, and the position and orientation of each limb are calculated to determine one posture of the virtual human.
由于手语是人体上肢运动,手语运动是人体运动在人体上肢关节上的投影,因此,显示手语(即将手语映射到虚拟人姿态)时,可将一个手语运动表示扩充为一个完整的人体运动表示;也就是说,手语的显示可以使用人体运动显示的通用方法进行,因此,通过忽略上述步骤获得的虚拟人姿态的非上肢关节角度,就可以获得该虚拟人的一个手语姿态;或者说通过忽略上述步骤获得的虚拟人姿态的非上肢关节角度,就可以将虚拟人的一个手语姿态以一个完整的人体运动姿态表示。Since sign language is an upper-limb movement of the human body — a projection of whole-body motion onto the upper-limb joints — displaying sign language (i.e., mapping it onto the virtual human's posture) can be done by expanding a sign-language motion representation into a complete human-motion representation. In other words, sign language can be displayed with the general method for displaying human motion: by ignoring the non-upper-limb joint angles of the virtual-human posture obtained in the above steps, a sign-language posture of the virtual human is obtained; equivalently, a sign-language posture of the virtual human is represented as a complete human-motion posture.
在获得所有的手语运动姿态数据以后,再按照规定的时间间隔连续显示一个手语运动中的每一个姿态,生成相应的手语运动图像。After obtaining all the gesture data of the sign language movement, each gesture in a sign language movement is continuously displayed according to the specified time interval, and a corresponding sign language movement image is generated.
在生成手语运动图像时,还进一步在手语运动图像的相邻帧之间进行平滑插值;具体的插值根据如下的公式计算:When generating the sign language moving image, smooth interpolation is further performed between adjacent frames of the sign language moving image; the specific interpolation is calculated according to the following formula:
其中,in,
f1和f2分别为一个手语运动中两个相邻图像帧;f 1 and f 2 are two adjacent image frames in a sign language movement;
tf1和tf2分别为f1和f2距起始点的时间值;t f1 and t f2 are the time values of f 1 and f 2 from the starting point respectively;
tf为被插入帧距起始点的时间值;t f is the time value of the inserted frame from the starting point;
t1为起始点的时间值;t 1 is the time value of the starting point;
Gi(tf)为被插入的自由度曲线函数值;G i (t f ) is the function value of the inserted degree of freedom curve;
Gi(t1)为起始点的自由度曲线函数值;G i (t 1 ) is the function value of the degree of freedom curve at the starting point;
Gi(tf1)为f1的自由度曲线函数;G i (t f1 ) is the degree of freedom curve function of f 1 ;
Gi(tf2)为f2的自由度曲线函数值。G i (t f2 ) is the function value of the degree of freedom curve of f 2 .
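The patent's exact interpolation formula is not reproduced in the text above, but given the quantities it defines, a standard linear interpolation of a degree-of-freedom curve between two adjacent frames can be sketched as follows (a hedged illustration, not the patent's literal formula):

```python
def interp_dof(t_f, t_f1, t_f2, g_f1, g_f2):
    """Linearly interpolate a joint's degree-of-freedom value G_i at time t_f
    between adjacent frames f1 (value g_f1 at t_f1) and f2 (g_f2 at t_f2)."""
    alpha = (t_f - t_f1) / (t_f2 - t_f1)  # fraction of the way from f1 to f2
    return g_f1 + alpha * (g_f2 - g_f1)
```

Midway between two frames, the interpolated value is simply the average of the two frame values.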
在生成手语运动图像时,还进一步采用基于四元组的运动插值方法对不连续手语帧中的复杂关节进行平滑过渡处理,具体的平滑处理根据如下的公式计算:When generating sign language motion images, a quadruple-based motion interpolation method is further used to perform smooth transition processing on complex joints in discontinuous sign language frames. The specific smoothing processing is calculated according to the following formula:
其中,in,
f1和f2分别为一个手语运动中两个相邻手势的图像帧;f 1 and f 2 are image frames of two adjacent gestures in a sign language movement;
tf1和tf2分别为f1和f2距起始点的时间值;t f1 and t f2 are the time values of f 1 and f 2 from the starting point respectively;
qf1和qf2分别为关节在tf1,tf2时刻的方向;q f1 and q f2 are the directions of joints at t f1 and t f2 respectively;
tf为插入帧距起始点的时间值;t f is the time value of the insertion frame from the starting point;
θ为qf1与qf2之间的夹角,并且由qf1·qf2=cosθ确定。θ is the angle between qf1 and qf2 and is determined by qf1 · qf2 = cos θ.
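The quaternion-based smoothing above corresponds to spherical linear interpolation (slerp) between the joint orientations qf1 and qf2, with cos θ given by their dot product. A minimal pure-Python sketch (assuming unit quaternions as 4-tuples; the fallback for nearly identical orientations is an implementation choice, not from the patent):

```python
import math

def slerp(q1, q2, alpha):
    """Spherical linear interpolation between unit quaternions q1 and q2,
    with interpolation fraction alpha in [0, 1]."""
    dot = sum(a * b for a, b in zip(q1, q2))
    dot = max(-1.0, min(1.0, dot))          # guard against rounding
    theta = math.acos(dot)                  # q1 . q2 = cos(theta)
    if theta < 1e-6:                        # nearly identical orientations
        return q1
    w1 = math.sin((1 - alpha) * theta) / math.sin(theta)
    w2 = math.sin(alpha * theta) / math.sin(theta)
    return tuple(w1 * a + w2 * b for a, b in zip(q1, q2))
```

Halfway between two perpendicular unit quaternions, the result lies on the unit sphere midway between them.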
以上的两个实施例中仅给出了手语运动的识别和合成方法。事实上,手语语言中通常还包含着人体面部表情信息和唇动信息;并且,在对手语进行识别或者合成时,往往还需要对具体的表达方的面部表情特征和唇动信息给予描述。In the above two embodiments, only the recognition and synthesis methods of sign language movements are given. In fact, sign language usually also contains human facial expression information and lip movement information; moreover, when recognizing or synthesizing sign language, it is often necessary to describe the facial expression characteristics and lip movement information of the specific expression party.
参见图4,在本发明的实施例中还进一步对表达方特定的人脸进行检测获取相应的人脸特征及唇动信息,并且在输出端进行合成,与手语语句同步输出。具体的人脸及唇动信息的检测和合成的方法如下:Referring to Fig. 4, in the embodiment of the present invention, the specific face of the expressing party is further detected to obtain the corresponding facial features and lip movement information, and synthesized at the output end, and output synchronously with the sign language sentence. The specific detection and synthesis methods of face and lip movement information are as follows:
首先,提取该人脸信息中的特征数据,它至少包括:正面人脸特征的检测和侧面人脸特征点的检测;其中,First, extract feature data in the face information, which at least includes: detection of frontal face features and detection of side face feature points; wherein,
正面人脸特征的检测至少包括:面部特征粗定位、关键特征点检测和基于变形模板的特征形状检测;The detection of front face features at least includes: rough positioning of facial features, key feature point detection and feature shape detection based on deformed templates;
侧面人脸特征点检测至少包括:人脸侧面轮廓线的提取和人脸侧面特征点的检测。The detection of side-face feature points includes at least the extraction of the face profile contour line and the detection of the side-face feature points.
所述的面部特征粗定位为:首先定位人眼虹膜的位置,然后根据两个虹膜中心点的位置数据、面部器官结构的统计先验数据和面部灰度分布特性获得该人脸其他器官的位置数据。The rough localization of facial features is as follows: first locate the position of the iris of the human eye, and then obtain the positions of other organs of the human face according to the position data of the two iris center points, the statistical prior data of the facial organ structure and the facial gray distribution characteristics data.
人脸关键特征点的检测为:获取眼角点、嘴角点、下巴曲线上的主要特征点,作为相应的器官模板参数的初值;具体包括:眼睛关键点的检测,嘴部关键点的检测和下巴关键点的检测;其中:眼睛关键点包括左右眼角点和上下眼皮的界限点;嘴部关键点包括两个嘴角点、上唇最高点和下唇最低点;下巴的关键点包括左右嘴角的延长线与下巴的交点、过中唇点的垂线与下巴的交点、过左右嘴角点的垂线与下巴的交点、过左嘴角点往左下45度直线与下巴的交点、过右嘴角点往右下45度直线与下巴的交点。The detection of the key feature points of the face obtains the main feature points at the eye corners, the mouth corners and on the chin curve as initial values of the corresponding organ template parameters. It specifically includes the detection of the key points of the eyes, mouth and chin, where the eye key points include the left and right eye corners and the boundary points of the upper and lower eyelids; the mouth key points include the two mouth corners, the highest point of the upper lip and the lowest point of the lower lip; and the chin key points include the intersections with the chin of the extension line of the two mouth corners, of the vertical line through the mid-lip point, of the vertical lines through the left and right mouth corners, of the line running 45 degrees down-left from the left mouth corner, and of the line running 45 degrees down-right from the right mouth corner.
基于变形模板的特征形状检测包括:对眼睛区域特征形状进行检测,获得眼睛模板参数;对嘴部形状进行检测,获得嘴部模板参数;对下巴形状进行检测,获得下巴模板参数。The feature shape detection based on the deformed template includes: detecting the feature shape of the eye area to obtain the eye template parameters; detecting the mouth shape to obtain the mouth template parameters; detecting the chin shape to obtain the chin template parameters.
所述的人脸侧面轮廓线的提取为:利用人脸的肤色特征将人脸区域分割出来;然后采用边缘检测,并根据人脸轮廓的先验数据定位轮廓线。The extraction of the profile of the human face is as follows: using the skin color feature of the human face to segment the human face area; then adopting edge detection, and locating the contour line according to the prior data of the human face profile.
所述的人脸侧面特征点的检测为:以鼻尖点为界,将人脸轮廓线分为上下两段;通过曲线拟合,得到轮廓线的近似函数表达式,计算该函数一阶导数为零的点,将该点作为人脸侧面特征点。The detection of the side-face feature points is as follows: with the nose tip as the boundary, the face contour line is divided into an upper and a lower section; through curve fitting, an approximate function expression of the contour line is obtained, and the points where the first derivative of the function is zero are taken as the side-face feature points.
为了从特定人脸上提取特征,可在人脸模型上定义41个特征点。这里特征点可以从特定人的正面和侧面像中提取出来,特征点的自动提取属于人脸面部图象检测与分析范畴,假定已经应用分析与识别技术从特定人脸图象上提取了所需特征或变形曲线,然后将其当作对一般人脸模型的变形参数。由于一般人脸中性模型是一个三维网格体,每个特征点的三维坐标是已知的,在从一般人脸中性模型到特定人脸中性模型的修改过程中,要进行两种变换。首先要对一般人脸中性模型进行整体变换,整体变换的目的是完成面部整体轮廓的修改,使其与特定人的脸形和五官的大致位置相匹配。设变换前的人脸模型上点坐标为(x,y,z),变换后为(x′,y′,z′),变换前后脸的中心点坐标分别为o(x0,y0,z0)和o(x′0,y′0,z′0)。其中,脸的中心点o定义为两眼眼角点连线及人脸纵向中轴线之间的交点。参数p,q1,q2分别定义为中心点到鬓角,中心点到额头中点,中心点到下巴之间的距离;参数u定义为嘴部中心点到耳朵下缘距离;参数r1,r2,r3在侧面分别定义为额头最高处到发际的距离,中心点到耳朵上缘的距离,嘴角点到耳朵下边缘的距离。In order to extract features from a specific face, 41 feature points can be defined on the face model. Here the feature points can be extracted from the front and side images of a specific person. The automatic extraction of feature points belongs to the category of face image detection and analysis. It is assumed that the analysis and recognition technology has been applied to extract the required features or deformation curves, which are then treated as deformation parameters for general face models. Since the general face-neutral model is a 3D mesh body, and the 3D coordinates of each feature point are known, two transformations are required during the modification process from the general face-neutral model to the specific face-neutral model. Firstly, an overall transformation should be performed on the neutral model of the general face. The purpose of the overall transformation is to complete the modification of the overall contour of the face so that it matches the shape of the face and the approximate position of the facial features of a specific person. Suppose the point coordinates on the face model before transformation are (x, y, z), and after transformation are (x′, y′, z′), and the coordinates of the center point of the face before and after transformation are o(x 0 , y 0 , z 0 ) and o(x′ 0 , y′ 0 , z′ 0 ). 
Among them, the center point o of the face is defined as the intersection point between the line connecting the corners of the two eyes and the longitudinal central axis of the face. Parameters p, q1, and q2 are respectively defined as the distance from the center point to the sideburns, from the center point to the midpoint of the forehead, and from the center point to the chin; parameter u is defined as the distance from the center point of the mouth to the lower edge of the ear; parameters r1, r2, r3 On the side, it is defined as the distance from the highest point of the forehead to the hairline, the distance from the center point to the upper edge of the ear, and the distance from the corner of the mouth to the lower edge of the ear.
对于上半部分脸(眼睛水平线以上的部分)采用如下修改公式:For the upper part of the face (the part above the eye level), the following modification formulas are used:
x′=x0′+p′/p(x-x0)x′=x 0 ′+ p′ / p (xx 0 )
y′=y0′+q1′/q1(y-y0)y'=y 0 '+ q1' / q1 (yy 0 )
z′=z0′+r1′/r1(z-z0)z'=z 0 '+ r1' / r1 (zz 0 )
对于脸的中间部分和下部可以类似地修改。The middle and lower parts of the face can be modified similarly.
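The upper-face formulas above scale each coordinate about the face center by the ratio of the measured feature distances (p′/p, q1′/q1, r1′/r1). A minimal sketch, with the function name and argument layout as illustrative assumptions:

```python
def transform_upper_face(pt, center, center_new, ratios):
    """Apply x' = x0' + (p'/p)(x - x0) per axis: scale the offset from the
    old face center by the feature-distance ratio, then re-center."""
    return tuple(c2 + r * (v - c1)
                 for v, c1, c2, r in zip(pt, center, center_new, ratios))
```

The middle and lower face regions would use the same pattern with their own ratios (q2′/q2, r3′/r3, and so on).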
局部变换中的眼睛区域上点的修改公式如下:The modification formula of the point on the eye area in the local transformation is as follows:
设:(x,y,z)为变换前眼睛区域点的坐标,(x′,y′,z′)为变换后眼睛区域点的坐标,则有Let: (x, y, z) be the coordinates of the eye area points before transformation, (x′, y′, z′) be the coordinates of the eye area points after transformation, then we have
x′=ax+by+czx'=ax+by+cz
y′=dx+ey+fzy′=dx+ey+fz
其中,变量a,b,c,d,e,f可通过代入变换前后的三组特征点,解六元线性方程组得到。经过这个变换,可以实现眼睛位置的微小移动及其形状的改变。Among them, the variables a, b, c, d, e and f can be obtained by substituting three pairs of feature points from before and after the transformation and solving the resulting system of six linear equations. This transformation accommodates small displacements of the eyes and changes in their shape.
上式看出,修改仅发生在x和y方向上,而没有进行深度方面的修改,因此不能很好地反映特征人脸的侧面信息。同理可进行眉毛和嘴部的修改。It can be seen from the above formula that the modification only occurs in the x and y directions, without modification in depth, so the side information of the feature face cannot be well reflected. In the same way, eyebrows and mouth can be modified.
对鼻子部分的变换公式如下:设变换前后鼻子区域点的坐标分别为(x,y,z)和(x′,y′,z′),鼻子的中心点坐标分别为(x0,y0,z0)和(x′0,y′0,z′0),The transformation formula for the nose part is as follows: Let the coordinates of the nose area points before and after the transformation be (x, y, z) and (x′, y′, z′), respectively, and the coordinates of the center point of the nose are (x 0 , y 0 , z 0 ) and (x′ 0 , y′ 0 , z′ 0 ),
x′=x0′+p′/p(x-x0)x′=x 0 ′+ p′ / p (xx 0 )
y′=y0′+q′/q(y-y0)y'=y 0 '+ q' / q (yy 0 )
z′=z0′+r′/r(z-z0)z′=z 0 ′+ r′ / r (zz 0 )
完成了整体变换和局部变换之后,就得到了一张基本具有特定人脸特征的中性三维人脸网格体。After the overall transformation and local transformation are completed, a neutral 3D face mesh with specific facial features is obtained.
人脸图像中的唇部模型采用两条抛物线拟合上唇线,一条抛物线拟合下唇线,选择两个嘴角点,两上唇抛物线的最高点,下唇抛物线的最低点,两上唇抛物线的交点。另外在下唇抛物线上增加两点,上唇抛物线上增加两点,在两个嘴角点的连线上增加三组每两个可重合的点。对于张嘴形式,各对重合点分离,分别成为上下内唇抛物线上的各点。唇部的外轮廓的抛物线方程满足:The lip model in the face image uses two parabolas to fit the upper lip line, one parabola to fit the lower lip line, select two corner points of the mouth, the highest point of the two upper lip parabolas, the lowest point of the lower lip parabola, and the intersection point of the two upper lip parabolas . In addition, add two points on the parabola of the lower lip, add two points on the parabola of the upper lip, and add three groups of two overlapping points on the connection line between the two corner points of the mouth. For the mouth-opening form, each pair of coincident points separates and becomes each point on the upper and lower inner lip parabolas respectively. The parabolic equation for the outer contour of the lip satisfies:
y=a(x-b)2+cy=a(xb) 2 +c
其中,系数a,b,c可以通过将已知点的坐标代入上述方程解得。Among them, the coefficients a, b and c can be solved by substituting the coordinates of the known points into the above equation.
动态的唇动模型利用上述张口模型,描述为五条相互关联起来的抛物线,其中包括上唇两条,下唇一条,上下内唇各一条。在进行参数驱动的唇动合成中,给出纵、横方向上开口的距离,或者给出嘴角点以及上下唇的最高点,即可确定相应的唇部开口状态。The dynamic lip movement model uses the above-mentioned mouth opening model and is described as five interrelated parabolas, including two for the upper lip, one for the lower lip, and one for the upper and lower inner lips. In the parameter-driven lip motion synthesis, the corresponding lip opening state can be determined by giving the distance of the opening in the vertical and horizontal directions, or giving the corner points of the mouth and the highest points of the upper and lower lips.
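Solving for the lip-parabola coefficients a, b, c in y = a(x-b)² + c from three known points can be sketched as follows: expand to y = Ax² + Bx + C (with A = a, B = -2ab, C = ab² + c), fit the quadratic through the three points, then convert back. The function name and point layout are illustrative assumptions.

```python
def fit_lip_parabola(p1, p2, p3):
    """Recover a, b, c of y = a(x-b)^2 + c from three points with
    distinct x-coordinates, via divided differences."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    s12 = (y2 - y1) / (x2 - x1)
    s13 = (y3 - y1) / (x3 - x1)
    A = (s13 - s12) / (x3 - x2)        # leading coefficient: A = a
    B = s12 - A * (x1 + x2)            # B = -2ab
    C = y1 - A * x1 ** 2 - B * x1      # C = a*b^2 + c
    a = A
    b = -B / (2 * A)                   # vertex x-coordinate
    c = C - A * b ** 2                 # vertex y-coordinate
    return a, b, c
```

For example, the points (0, 5), (1, 3), (2, 5) lie on y = 2(x-1)² + 3, and the function recovers a = 2, b = 1, c = 3.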
在汉语中,语言中每一个能很自然分辨的语音单位为一个音节,通常一个汉字就是一个音节,一般一个音节是由声母和韵母组成的,发音时声母持续时间很短,然后迅速滑到韵母口型,在汉语拼音中,声母有19个,韵母39个,韵母又分为单韵母、复韵母和鼻韵母。单韵母发音时,舌位唇形在整个发音过程中不变,所以可以看作一个口型。在汉语发音时常见口型基础上定义几个基本口型,交互地改变口型参数,调整皮肤网格体上嘴部区域上的网格点位置,形成代表基本口型的网格体并预先存储下来。根据上面描述的基本口型,可以衍生出一个韵母口型库,衍生规则如下:In Chinese, every naturally distinguishable phonetic unit is a syllable, and usually one Chinese character is one syllable. A syllable generally consists of an initial and a final; during pronunciation the initial lasts only briefly before gliding quickly into the mouth shape of the final. In Chinese Pinyin there are 19 initials and 39 finals, and the finals are divided into single finals, compound finals and nasal finals. When a single final is pronounced, the tongue position and lip shape remain unchanged throughout, so it can be regarded as one mouth shape. Several basic mouth shapes are defined from the common mouth shapes of Chinese pronunciation; the mouth-shape parameters are changed interactively to adjust the positions of the grid points in the mouth region of the skin mesh, forming meshes that represent the basic mouth shapes, which are stored in advance. From these basic mouth shapes, a final mouth-shape library can be derived according to the following rules:
(1)单韵母发音已有基本口型与之对应(1) There are basic mouth shapes corresponding to the single final pronunciation
(2)对于复韵母和鼻韵母发音口型,可以拆成多个基本口型的线性组合。(2) For the pronouncing mouth shapes of compound finals and nasal finals, it can be split into a linear combination of multiple basic mouth shapes.
对所有复韵母和鼻韵母,都可以得到对应的口型参数,这样就构成了韵母口型库。合成时,根据汉语拼音,找出声母和韵母对应的口型,然后合成出来,必要时,可以在口型之间进行插值,平滑嘴唇的变化。For all compound finals and nasal finals, the corresponding mouth shape parameters can be obtained, thus forming a finals mouth shape library. When synthesizing, find out the mouth shapes corresponding to the initials and finals according to the Chinese pinyin, and then synthesize them. If necessary, you can interpolate between the mouth shapes to smooth the changes of the lips.
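Deriving a compound- or nasal-final mouth shape as a linear combination of basic single-final shapes, as described above, can be sketched as a per-point weighted blend of mesh coordinates; the flat-list mesh representation and the weights are illustrative assumptions.

```python
def blend_mouth_shapes(shapes, weights):
    """Weighted linear combination of basic mouth shapes.
    Each shape is a flat list of mesh-point coordinates of equal length."""
    out = [0.0] * len(shapes[0])
    for shape, w in zip(shapes, weights):
        for i, v in enumerate(shape):
            out[i] += w * v
    return out
```

The same blending, applied over time between successive mouth shapes, gives the inter-shape interpolation used to smooth the lip movement.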
最后所应说明的是:以上实施例仅用以说明而非限制本发明的技术方案,尽管参照上述实施例对本发明进行了详细说明,本领域的普通技术人员应当理解:依然可以对本发明进行修改或者等同替换,而不脱离本发明的精神和范围的任何修改或局部替换,其均应涵盖在本发明请求保护的技术方案范围当中。Finally, it should be noted that the above embodiments are intended to illustrate rather than limit the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the invention may still be modified or equivalently replaced, and any modification or partial replacement that does not depart from the spirit and scope of the present invention shall fall within the scope of the technical solutions claimed by the present invention.
Claims (27)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 02121369 CN1246793C (en) | 2002-06-17 | 2002-06-17 | Method of hand language translation through a intermediate mode language |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 02121369 CN1246793C (en) | 2002-06-17 | 2002-06-17 | Method of hand language translation through a intermediate mode language |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN1464433A CN1464433A (en) | 2003-12-31 |
| CN1246793C true CN1246793C (en) | 2006-03-22 |
Family
ID=29742946
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 02121369 Expired - Fee Related CN1246793C (en) | 2002-06-17 | 2002-06-17 | Method of hand language translation through a intermediate mode language |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN1246793C (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100332229A1 (en) * | 2009-06-30 | 2010-12-30 | Sony Corporation | Apparatus control based on visual lip share recognition |
| CN102737397B (en) * | 2012-05-25 | 2015-10-07 | 北京工业大学 | Rhythmic head movement synthesis method based on motion offset mapping |
| CN106203235B (en) * | 2015-04-30 | 2020-06-30 | 腾讯科技(深圳)有限公司 | Living body identification method and apparatus |
| CN108629241B (en) * | 2017-03-23 | 2022-01-14 | 华为技术有限公司 | Data processing method and data processing equipment |
| CN108766434B (en) * | 2018-05-11 | 2022-01-04 | 东北大学 | Sign language recognition and translation system and method |
| CN109166409B (en) * | 2018-10-10 | 2021-02-12 | 长沙千博信息技术有限公司 | Sign language conversion method and device |
| WO2021218750A1 (en) * | 2020-04-30 | 2021-11-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | System and method for translating sign language |
- 2002-06-17 CN CN 02121369 patent/CN1246793C/en not_active Expired - Fee Related
Also Published As
| Publication number | Publication date |
|---|---|
| CN1464433A (en) | 2003-12-31 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1241168C (en) | Learning apparatus, learning method, and robot apparatus | |
| CN1879147A (en) | Text-to-speech method and system, computer program product therefor | |
| CN100347741C (en) | Mobile speech synthesis method | |
| CN1215433C (en) | Online character identifying device, method and program and computer readable recording media | |
| CN1462428A (en) | voice processing device | |
| CN1194337C (en) | Voice identifying apparatus and method, and recording medium with recorded voice identifying program | |
| CN1196103C (en) | Voice identifying apparatus and method, and recording medium with recorded voice identifying program | |
| CN1941077A (en) | Apparatus and method speech recognition of character string in speech input | |
| CN1161687C (en) | Handwriting Matching Technology | |
| CN1143263C (en) | System and method for recognizing tonal languages | |
| CN1328321A (en) | Apparatus and method for providing information by speech | |
| CN1725295A (en) | Speech processing apparatus, speech processing method, program, and recording medium | |
| CN101046960A (en) | Apparatus and method for processing voice in speech | |
| CN1091906C (en) | Pattern recognizing method and system and pattern data processing system | |
| CN1652107A (en) | Language conversion rule generating device, language conversion device and program recording medium | |
| CN1653518A (en) | Speech recognition device | |
| CN101937431A (en) | Emotional voice translation device and processing method | |
| CN1453767A (en) | Speech recognition apparatus and speech recognition method | |
| CN1545693A (en) | Intonation generation method, speech synthesis device and speech server using the method | |
| CN1246793C (en) | Method of hand language translation through a intermediate mode language | |
| CN1841497A (en) | Speech synthesis system and method | |
| CN1220173C (en) | Fundamental frequency pattern generating method, fundamental frequency pattern generator, and program recording medium | |
| CN101034409A (en) | Search method for human motion based on data drive and decision tree analysis | |
| CN1462995A (en) | Speech recognition system, method and recording medium of recording speech recognition program | |
| CN1287657A (en) | Speech recognizing device and method, navigation device, portable telephone, and information processor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C06 | Publication | ||
| PB01 | Publication | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | ||
| Granted publication date: 2006-03-22; Termination date: 2020-06-17 |