
JPH02236407A - Method and device for measuring shape of object - Google Patents

Method and device for measuring shape of object

Info

Publication number
JPH02236407A
JPH02236407A (application JP1058596A)
Authority
JP
Japan
Prior art keywords
line
sight
target object
cross
sectional shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP1058596A
Other languages
Japanese (ja)
Other versions
JPH076782B2 (en)
Inventor
Toshio Ueshiba
俊夫 植芝
Fumiaki Tomita
文明 富田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology
Priority to JP1058596A
Publication of JPH02236407A
Publication of JPH076782B2
Anticipated expiration
Status: Expired - Lifetime

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To measure the shape even of an object including a curved surface that has no texture, by imaging the object with plural image pickup devices whose projection centers lie on the same specific plane and obtaining, from the images, the line of sight from each image pickup device to the boundary between the object and the background.

CONSTITUTION: The respective projection centers OL, OC, and OR of television cameras 2L, 2C, and 2R lie on the same plane 21, called an 'epipolar plane'. The projection centers OL, OC, and OR lie not only on the same epipolar plane but also on the same line (base line 22) drawn on the epipolar plane 21. Consequently, the epipolar plane 21 can be swept about the base line 22 in the rotation directions of arrows (f). The shape of the cross section of an object 10 traversed by the epipolar plane 21 is thus restored at each rotational angular position of the epipolar plane 21, and the three-dimensional shape of the object 10 is obtained precisely.

Description

[Detailed Description of the Invention]

[Industrial Application Field]

The present invention relates to a method and apparatus for imaging an object with an imaging device and measuring or restoring its shape from the obtained images, and in particular to a shape measuring method and apparatus that are effective when the cross section of the object taken along a specific plane is known in advance to be a two-dimensional convex cross-sectional shape of a particular kind.

[Prior Art]

As an apparatus configuration for capturing an object three-dimensionally with imaging devices and measuring (restoring) it, arrangements using two imaging devices placed a predetermined distance apart, following the principle of so-called binocular stereopsis, have long been known.

With conventional shape measurement methods using such an apparatus, however, corresponding points between the two images, one obtained from each of the two imaging devices, were found by, for example, taking correlations based on the texture of the surface of the object to be measured, and the three-dimensional position of the object surface was determined from these corresponding points.

[Problems to Be Solved by the Invention]

With the conventional measurement method described above, however, for an object with a uniform surface whose points carry no features that distinguish one from another, corresponding points cannot be determined uniquely because the individual points cannot be told apart. The method is therefore particularly unsuited to measuring the shape of a curved body whose two-dimensional cross section, taken along a specific plane, is a convex cross-sectional shape.

The present invention was made to solve this problem, and aims to provide a shape measuring method and apparatus applicable even to objects including curved surfaces that carry no texture.

[Means for Solving the Problems]

To achieve the above object, the present invention discloses the following configuration for measuring (restoring) the shape of a target object whose cross section, taken along a specific plane, is a convex cross-sectional shape.

First, when two or more imaging devices are used, the projection centers of all of them are placed on a specific plane.

Next, for each imaging device viewing, along that specific plane, the region in which the target object exists, the line of sight that looks at the boundary between the target object and the background is determined.

Then, calculation processing is performed so that the group of lines of sight thus determined becomes the envelope of a convex cross-sectional shape of a predetermined kind.

In addition to this basic configuration, a technique is also disclosed in which the projection centers of the imaging devices are not only all placed on the specific plane but are further aligned on a specific straight line within that plane, and the above calculation processing is carried out at each rotational angular position while the specific plane is swept rotationally about that straight line.

Furthermore, a technique is also proposed that adds a judgment of whether or not the lines of sight of the plural imaging devices within the specific plane intersect at a single point on the surface of the target object.

The present invention holds as long as there are at least two imaging devices, but a technique limited to three or more is also disclosed; in particular, with three or more devices, a technique is further proposed that adds a judgment, based on the relative positional relationship of their lines of sight, of which side of the group of lines of sight the target object lies on.

Of course, this last technique can be combined both with the technique, described earlier, of placing the projection centers of all the imaging devices on a specific straight line in the specific plane, and with the technique of judging whether or not the lines of sight intersect at a single point on the boundary of the target object.

[Operation]

In the present invention, the target object is imaged by plural imaging devices lying on the same specific plane; based on the obtained images, the line of sight with which each imaging device sees the boundary between the target object and the background is determined; these lines of sight are regarded as tangents enveloping a convex cross-sectional shape of a kind known in advance, such as a perfect circle or an ellipse; and the convex cross-sectional shape for which the lines of sight form the best envelope is obtained by calculation.

As a result of this calculation processing, the convex cross-sectional shape finally obtained matches well the cross-sectional shape that the target object presents when sectioned along the specific plane. Even for an object with a textureless curved surface, therefore, the cross-sectional shape taken along that specific plane can be determined simply and reliably.

Moreover, for a comparatively simple purpose, for example when the cross section of the target object is a perfect circle and its shape is restored in order to learn its radius, a minimum of two imaging devices suffices.

If three or more imaging devices are used, on the other hand, restoration becomes possible not only for the most common elliptical shape but also for more complicated convex cross-sectional shapes, and the measurement resolution (goodness of fit) can be raised.

In addition to the operation of this basic configuration, if the projection centers of the imaging devices are placed not only on the specific plane but also on a specific straight line, the above calculation processing can be performed at each rotational angular position while the specific plane is swept rotationally about that straight line. Cross sections of the target object are thereby taken along a plurality of mutually different specific planes, and a plurality of restored cross-sectional shapes are obtained, so that the three-dimensional shape of the target object can also be restored simply and by continuous processing.

Furthermore, when a judgment is also made of whether or not the lines of sight of the plural imaging devices within the specific plane intersect at a single point on the surface of the target object, it can be determined, before the calculation processing for convex cross-sectional shape restoration, whether the target object in fact has a convex cross-sectional shape suited to the technique of the present invention. If the lines of sight of the plural imaging devices intersect at a single point on the surface of the target object, the cross section can be judged non-convex; conversely, if the cross section is convex, the lines of sight from the plural imaging devices never converge at a single point on the surface of the target object.

In addition to this operation, particularly when three or more imaging devices are used, it also becomes possible to judge, from the relative positional relationship of the lines of sight, on which side of the group of lines of sight the target object lies.

[Example]

FIG. 1 shows a basic embodiment according to the present invention.

In this embodiment, three imaging devices are used, denoted 2L, 2C, and 2R, and each imaging device is a television camera.

The projection centers OL, OC, and OR of the television cameras 2L, 2C, and 2R all lie on the same plane 21, which may generally be called an "epipolar plane".

Further, in this embodiment, the projection centers OL, OC, and OR of the imaging devices, i.e. television cameras 2L, 2C, and 2R, not only all lie on the same epipolar plane 21 as described above but, desirably, also lie on a single straight line (base line) 22 that can be drawn on the epipolar plane 21. In other words, a state can therefore be created in which the epipolar plane 21 is swept in the rotation direction f about this base line 22. Accordingly, when the cross-sectional shape of the target object 10 in the section crossed by the epipolar plane 21 is restored by the technique described later, this can be done at each rotational angular position of the epipolar plane 21, and the three-dimensional shape of the target object 10 can be determined in fine detail.

The straight lines along which the television cameras 2L, 2C, and 2R, from their projection centers OL, OC, and OR, graze the boundary between the target object 10 and the background, that is, the contour of the target object 10, may be called the lines of sight VL, VC, and VR. FIG. 2 illustrates, for the apparatus configuration of FIG. 1, a technique for judging whether or not the contour of the cross-sectional shape of the target object 10 is convex when the object is cut by a given epipolar plane 21.

The plane of the drawing sheet of FIG. 2 corresponds to the epipolar plane 21 itself, crossing the target object 10 at an arbitrary rotational angular position about the base line 22; the symbols PL, PC, and PR in FIG. 2 denote the corresponding points on the images of the respective television cameras 2L, 2C, and 2R.

These corresponding points PL, PC, and PR can be found only at points where the brightness changes abruptly on the respective images of the television cameras 2L, 2C, and 2R. Expressed in terms of the projection centers and these corresponding points on the images, the lines of sight VL, VC, and VR shown schematically for the television cameras 2L, 2C, and 2R in FIG. 1 are the lines containing the segments OLPL, OCPC, and ORPR, respectively.

Here, if the cross section obtained by cutting the target object 10 with the epipolar plane 21 is not convex, then, as shown in FIG. 2(A), the corresponding points PL, PC, and PR on the images of the television cameras 2L, 2C, and 2R are all images of one and the same point P on the target object 10, so that the lines of sight VL, VC, and VR from all the television cameras 2L, 2C, and 2R intersect at this point P, a single point on the contour of the target object 10.

If, on the other hand, the cross section obtained by cutting the target object 10 with the epipolar plane 21 is convex, the situation becomes that shown in FIG. 2(B): the corresponding points PL, PC, and PR on the images of the television cameras 2L, 2C, and 2R are now merely apparent boundary points at which the respective cameras distinguish the target object 10 from its background, and are no longer images of one particular point on the target object 10, so the three lines of sight VL, VC, and VR never all intersect at a single common point.

In this way it can be judged whether or not the shape of the target object 10 in this cross section is convex; hence it is also easy to judge, for example, whether the target object 10 is a curved body or a polyhedron, and objects that are not convex can be removed from the observation region before the calculation processing, described later, for cross-sectional shape restoration. This is also effective in avoiding wasteful calculation on target objects that cannot be measured.
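The intersection test above can be sketched as follows. This is an illustrative fragment, not the patent's implementation, and assumes each line of sight is already given in the normal form n · x = d:

```python
import numpy as np

def meet_at_one_point(normals, offsets, tol=1e-6):
    """True if all sight lines n_i . x = d_i pass through one common point.

    Per the text: a common intersection means the corresponding points are
    images of one physical contour point P (non-convex cross section); for a
    convex cross section the grazing lines never all meet at a single point.
    """
    n = np.asarray(normals, float)
    d = np.asarray(offsets, float)
    p, *_ = np.linalg.lstsq(n, d, rcond=None)   # best candidate common point
    return bool(np.all(np.abs(n @ p - d) < tol))
```

A tolerance is needed in practice because calibration and pixel quantization keep real lines of sight from meeting exactly even in the non-convex case.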

Further, in the case of this embodiment, depending on whether the intersection of the two lines of sight OLPL and ORPR from the television cameras 2L and 2R on either side lies on the left or the right side (up or down on the drawing sheet) of the line of sight VC of the central television camera 2C, it can also be judged, before the final calculation processing, on which side, left or right, the target object 10 lies with respect to the group of lines of sight VL, VC, and VR.
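A sketch of this side judgment (a hypothetical helper; lines given in normal form n · x = d, and the interpretation of the resulting sign is as argued in the text):

```python
import numpy as np

def side_of_center(line_l, line_r, line_c):
    """Sign telling on which side of the central line of sight the
    intersection q of the two outer lines of sight lies.

    Each line is a pair (n, d) with n . x = d.  Returns +1.0 or -1.0 for the
    two half-planes of the central line, 0.0 if q lies exactly on it.
    """
    (nl, dl), (nr, dr) = line_l, line_r
    nc, dc = line_c
    # Intersection of the two outer lines of sight.
    q = np.linalg.solve(np.array([nl, nr], float), np.array([dl, dr], float))
    return float(np.sign(nc @ q - dc))
```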

As already stated, the present invention is effective when the kind of cross-sectional shape is known or can be anticipated in advance. Here, with reference to FIG. 3, one technique is described for quantitatively determining the cross-sectional shape in the case where, as above, the cross section of the target object 10 through the epipolar plane 21 has been judged convex and it is assumed that the kind of cross-sectional shape can be approximated by an ellipse. In general, the cross-sectional shape of an object with a convex cross section can, as assumed here, most often be approximated by an ellipse.

If the cross section of the target object 10 is an ellipse, the lines of sight VL, VC, and VR from the television cameras 2L, 2C, and 2R are all tangents to that ellipse.

In such a case, if at least five mutually different lines of sight are obtained, then, as one simple and representative technique, applying the method of least squares makes it possible to calculate the quantitative parameters of the ellipse enveloped by these lines of sight, i.e. tangents: the lengths of its major and minor axes, its inclination, and its center position. This of course amounts to having restored the elliptical shape in question.

In this embodiment, three television cameras are used as described above, so if each of the television cameras 2L, 2C, and 2R sees the boundary between the target object 10 and the background on both the left and right sides of the target object 10, six lines of sight are obtained, and the above condition can be satisfied.

Stated generally, if n (n ≥ 3) imaging devices such as television cameras are used, and each imaging device sees contour points on both the left and right sides of the target object 10, at most 2n lines of sight are obtained. As a rule, the larger the number, the more input information is available for restoring the elliptical cross-sectional shape; the possibility of improving the resolution, or "quality", of the restored shape therefore increases, and restoration of convex cross-sectional shapes more complicated than an ellipse also becomes possible.

Conversely, however, as the number of imaging devices used increases, the apparatus naturally grows larger, the amount of calculation increases, and the processing time lengthens; the number of imaging devices actually used may therefore be decided by trading these off.

Considering also the fact that the cross-sectional shape of most target objects 10 with convex cross sections can be approximated by an ellipse, a system of three (n = 3) imaging devices, as described in this embodiment, is a quite reasonable configuration example with a good balance of performance and cost; for a target object 10 judged to have a convex cross section, it can restore the cross-sectional shape of the target object 10 in the epipolar plane 21 with accuracy sufficient for practical use.

Under these circumstances, the cross-sectional shape restoration in this embodiment by application of the method of least squares is now described.

The least-squares procedure adopted in this embodiment consists, broadly divided, of the first and second steps described below.

First step

In this first step, first of all, the perfect circle for which the sum of the squared distances to the lines of sight is minimum is found.

Writing the equation of each line of sight, shown schematically as VL, VC, and VR in FIGS. 1 to 3, in a suitable two-dimensional coordinate system with a two-dimensional unit normal vector n and a point x as

    n · x = d .....(1)

the distance from a perfect circle, defined by a center coordinate value t (likewise a two-dimensional vector value) and a radius r (always a scalar quantity), to the i-th line of sight is

    n_i · t + r - d_i .....(2)

Therefore, the circle sought is given by the t and r that minimize the evaluation function

    J = Σ_i || n_i · t + r - d_i ||^2
      = Σ_i ( (n_i · t)^2 + 2 (r - d_i)(n_i · t) + (r - d_i)^2 ) .....(3)

(i = 1, ..., N, where N is the number of lines of sight); since J is quadratic in t and r, setting its partial derivatives to zero yields linear equations from which t and r are obtained directly.

Second step

Next, using the two-dimensional vector values x and t, the ellipse is expressed as

    (x - t)^T A (x - t) = 1 (A: a 2x2 positive definite symmetric matrix) .....(4)

The distance between the line of sight expressed by equation (1) above and this ellipse is then

    n_i · t + sqrt( n_i^T A^-1 n_i ) - d_i .....(5)

(A being positive definite symmetric, so, naturally, is A^-1). The ellipse sought is accordingly obtained as the A and t that minimize the evaluation function

    J = Σ_i ( n_i · t + sqrt( n_i^T A^-1 n_i ) - d_i )^2 .....(6)

However, t satisfies ∂J/∂t = 0, so t can be eliminated and J expressed in terms of A alone. Since A^-1 is a positive definite symmetric matrix, it can be expressed with three parameters a1, a2, a3 as

    A^-1 = | a1 a2 |
           | a2 a3 | .....(7)

On the other hand, since the value r has already been found via equation (3) of the preceding first step, it is used to set the initial values of the parameters a1, a2, a3 (a1 = a3 = r^2, a2 = 0, i.e. the perfect circle of radius r). Applying Newton's method to these three parameters a1, a2, a3 then yields the optimal ellipse that minimizes the evaluation function J expressed by equation (6) above.
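The two steps can be sketched numerically as follows. This is an illustrative reconstruction, not the patent's own code: the circle step is the direct linear least-squares solve over (t, r), and for the ellipse step a Gauss-Newton variant of the text's Newton iteration is used, with the center t refined jointly with the parameters a1, a2, a3 of A⁻¹ rather than eliminated analytically, a simplification. Lines of sight are assumed given as unit normals n_i and offsets d_i with n_i · x = d_i.

```python
import numpy as np

def fit_circle(normals, offsets):
    """First step: circle (center t, radius r) minimizing
    J = sum_i (n_i . t + r - d_i)^2, which is linear in (t_x, t_y, r)."""
    n = np.asarray(normals, float)
    d = np.asarray(offsets, float)
    M = np.hstack([n, np.ones((len(d), 1))])       # columns: t_x, t_y, r
    tx, ty, r = np.linalg.lstsq(M, d, rcond=None)[0]
    return np.array([tx, ty]), r

def fit_ellipse(normals, offsets, iters=30):
    """Second step: ellipse (x - t)^T A (x - t) = 1, with B = A^{-1} =
    [[a1, a2], [a2, a3]].  Tangency residual: n_i . t + sqrt(n_i^T B n_i) - d_i.
    Gauss-Newton over (t_x, t_y, a1, a2, a3), started from the circle (B = r^2 I)."""
    n = np.asarray(normals, float)
    d = np.asarray(offsets, float)
    t, r = fit_circle(n, d)
    a = np.array([r * r, 0.0, r * r])              # a1, a2, a3
    for _ in range(iters):
        B = np.array([[a[0], a[1]], [a[1], a[2]]])
        s = np.sqrt(np.einsum('ij,jk,ik->i', n, B, n))   # sqrt(n_i^T B n_i)
        res = n @ t + s - d
        # Jacobian of the residuals w.r.t. (t_x, t_y, a1, a2, a3).
        Jac = np.column_stack([n[:, 0], n[:, 1],
                               n[:, 0] ** 2 / (2 * s),
                               n[:, 0] * n[:, 1] / s,
                               n[:, 1] ** 2 / (2 * s)])
        step = np.linalg.lstsq(Jac, -res, rcond=None)[0]
        t, a = t + step[:2], a + step[2:]
    return t, np.linalg.inv([[a[0], a[1]], [a[1], a[2]]])
```

With six or more tangent lines, as obtained when three cameras each see both silhouette edges, the five ellipse parameters are overdetermined, matching the condition stated above.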

FIG. 4 illustrates the above process graphically. FIG. 4(A) shows the six lines of sight obtained from the three imaging devices, i.e. television cameras 2L, 2C, and 2R, and extracted onto the coordinate system, and FIG. 4(B) shows the state in which the first step above has found the perfect circle best touching all of these tangents.

FIG. 4(C) onward relates to the second step above; FIGS. 4(C) to 4(E) show the results of the first through third parameter improvements.

That is, FIGS. 4(C) to 4(E) show the process of gradually deforming the perfect circle of FIG. 4(B) into the optimal ellipse, and FIG. 4(F) shows the state in which, as the result of the sixth parameter improvement, an approximate elliptical shape judged adequate, indeed optimal, has been restored.

The technique described above is, of course, a cross-sectional shape measurement for only a single epipolar plane 21.

When the projection centers OL, OC, and OR of the television cameras 2L, 2C, and 2R not only all lie on the same epipolar plane 21 but also lie on a specific base line 22, as in the illustrated embodiment, however, the epipolar plane 21 can be swept in the direction of rotation about this base line 22, as also shown by the arrow f in FIG. 1, and the first and second steps above can be repeated for the epipolar plane 21 at each of a plurality of selected rotational angular positions. A plurality of cross-sectional shapes of the target object 10, taken along a plurality of elevation or depression angles, can thereby be obtained, and in the end, taken together, the surface shape of the target object 10 can be determined comprehensively as a three-dimensional shape over the wide range of the entire field of view of the television cameras 2L, 2C, and 2R.

However, as illustrated in FIG. 5, at a contour point PU where the direction of the normal to the surface of the target object 10 changes discontinuously, the lines of sight VL, VC, and VR obtained from the images are not tangents to the cross-sectional shape at that contour point PU, so the surface shape of a target object 10 containing such a point PU cannot be learned by the method of the present invention.

The present invention is intended, from a strictly practical standpoint, to offer a simple yet reliable technique for measuring the shape of a target object 10 in the case where its cross-sectional shape is convex and can moreover be approximated by a known shape such as an ellipse.

To go further: as seen in the above embodiment, when three or more imaging devices are used, a sequence can be adopted in which, for each target object 10 placed one at a time in the monitored region of the imaging devices, it is first quickly judged whether the object has a convex cross-sectional shape restorable by the present invention, and shape measurement then proceeds for those objects for which the judgment is affirmative.

Thus, from among a group of objects flowing on a belt conveyor in a factory or the like, for example, those with convex cross-sectional shapes and those without, say curved bodies and polyhedra, can first be quickly distinguished; polyhedra are excluded from the calculation and simply passed through, preventing wasted time, while among the curved bodies the invention can conveniently be used for tasks such as sorting out objects belonging to the same kind of shape.

However, as touched on earlier, when it is known that every target object 10 handled will, without fail, present a convex cross section when cut by the epipolar plane 21, there is naturally no need to judge whether the cross section of the target object 10 currently seen by the imaging devices in the epipolar plane is convex. In the case of a simple measurement task, for example when the cross section of the target object 10 is a perfect circle and only its radius need be determined, two imaging devices therefore suffice for processing according to the present invention.

That is, if the at most four mutually different lines of sight obtained from the two imaging devices are regarded, in accordance with the present invention, as tangents to the contour of the perfectly circular cross-sectional shape, the desired radius can be extracted along the lines of the first step described above.

For this reason, the summary of the present invention states merely that a plurality of imaging devices are used; conversely, of course, if the number of imaging devices is increased beyond the three of the illustrated embodiment, the resolution with which a given cross-sectional shape is restored can be improved accordingly, and convex cross-sectional shapes more complicated than an ellipse can also be restored.

In the embodiments of the present invention described above, television cameras have been assumed as the imaging devices 2L, 2C, 2R, but the invention is not limited to this; when, for example, only one particular cross-section of the target object 10 needs to be measured, a one-dimensional line sensor may also be used as the imaging device.

As for the convex cross-sectional shape to which the lines of sight obtained from the imaging devices are fitted as tangents or an envelope, the above embodiments have assumed the most general case of an ellipse; when the type of shape is known, however, the invention is not limited to this, and the fitting method itself may also employ an evaluation function other than the least-squares criterion described above, with the processing adapted accordingly.
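One standard way to make such an envelope fit concrete for the ellipse case is through the dual conic (again only an illustrative sketch; the patent fixes no particular formulation, and the function and test data below are assumptions). A line l = (a, b, c) is tangent to a conic exactly when l^T D l = 0 for the symmetric dual conic D, a condition linear in D's six entries, so a least-squares estimate falls out of a singular value decomposition:

```python
import numpy as np

def fit_ellipse_from_tangents(lines):
    """Fit a conic to tangent lines (a, b, c), i.e. a*x + b*y + c = 0.
    A line l is tangent to the conic iff l^T D l = 0, where D is the
    symmetric dual conic; that condition is linear in D's six entries."""
    a, b, c = np.asarray(lines, dtype=float).T
    M = np.column_stack([a*a, 2*a*b, b*b, 2*a*c, 2*b*c, c*c])
    _, _, Vt = np.linalg.svd(M)               # null vector of M
    d11, d12, d22, d13, d23, d33 = Vt[-1]
    D = np.array([[d11, d12, d13],
                  [d12, d22, d23],
                  [d13, d23, d33]])
    C = np.linalg.inv(D)                      # point conic, up to scale
    return C / -C[2, 2]                       # normalize so C[2, 2] = -1

# Synthetic tangents of the axis-aligned ellipse x^2/16 + y^2/4 = 1:
# the tangent at parameter t is (cos t / 4) x + (sin t / 2) y - 1 = 0.
t = 0.3 + np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
lines = np.column_stack([np.cos(t) / 4.0, np.sin(t) / 2.0, -np.ones_like(t)])
C = fit_ellipse_from_tangents(lines)
semi_a, semi_b = 1.0 / np.sqrt(C[0, 0]), 1.0 / np.sqrt(C[1, 1])
print(round(semi_a, 6), round(semi_b, 6))  # 4.0 2.0
```

The semi-axis extraction at the end assumes an axis-aligned, centered ellipse; in the general case the fitted point conic C would be decomposed into center, orientation and axes in the usual way.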

[Effects]

According to the present invention, although only for target objects whose cross-sectional shape is convex, the shape of an object without texture can be measured (restored) simply and reliably, something that was difficult with conventional techniques, and without requiring any active method such as projecting light; the invention is therefore of very great practical value.

[Brief Description of the Drawings]

Fig. 1 is a schematic configuration diagram of a preferred embodiment constructed according to the present invention; Fig. 2 is an explanatory diagram concerning the judgment of whether the cross-sectional shape of the target object, cut along a specific epipolar plane, is convex; Fig. 3 is an explanatory diagram of an ellipse having a group of lines of sight approximately as its envelope; Fig. 4 is an explanatory diagram of an example of measurement processing steps that can be adopted in the present invention; and Fig. 5 is an explanatory diagram of a cross-sectional shape that cannot be restored by the present invention.

In the figures, 10 is the target object, 21 the epipolar plane, 22 the base line, 2L, 2C and 2R the imaging devices, OL, OC and OR the projection centers of the respective imaging devices, and VL, VC and VR the lines of sight from the respective imaging devices, i.e., tangents to the cross-sectional shape of the target object.

Designated agent: Sugiura, Electrotechnical Laboratory, Agency of Industrial Science and Technology.

Claims (10)

[Claims]

(1) A method of measuring the shape of a target object whose cross-sectional shape is convex when a cross-section is taken on a specific plane, comprising: placing the projection centers of a plurality of imaging devices on the specific plane; obtaining, for each imaging device, the line of sight along the specific plane toward the boundary between the target object and the background; and then performing calculation processing so that the obtained group of lines of sight becomes the envelope of a convex cross-sectional shape of a predetermined type.

(2) The method according to claim 1, wherein the projection centers of the imaging devices are aligned so as to lie on a specific straight line within the specific plane, and the calculation processing is performed at each rotational angular position while the specific plane is rotationally scanned about the specific straight line.

(3) The method according to claim 1 or 2, further comprising judging whether the lines of sight of the plurality of imaging devices on the specific plane intersect at a single point on the surface of the target object.

(4) The method according to any one of claims 1 to 3, wherein three or more imaging devices are used.

(5) The method according to claim 4, further comprising judging, on the basis of the relative positional relationship of the lines of sight of the three or more imaging devices, on which side of the group of lines of sight the target object lies.

(6) An apparatus for measuring the shape of a target object whose cross-sectional shape is convex when a cross-section is taken on a specific plane, comprising: a plurality of imaging devices whose projection centers lie on the specific plane; calculation means for obtaining, for each imaging device, the line of sight along the specific plane toward the boundary between the target object and the background; and means for performing calculation processing so that the obtained group of lines of sight becomes the envelope of a convex cross-sectional shape of a predetermined type.

(7) The apparatus according to claim 6, wherein the projection center of each imaging device further lies on a specific straight line within the specific plane, the apparatus further comprising means for rotationally scanning the specific plane about the specific straight line.

(8) The apparatus according to claim 6 or 7, further comprising means for judging whether the lines of sight of the plurality of imaging devices on the specific plane intersect at a single point on the surface of the target object.

(9) The apparatus according to any one of claims 6 to 8, wherein three or more imaging devices are used.

(10) The apparatus according to claim 9, further comprising means for judging, on the basis of the relative positional relationship of the lines of sight of the three or more imaging devices, on which side of the group of lines of sight the target object lies.
JP1058596A 1989-03-10 1989-03-10 Object shape measuring method and apparatus Expired - Lifetime JPH076782B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1058596A JPH076782B2 (en) 1989-03-10 1989-03-10 Object shape measuring method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1058596A JPH076782B2 (en) 1989-03-10 1989-03-10 Object shape measuring method and apparatus

Publications (2)

Publication Number Publication Date
JPH02236407A true JPH02236407A (en) 1990-09-19
JPH076782B2 JPH076782B2 (en) 1995-01-30

Family

ID=13088880

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1058596A Expired - Lifetime JPH076782B2 (en) 1989-03-10 1989-03-10 Object shape measuring method and apparatus

Country Status (1)

Country Link
JP (1) JPH076782B2 (en)


Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05133727A (en) * 1991-09-17 1993-05-28 Opton Co Ltd Three-dimensional shape measuring device for long stretching material
JP2009093447A (en) * 2007-10-10 2009-04-30 Toyota Motor Corp Design support apparatus, design support method, and design support program
US9697643B2 (en) 2012-01-17 2017-07-04 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
JP2015510169A (en) * 2012-01-17 2015-04-02 リープ モーション, インコーポレーテッドLeap Motion, Inc. Feature improvement by contrast improvement and optical imaging for object detection
JP2016186793A (en) * 2012-01-17 2016-10-27 リープ モーション, インコーポレーテッドLeap Motion, Inc. Feature improvement by contrast improvement and optical imaging for object detection
US9626591B2 (en) 2012-01-17 2017-04-18 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9652668B2 (en) 2012-01-17 2017-05-16 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US11994377B2 (en) 2012-01-17 2024-05-28 Ultrahaptics IP Two Limited Systems and methods of locating a control object appendage in three dimensional (3D) space
US9741136B2 (en) 2012-01-17 2017-08-22 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10699155B2 (en) 2012-01-17 2020-06-30 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9778752B2 (en) 2012-01-17 2017-10-03 Leap Motion, Inc. Systems and methods for machine control
US9934580B2 (en) 2012-01-17 2018-04-03 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US10366308B2 (en) 2012-01-17 2019-07-30 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US10410411B2 (en) 2012-01-17 2019-09-10 Leap Motion, Inc. Systems and methods of object shape and position determination in three-dimensional (3D) space
US10565784B2 (en) 2012-01-17 2020-02-18 Ultrahaptics IP Two Limited Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space
US12086327B2 (en) 2012-01-17 2024-09-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US11782516B2 (en) 2012-01-17 2023-10-10 Ultrahaptics IP Two Limited Differentiating a detected object from a background using a gaussian brightness falloff pattern
US9767345B2 (en) 2012-01-17 2017-09-19 Leap Motion, Inc. Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US12260023B2 (en) 2012-01-17 2025-03-25 Ultrahaptics IP Two Limited Systems and methods for machine control
US11720180B2 (en) 2012-01-17 2023-08-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US11308711B2 (en) 2012-01-17 2022-04-19 Ultrahaptics IP Two Limited Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US11353962B2 (en) 2013-01-15 2022-06-07 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11740705B2 (en) 2013-01-15 2023-08-29 Ultrahaptics IP Two Limited Method and system for controlling a machine according to a characteristic of a control object
US12204695B2 (en) 2013-01-15 2025-01-21 Ultrahaptics IP Two Limited Dynamic, free-space user interactions for machine control
US12405673B2 (en) 2013-01-15 2025-09-02 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US11874970B2 (en) 2013-01-15 2024-01-16 Ultrahaptics IP Two Limited Free-space user interface and control using virtual constructs
US10585193B2 (en) 2013-03-15 2020-03-10 Ultrahaptics IP Two Limited Determining positional information of an object in space
US11693115B2 (en) 2013-03-15 2023-07-04 Ultrahaptics IP Two Limited Determining positional information of an object in space
US12306301B2 (en) 2013-03-15 2025-05-20 Ultrahaptics IP Two Limited Determining positional information of an object in space
US12333081B2 (en) 2013-04-26 2025-06-17 Ultrahaptics IP Two Limited Interacting with a machine using gestures in first and second user-specific virtual planes
US11099653B2 (en) 2013-04-26 2021-08-24 Ultrahaptics IP Two Limited Machine responsiveness to dynamic user movements and gestures
US11461966B1 (en) 2013-08-29 2022-10-04 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US12086935B2 (en) 2013-08-29 2024-09-10 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US11776208B2 (en) 2013-08-29 2023-10-03 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12236528B2 (en) 2013-08-29 2025-02-25 Ultrahaptics IP Two Limited Determining spans and span lengths of a control object in a free space gesture control environment
US11282273B2 (en) 2013-08-29 2022-03-22 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12242312B2 (en) 2013-10-03 2025-03-04 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11775033B2 (en) 2013-10-03 2023-10-03 Ultrahaptics IP Two Limited Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US11868687B2 (en) 2013-10-31 2024-01-09 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12265761B2 (en) 2013-10-31 2025-04-01 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US12314478B2 (en) 2014-05-14 2025-05-27 Ultrahaptics IP Two Limited Systems and methods of tracking moving hands and recognizing gestural interactions
US12154238B2 (en) 2014-05-20 2024-11-26 Ultrahaptics IP Two Limited Wearable augmented reality devices with object detection and tracking
US11778159B2 (en) 2014-08-08 2023-10-03 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US12095969B2 (en) 2014-08-08 2024-09-17 Ultrahaptics IP Two Limited Augmented reality with motion sensing
US12299207B2 (en) 2015-01-16 2025-05-13 Ultrahaptics IP Two Limited Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments

Also Published As

Publication number Publication date
JPH076782B2 (en) 1995-01-30

Similar Documents

Publication Publication Date Title
JPH02236407A (en) Method and device for measuring shape of object
US6608622B1 (en) Multi-viewpoint image processing method and apparatus
JP6441346B2 (en) Optical tracking
JP2018197685A (en) 3D measuring device
JPH055041B2 (en)
CN106780610B (en) Position positioning method and device
JPH06137840A (en) Vision sensor automatic calibration device
JP2018522240A (en) Method for measuring artifacts
WO1988002518A2 (en) Real time generation of stereo depth maps
JP3696336B2 (en) How to calibrate the camera
JP7242431B2 (en) Three-dimensional measuring device, three-dimensional measuring method and three-dimensional measuring program
JPH05135155A (en) 3D model construction device using continuous slice images
CN209962289U (en) A screw hole calibration device using three-dimensional reconstruction
JPH07287764A (en) Stereoscopic method and solid recognition device using the method
Nozick Camera array image rectification and calibration for stereoscopic and autostereoscopic displays
JPH04370704A (en) Method for detecting object proper information about position, angle, etc.
CN116105662A (en) Calibration method of multi-contour sensor
CN114494404A (en) Object volume measurement method, system, device and medium
Tsao et al. A Lie group approach to a neural system for three-dimensional interpretation of visual motion
CA2230197C (en) Strand orientation sensing
JP2001059715A (en) Shape restoration method, image processing method, and shape restoration system using spatiotemporal image of real object
JPH06221832A (en) Method and device for calibrating focusing distance of camera
Benosman et al. Real time omni-directional stereovision and planes detection
CN116147635B (en) Processing method applied to multi-contour sensor
JPH0727514A (en) Calibration method of image measuring device

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term