
TWI635255B - Method and system for tracking object - Google Patents


Info

Publication number
TWI635255B
TWI635255B
Authority
TW
Taiwan
Prior art keywords
image
light
object tracking
area
item
Prior art date
Application number
TW106134271A
Other languages
Chinese (zh)
Other versions
TW201915442A (en)
Inventor
柯傑斌
Original Assignee
宏碁股份有限公司 (Acer Incorporated)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宏碁股份有限公司 (Acer Incorporated)
Priority to TW106134271A priority Critical patent/TWI635255B/en
Priority to US15/849,639 priority patent/US20190102890A1/en
Application granted granted Critical
Publication of TWI635255B publication Critical patent/TWI635255B/en
Publication of TW201915442A publication Critical patent/TW201915442A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10152 Varying illumination
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An object tracking method and system are provided. Light beams are emitted by the respective light emitters of a plurality of assembly pads. A camera device continuously captures a plurality of images toward the assembly pads, where each image includes a plurality of light regions formed by the light beams. A first image and a second image, which are two adjacent images, are analyzed to calculate the movement change of the light regions. The motion state of the camera device is then determined based on the movement change.

Description

Object tracking method and system

The present invention relates to an object tracking method and system, and more particularly to an object tracking method and system that incorporates light-emitting assembly pads.

With the advancement of technology, virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies have matured, and the public has become increasingly familiar with AR/VR/MR; users' demand for input in both physical and virtual spaces keeps growing. Accordingly, input devices such as helmets and hand-held wands are proliferating, and spatial positioning technology for immersive experiences has become ever more important.

Current VR technology uses a physical frame or a virtual optical frame to define the user's movement, which also restricts where the user can be. For example, a laser may be used to encode the space so that objects under test, such as helmets and wands, can be tracked; alternatively, a Quick Response (QR) code may be scanned and recognized to obtain information so that virtual information can be presented in real space. Accordingly, how to provide a simple interaction technology that is applicable to various venues is currently an important issue.

The present invention provides an object tracking method and system that perform spatial positioning with light-emitting assembly pads, so that object tracking can be realized in a variety of venues.

The object tracking method of the present invention includes the following steps. Light beams are emitted by the respective light emitters of a plurality of assembly pads, where the assembly pads are pieced together into a light emission area. A camera device continuously captures a plurality of images toward the assembly pads within the light emission area, where each image includes a plurality of light regions formed by the light beams. A first image and a second image are analyzed to calculate the movement change of the light regions, where the first image and the second image are two adjacent images among the captured images. The motion state of the camera device is then determined based on the movement change.

In an embodiment of the present invention, the step of analyzing the first image and the second image to calculate the movement change of the light regions includes: locking onto a designated light region in each of the first image and the second image; and calculating a displacement direction and a displacement amount on the horizontal plane based on the coordinate position of the designated light region in the first image and the coordinate position of the designated light region in the second image.
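The coordinate-based computation described in this embodiment can be sketched as follows. This is a hypothetical Python helper, not the patented implementation; the function name and the return convention (angle plus magnitude in pixels) are assumptions.

```python
import math

def horizontal_displacement(pos_first, pos_second):
    """pos_first/pos_second: (x, y) pixel centers of the designated
    light region in the first and second image.
    Returns (direction in radians, displacement amount in pixels)."""
    dx = pos_second[0] - pos_first[0]
    dy = pos_second[1] - pos_first[1]
    magnitude_px = math.hypot(dx, dy)   # displacement amount in pixels
    direction_rad = math.atan2(dy, dx)  # displacement direction
    return direction_rad, magnitude_px
```

The pixel magnitude returned here is what would be substituted into formula (1) to obtain a real-space distance.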

In an embodiment of the present invention, the step of analyzing the first image and the second image to calculate the movement change of the light regions includes: locking onto an imaging position in each of the first image and the second image; recording a first light region located at the imaging position in the first image; recording a second light region located at the imaging position in the second image; obtaining a displacement direction on the horizontal plane based on the positional relationship between the first light region and the second light region; and obtaining a displacement amount on the horizontal plane based on the number of light regions included between the first light region and the second light region.

In an embodiment of the present invention, the step of analyzing the first image and the second image to calculate the movement change of the light regions includes: locking onto a designated light region in each of the first image and the second image; in the first image, recording a first interval when the distances between the designated light region and each of its adjacent light regions are all equal; in the second image, recording a second interval when the distances between the designated light region and each of its adjacent light regions are all equal; and, when the first interval differs from the second interval, obtaining a displacement direction and a displacement amount on the vertical axis based on the change relationship between the first interval and the second interval.

In an embodiment of the present invention, when the first interval is the same as the second interval, the movement change is determined to be a horizontal movement.

In an embodiment of the present invention, the step of analyzing the first image and the second image to calculate the movement change of the light regions includes: determining whether the slope of the straight lines formed by the outermost light regions changes between the first image and the second image. The step of determining the motion state of the camera device based on the movement change then includes: calculating a rotation angle of the camera device based on the change in the slope.
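A minimal sketch of deriving a rotation angle from the slope change; taking the difference of the arctangents of the two slopes is an assumption about how this step could be computed, not the patent's stated method.

```python
import math

def rotation_angle(slope_first, slope_second):
    """Slopes of the line through the outermost light regions in the
    first and second image. Returns the implied rotation in degrees."""
    return math.degrees(math.atan(slope_second) - math.atan(slope_first))
```

For example, a line that was horizontal in the first image (slope 0) and has slope 1 in the second image implies a 45-degree rotation of the camera device.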

In an embodiment of the present invention, the object tracking method further includes: obtaining an identification code of each assembly pad; driving the assembly pads to emit light beams one by one; obtaining the real-space position at which each assembly pad is disposed based on the received light signals and the captured images; and mapping each identification code to its corresponding real-space position to obtain a correspondence map.
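The final mapping step can be sketched as follows, assuming the pads have already been driven to flash one by one and a real-space position was observed for each flash; the helper name and data shapes are hypothetical.

```python
def build_pad_map(pad_ids, observed_positions):
    """pad_ids: identification codes in flash order.
    observed_positions: (x, y) real-space position observed per flash.
    Returns the identification-code-to-position correspondence map."""
    if len(pad_ids) != len(observed_positions):
        raise ValueError("one observed position is expected per pad")
    return dict(zip(pad_ids, observed_positions))
```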

In an embodiment of the present invention, the object tracking method further includes: capturing a calibration image with the camera device toward the assembly pads within the light emission area and displaying the calibration image on a screen; displaying an ideal light area over the calibration image on the screen; and performing a calibration procedure using the ideal light area and the light regions in the calibration image.
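One possible sketch of the calibration check, assuming it reduces to comparing the detected light-region centers in the calibration image against the ideal light area within a pixel tolerance; the helper name and tolerance are assumptions.

```python
def calibrated(detected_centers, ideal_centers, tol=5.0):
    """Both arguments: index-aligned lists of (x, y) pixel positions.
    Returns True when every detected center is within tol pixels of
    its ideal position on both axes."""
    return all(abs(dx - ix) <= tol and abs(dy - iy) <= tol
               for (dx, dy), (ix, iy) in zip(detected_centers, ideal_centers))
```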

In an embodiment of the present invention, each assembly pad has male-female mechanical joints on its edges, and the assembly pads are assembled through the male-female joints.

In an embodiment of the present invention, the camera device is mounted on an object, and the object is one of a helmet, a wand, a remote controller, a glove, a shoe cover, and a piece of clothing.

In an embodiment of the present invention, each assembly pad further includes a force sensor.

The object tracking system of the present invention includes: a plurality of assembly pads pieced together into a light emission area, where each assembly pad includes a light emitter for emitting a light beam; and a camera device. The camera device includes an image capturer, which continuously captures a plurality of images toward the assembly pads within the light emission area, where each image includes a plurality of light regions formed by the light beams, and an image analysis unit, which is coupled to the image capturer and receives the images for analysis. The image analysis unit analyzes a first image and a second image to calculate the movement change of the light regions, where the first image and the second image are two adjacent images among the captured images, and determines the motion state of the camera device based on the movement change.

Based on the above, the present invention combines a plurality of light-emitting assembly pads and uses them to define an activity area within which a specific object is tracked. Accordingly, the number of assembly pads can be increased or decreased as the situation requires, which makes expansion flexible and free of venue restrictions; the pads are easy to assemble and also easy to disassemble.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100, 900‧‧‧object tracking system
110‧‧‧camera device
210‧‧‧power supply unit
220‧‧‧image analysis unit
230‧‧‧image capturer
240‧‧‧light emitter
250‧‧‧microcontroller
300‧‧‧image
510, 610, 710‧‧‧first image
520, 620, 720‧‧‧second image
910‧‧‧electronic device
a~d‧‧‧lengths
A, A11~A14, A21~A24, A31~A34‧‧‧assembly pads
CD‧‧‧capture distance
CL‧‧‧capture focal length
F‧‧‧force sensor
IO‧‧‧electrical joint
IR‧‧‧infrared light source
M, N, P‧‧‧designated light regions
MD‧‧‧actual distance
R‧‧‧screen
t1, t2‧‧‧straight lines
U‧‧‧tilted-up (looking up) state
V‧‧‧level-view state
VH‧‧‧actual height
VP‧‧‧pitch pixels
WP‧‧‧total pixels
Z‧‧‧ideal light area
S305~S320‧‧‧steps of the object tracking method

FIG. 1 is a schematic diagram of an object tracking system according to an embodiment of the present invention.

FIG. 2 is a block diagram of an object tracking system according to an embodiment of the present invention.

FIG. 3 is a flowchart of an object tracking method according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of triangulation according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of the light regions of images under horizontal movement according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of the light regions of images under vertical movement according to an embodiment of the present invention.

FIG. 7 is a schematic diagram of the light regions of images under rotation according to an embodiment of the present invention.

FIGS. 8A and 8B are schematic diagrams of a calibration screen according to an embodiment of the present invention.

FIG. 9 is a block diagram of an object tracking system according to another embodiment of the present invention.

FIGS. 10A-10C are schematic diagrams of the composition of assembly pads according to an embodiment of the present invention.

FIGS. 11A-11C are schematic diagrams of the composition of assembly pads according to another embodiment of the present invention.

The foregoing and other technical content, features, and effects of the present invention will become clear in the following detailed description of embodiments given with reference to the drawings. Directional terms mentioned in the following embodiments, such as "up", "down", "front", "rear", "left", and "right", refer only to the directions in the attached drawings. The directional terms are therefore used for illustration and are not intended to limit the present invention. In the following embodiments, the same or similar elements are given the same or similar reference numerals.

FIG. 1 is a schematic diagram of an object tracking system according to an embodiment of the present invention. FIG. 2 is a block diagram of an object tracking system according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 2, the object tracking system 100 includes a camera device 110 and assembly pads A11~A14, A21~A24, and A31~A34 (collectively referred to below as assembly pads A). In this embodiment, 4×3 assembly pads A are used as an example for description; however, other embodiments are not limited thereto.

Each assembly pad A is provided with a light emitter 240 and a microcontroller 250. The microcontroller 250 is coupled to the light emitter 240 and controls it to emit a light beam of a specific frequency, for example infrared light; the wavelength of the infrared light may be designed as 850 nm or 940 nm. The light emitter 240 is, for example, an infrared light emitter for emitting infrared light. The microcontroller (micro control unit, MCU) 250 is an integrated circuit chip that can be regarded as a miniature computer. In this embodiment, each assembly pad A is square, the light emitter 240 is disposed at the center of each assembly pad A, and every assembly pad A has the same size. The assembly pads A are pieced together into a light emission area, the camera device 110 serves as the positioning equipment, and the spatial positioning is set up with the camera device 110. However, in other embodiments, triangular, rectangular, hexagonal, or other polygonal assembly pads may also be used; no limitation is imposed here.

The camera device 110 may be mounted on various objects, for example a helmet, a wand, a remote controller, a glove, a shoe cover, or clothing. The camera device 110 includes a power supply unit 210, an image analysis unit 220, and an image capturer 230. The power supply unit 210 is coupled to the image analysis unit 220 and the image capturer 230 to supply power. The image analysis unit 220 is coupled to the image capturer 230.

Here, the power supply unit 210 is, for example, a battery. The image capturer 230 is, for example, a video camera or still camera using a charge-coupled device (CCD) lens or a complementary metal-oxide-semiconductor (CMOS) lens, for capturing images. The image capturer 230 may also be a three-dimensional lens for stereo sensing, such as a dual camera, a structured-light (light-coding) lens, a time-of-flight (TOF) lens, or a high-speed lens (>60 Hz, e.g., 120 Hz, 240 Hz, or 960 Hz). The image analysis unit 220 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a programmable microprocessor, an embedded control chip, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other similar device.

The camera device 110 continuously captures a plurality of images toward the assembly pads A within the light emission area. That is, as the camera device 110 translates along and rotates about the three coordinate axes of three-dimensional space (six degrees of freedom, e.g., moving up and down, moving horizontally, moving vertically, and rotating about the three axes), the camera device 110 receives images with different light-region layouts; the image analysis unit 220 then analyzes these images to track the motion state of the camera device 110.

FIG. 3 is a flowchart of an object tracking method according to an embodiment of the present invention. Referring to FIG. 3, in step S305, a light beam is emitted by the light emitter 240 of each assembly pad A. For example, the camera device 110 may issue a control signal to the assembly pads A so that each microcontroller 250 drives its light emitter 240 to emit a light beam; alternatively, the assembly pads A may be connected by wire or wirelessly to an external electronic device (an apparatus with computing capability), and the electronic device issues the control signal to the assembly pads A so that each microcontroller 250 drives its light emitter 240 to emit a light beam.

Next, in step S310, the camera device 110 continuously captures a plurality of images toward the assembly pads A within the light emission area, where each image includes a plurality of light regions formed by the light beams. The camera device 110 receives the light beams emitted by the light emitters 240 of the assembly pads A, so that a plurality of light regions are formed in the captured images. Here, a light region may consist of one light point or of a plurality of light points; a plurality of light points can be regarded as a set of light points whose members are adjacent to one another.
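One plausible way to extract such light regions from a captured frame is thresholding followed by connected-component grouping; the sketch below is an assumption about the processing, not a transcription of the embodiment.

```python
def light_regions(frame, threshold=200):
    """frame: 2-D list of grayscale values. Returns a list of pixel
    sets, each set being one connected light region."""
    h, w = len(frame), len(frame[0])
    seen, regions = set(), []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and (y, x) not in seen:
                stack, region = [(y, x)], set()
                while stack:  # flood-fill one connected component
                    cy, cx = stack.pop()
                    if (cy, cx) in seen:
                        continue
                    seen.add((cy, cx))
                    if 0 <= cy < h and 0 <= cx < w and frame[cy][cx] >= threshold:
                        region.add((cy, cx))
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
                regions.append(region)
    return regions
```

In practice a library blob detector would serve the same purpose; this pure-Python version only illustrates the grouping of adjacent bright points into regions.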

After that, in step S315, the image analysis unit 220 analyzes a first image and a second image to calculate the movement change of the light regions. Here, for convenience, the two adjacent images are called the first image and the second image. Finally, in step S320, the motion state of the camera device 110 is determined based on the movement change. The light regions appear differently in the images received by the camera device 110; by analyzing a first image and a second image with different light-region layouts, the motion state of the camera device 110 in the six degrees of freedom is determined.

By mounting the camera device 110 on various targets and performing the above steps S305~S320, the motion state of the camera device 110 can represent the motion state of the target.

Examples of analyzing the movement change of the light regions are given below.

FIG. 4 is a schematic diagram of triangulation according to an embodiment of the present invention. Referring to FIG. 4, VH denotes the actual height between the camera device 110 and the assembly pads A, CL denotes the capture focal length of the camera device 110, MD denotes the actual distance between the center points of two adjacent assembly pads A in a specified direction, and CD is the capture distance obtained inside the image capturer 230.

The image 300 is an image received by the camera device 110. Here, the image 300 includes nine light regions. WP is the total number of pixels of the image 300 along the vertical axis, and VP is the number of pixels along the vertical axis between the center points of two adjacent light regions.

The calculation formula is as follows:

MD / VH = (VP × CC) / (WP × CL) … (1)

where CC is a frame conversion constant.

Therefore, MD = VH × (VP × CC) / (WP × CL).

The image analysis unit 220 can use formula (1) to obtain the displacement along the vertical axis of the image. The image analysis unit 220 can likewise use formula (1) to obtain the displacement along the horizontal axis; that is, WP is set to the total number of pixels of the image 300 along the horizontal axis, and VP to the number of pixels along the horizontal axis between the center points of two adjacent light regions.

Here, the actual height VH can be determined from the height the user enters. For example, if the user enters a height of 160 cm, the image analysis unit 220 can determine the actual height VH from the typical distance between the eyes and the top of the head. The capture focal length CL, the total pixel count WP, and the frame conversion constant CC are known constants. Accordingly, the displacement (in pixels) between the first image and the second image is analyzed to obtain the pitch pixels VP, after which formula (1) is used to find the actual distance MD, which represents the distance the camera device 110 has moved in real space.
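Formula (1), with the example constants used later in the text (CC = 7 cm, CL = 12 cm, VH = 150 cm, WP = 2988 pixels) supplied as defaults, can be sketched as:

```python
def actual_distance(vp, wp=2988, cc=7.0, cl=12.0, vh=150.0):
    """Return the real-space distance MD (cm) corresponding to a pixel
    pitch vp, per formula (1): MD = VH * (VP * CC) / (WP * CL)."""
    return vh * (vp * cc) / (wp * cl)
```

With these constants, a 1024-pixel pitch maps to roughly 30 cm and a 2048-pixel pitch to roughly 60 cm, matching the worked example below.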

FIG. 5 is a schematic diagram of the light regions of images under horizontal movement according to an embodiment of the present invention. This embodiment uses the locked-light-region pixel counting method: a designated light region is locked via image detection, and the number of pixels the designated light region has moved is substituted into formula (1) to obtain the moving distance.

Referring to FIG. 5, the designated light region M is locked in the first image 510 and in the second image 520, and the displacement direction and displacement amount on the horizontal plane are then calculated based on the coordinate position of the designated light region M in the first image 510 and its coordinate position in the second image 520. Taking FIG. 5 as an example, the displacement direction is forward, and the displacement amount is, for example, 1024 pixels.

Here, taking a 16-megapixel (5312×2988) image as an example, the total pixel count WP along the vertical axis is the constant 2988. Assume further that the frame conversion constant CC is fixed at 7 cm, the capture focal length CL at 12 cm, and the actual height VH at 150 cm.

Substituting the number of pixels the designated light region has moved, for example 1024 pixels, into formula (1) gives a corresponding actual distance MD of 30 cm. By analogy, if the designated light region moves 2048 pixels, the corresponding actual distance is 60 cm. In other embodiments, the displacement of left-right movement can be detected in the same way.

Alternatively, a point-counting method may be used: an imaging position is locked in each of the first image 510 and the second image 520; the first light region at the imaging position in the first image 510 is recorded; the second light region at the imaging position in the second image 520 is recorded; the displacement direction on the horizontal plane is obtained based on the positional relationship between the first light region and the second light region; and the displacement amount on the horizontal plane is obtained based on the number of light regions included between the first light region and the second light region.

For example, suppose that in the first image 510 the position of the designated light region M is locked as the imaging position. After movement in the front-back direction, another light region adjacent to the designated light region M has moved to the locked imaging position in the second image 520; then, using formula (1) (and assuming the pitch between two light regions is 1024 pixels), it can be determined that the camera has moved 30 cm forward. If the movement passes two light regions and finally lands on a third, the camera has moved 90 cm. Positions falling between two light regions can also be calculated proportionally. By analogy, movement in the left-right direction can also be determined. The ways of locking and memorizing bright regions are not limited to the above description.
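The point-counting bookkeeping can be sketched as follows, using the 30 cm-per-region figure from this example; the helper and its fractional interpolation are assumptions, not the patent's exact procedure.

```python
def displacement_from_regions(regions_passed, cm_per_region=30.0):
    """regions_passed: how many light regions the locked imaging
    position has crossed (may be fractional for positions falling
    between two light regions). Returns the displacement in cm."""
    return regions_passed * cm_per_region
```

Crossing one region thus gives 30 cm, crossing three gives 90 cm, and a position halfway between regions gives 15 cm.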

FIG. 6 is a schematic diagram of the light areas of images under vertical movement according to an embodiment of the present invention. In this embodiment, vertical movement is detected under the condition that the spacing between light areas is constant. A designated light area N is locked in each of the first image 610 and the second image 620. In the first image 610, when the spacings between the designated light area N and its four adjacent light areas are all equal, a first spacing is recorded. In the second image 620, when the spacings between the designated light area N and its four adjacent light areas are all equal, a second spacing is recorded. If the first spacing differs from the second spacing, the displacement direction and displacement amount along the vertical axis are obtained from the change between the first spacing and the second spacing.

If the first spacing equals the second spacing, the movement is determined to be horizontal. That is, by detecting the designated light area N and its four neighbors, the system checks whether the spacings between the designated light area N and those four neighbors are the same; equal spacings indicate that the camera remains level. When the camera device 110 moves up or down, for instance when a user wearing a helmet fitted with the camera device 110 squats and stands up, the spacings in two consecutive images remain mutually equal within each image, but their magnitude changes between the images.

As shown in FIG. 6, assuming the first spacing in the first image 610 is 1024 pixels and the second spacing in the second image 620 is 615 pixels, the calculation of formula (1) shows that the camera device 110 moved vertically from 150 cm to 90 cm in real space.
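The figures above imply that camera height scales linearly with the on-image spacing (1024 px at 150 cm, 615 px at roughly 90 cm). A sketch under that assumed proportionality, with the constant and function name chosen for illustration:

```python
# Sketch: height estimation from light-area spacing. Assumes the linear
# calibration implied by the worked example (1024 px <-> 150 cm); the
# reference constants are illustrative, not taken from formula (1) itself.

REF_SPACING_PX = 1024
REF_HEIGHT_CM = 150.0

def height_from_spacing_cm(spacing_px: int) -> float:
    """Estimate camera height from the spacing between adjacent light areas."""
    return spacing_px * REF_HEIGHT_CM / REF_SPACING_PX

print(height_from_spacing_cm(1024))        # 150.0
print(round(height_from_spacing_cm(615)))  # 90
```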

FIG. 7 is a schematic diagram of the light areas of images under rotation according to an embodiment of the present invention. In this embodiment, rotation is detected under the condition that the spacing between light areas is constant. Here, the user wears a helmet fitted with the camera device 110. When the user looks up (state U), the camera device 110 captures a first image 710; when the user looks straight ahead (state V), the camera device 110 captures a second image 720. The image analysis unit 220 determines whether the slopes of the lines t1 and t2 formed by the outermost light areas change between the first image 710 and the second image 720, and calculates the rotation angle of the camera device 110 from the change in slope. For example, a tangent function can be used to compute the current look-up angle, that is, the rotation angle of the camera device 110.
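As a rough sketch of the slope-based estimate: if the outermost light areas form a line of slope s in each image, the tilt of that line is atan(s), and the rotation between the two images is the difference of the two tilts. The patent only states that a tangent function is used, so this exact formulation is an assumption:

```python
import math

# Sketch: rotation angle from the change in slope of the line formed by
# the outermost light areas. The patent says a tangent function is used;
# taking the difference of arctangents is our assumed formulation.

def rotation_deg(slope_before: float, slope_after: float) -> float:
    """Rotation angle (degrees) implied by a change in line slope."""
    return math.degrees(math.atan(slope_after) - math.atan(slope_before))

print(rotation_deg(0.0, 0.0))            # 0.0 (no rotation)
print(round(rotation_deg(0.0, 1.0), 6))  # 45.0 (line tilted to 45 degrees)
```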

Alternatively, a designated light area P may be locked in both the first image 710 and the second image 720, and the spacings between the designated light area P and its four adjacent light areas detected. The change in these spacings from the first image 710 to the second image 720 likewise reveals a change in the rotation angle of the camera device 110. Here, the spacing between the designated light area P and its horizontally adjacent light areas in the first image 710 is smaller than the corresponding spacing in the second image 720, which shows that the camera device 110 has rotated in the vertical direction.

FIG. 8A and FIG. 8B are schematic diagrams of a calibration screen according to an embodiment of the present invention. A calibration procedure may be performed before use. That is, a plurality of assembly pads A are put together to form the light emission area. Next, the camera device 110 is turned on and moved into the light emission area; for example, the user enters the area wearing a helmet fitted with the camera device 110, or holding a cane fitted with it. An ideal light region Z is then displayed on the screen R of the camera device 110, as shown in FIG. 8A. Once the camera device 110 is inside the light emission area and the image deviation at a fixed position is minimal, the device can be judged to be stationary.

Next, the lengths a and b between the central light area and its vertically adjacent light areas, and the lengths c and d between the central light area and its horizontally adjacent light areas, are detected. It is then determined whether the vertical length ratio a/b and the horizontal length ratio c/d fall within a preset value; for example, the ratio of length a to length b should deviate by less than 5%, and likewise for the ratio of length c to length d.
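The ratio check can be sketched as below. The specification states that the ratios "should be less than 5%", which we read as the ratios deviating from 1 by less than 5% (lengths a and b, and c and d, being nearly equal); that reading, and the helper's name, are our assumptions:

```python
# Sketch of the calibration check on the central light area's spacings.
# Assumption: "ratio less than 5%" means |a/b - 1| < 0.05 (a and b
# nearly equal), and likewise for c/d.

def is_centered(a: float, b: float, c: float, d: float,
                tol: float = 0.05) -> bool:
    """True if vertical (a, b) and horizontal (c, d) spacings balance."""
    return abs(a / b - 1.0) < tol and abs(c / d - 1.0) < tol

print(is_centered(100, 101, 99, 100))  # True  (within 5%)
print(is_centered(100, 120, 99, 100))  # False (vertical imbalance)
```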

FIG. 9 is a block diagram of an object tracking system according to another embodiment of the present invention. In this embodiment, the object tracking system 900 further includes an electronic device 910, which is connected to the assembly pads A in a wired or wireless manner, for example via Universal Serial Bus (USB), Bluetooth, or Wi-Fi. The electronic device 910 is, for example, a desktop computer, a notebook computer, a tablet computer, a smartphone, or another type of electronic device with computing capability.

Before object tracking begins, the electronic device 910 may be used to build a map between virtual space and real space. The electronic device 910 detects the light beams of all assembly pads A in a wired or wireless manner and obtains the identification codes of their light emitters 240. Each assembly pad A is then asked to emit light in a distinguishable blinking pattern, such as a specific timing or a specific intensity. After the image capturer 230 of the camera device 110 receives the light signal and captures an image, the real-space position at which each assembly pad is placed can be obtained; each identification code is then associated with its corresponding real-space position to produce the final map. In other embodiments, the map may instead be built in the camera device 110; no limitation is imposed here.
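A toy sketch of the mapping step: pads are flashed one at a time, the captured frame localizes the active pad, and the result is a dictionary from identifier to real-space position. The pad-driving interface below is hypothetical; the patent does not specify these APIs:

```python
# Sketch of building the ID -> real-space-position map. The flash_one /
# locate_active_pad callables are hypothetical stand-ins for driving a
# pad's emitter and finding its light area in the captured image.

from typing import Callable, Dict, Iterable, Tuple

Position = Tuple[float, float]  # real-space (x, y) in cm

def build_pad_map(pad_ids: Iterable[str],
                  flash_one: Callable[[str], None],
                  locate_active_pad: Callable[[], Position]
                  ) -> Dict[str, Position]:
    """Flash each pad in turn and record where its light area appears."""
    pad_map: Dict[str, Position] = {}
    for pad_id in pad_ids:
        flash_one(pad_id)               # only this pad emits now
        pad_map[pad_id] = locate_active_pad()
    return pad_map

# Tiny demo with stubbed-out hardware:
positions = {"pad-1": (0.0, 0.0), "pad-2": (30.0, 0.0)}
active = {"id": None}
demo = build_pad_map(positions,
                     flash_one=lambda pid: active.update(id=pid),
                     locate_active_pad=lambda: positions[active["id"]])
print(demo)  # {'pad-1': (0.0, 0.0), 'pad-2': (30.0, 0.0)}
```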

FIG. 10A to FIG. 10C are schematic diagrams of assembly pad configurations according to an embodiment of the present invention. Here, infrared light sources IR serve as the light emitters; a single assembly pad may carry one or more IR zones, each zone consisting of at least one IR LED. The assembly pad A of FIG. 10A includes one infrared light source IR; that of FIG. 10B includes three; that of FIG. 10C includes four. Each assembly pad A is square, with male-female mechanical joints on its edges; the joints may be single or double, and the male and female positions may be adjacent or opposite. The electrical contacts IO are located on the mechanical joints, and a single edge may carry all-male, all-female, or mixed male-and-female contacts IO for the electrical connection.

FIG. 11A to FIG. 11C are schematic diagrams of assembly pad configurations according to another embodiment of the present invention. In this embodiment, the assembly pad A is further combined with a force sensor F. In the assembly pad A of FIG. 11A, one force sensor F is provided, with an infrared light source IR at its center. In FIG. 11B, one force sensor F is provided, and the infrared light source IR is placed where it does not overlap the force sensor F. In FIG. 11C, the infrared light source IR is placed at the center, and four force sensors F are arranged around it without overlapping it. The placement and number of infrared light sources IR may evolve with the precision the system requires and the maturity of the camera in the image capturer 230; they are not limited to a central placement, nor to four.

In summary, the present invention combines a plurality of light-emitting assembly pads and uses them to define a range of activity within which a specific object is tracked. The assembly pads are structurally simple and can be assembled by hand, making them easy both to put together and to take apart. The number of pads can be increased or decreased as needed, giving flexible expansion; they can conform to a venue of any shape, and can also be laid on desktops, walls, and other surfaces where interaction is required. The applications are therefore broad: not only virtual reality and augmented reality interaction, but also home care and tracking the position of people or pets.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the relevant technical field may make slight changes and refinements without departing from the spirit and scope of the present invention; the scope of protection of the present invention shall therefore be defined by the appended claims.

Claims (22)

An object tracking method, comprising: emitting a light beam from a light emitter of each of a plurality of assembly pads, wherein the assembly pads are put together to form a light emission area; continuously capturing, by a camera device within the light emission area and facing the assembly pads, a plurality of images, each of the images including a plurality of light areas formed by the light beams; analyzing a first image and a second image to compute a movement change of the light areas, wherein the first image and the second image are two adjacent images among the plurality of images; and determining a motion state of the camera device based on the movement change.
The object tracking method as recited in claim 1, wherein the step of analyzing the first image and the second image to compute the movement change of the light areas comprises: locking a designated light area in each of the first image and the second image; and computing a displacement direction and a displacement amount on a horizontal plane based on the coordinate position of the designated light area in the first image and the coordinate position of the designated light area in the second image.
The object tracking method as recited in claim 1, wherein the step of analyzing the first image and the second image to compute the movement change of the light areas comprises: locking an imaging position in each of the first image and the second image; recording a first light area at the imaging position of the first image; recording a second light area at the imaging position of the second image; obtaining a displacement direction on a horizontal plane based on the positional relationship between the first light area and the second light area; and obtaining a displacement amount on the horizontal plane based on the number of light areas lying between the first light area and the second light area.
The object tracking method as recited in claim 1, wherein the step of analyzing the first image and the second image to compute the movement change of the light areas comprises: locking a designated light area in each of the first image and the second image; in the first image, recording a first spacing when the spacings between the designated light area and its adjacent light areas are all equal; in the second image, recording a second spacing when the spacings between the designated light area and its adjacent light areas are all equal; and when the first spacing differs from the second spacing, obtaining a displacement direction and a displacement amount on a vertical axis based on the change between the first spacing and the second spacing.
The object tracking method as recited in claim 4, further comprising: when the first spacing is the same as the second spacing, determining the movement change to be a horizontal movement.
The object tracking method as recited in claim 1, wherein the step of analyzing the first image and the second image to compute the movement change of the light areas comprises: determining whether the slope between the straight lines formed by the outermost light areas changes between the first image and the second image; and wherein the step of determining the motion state of the camera device based on the movement change comprises: calculating a rotation angle of the camera device based on the change in the slope.
The object tracking method as recited in claim 1, further comprising: obtaining an identification code of each of the assembly pads; driving the assembly pads to emit light beams one by one; obtaining, based on a received light signal and a captured image, the real-space position at which each of the assembly pads is placed; and associating each identification code with its corresponding real-space position to obtain a map.
The object tracking method as recited in claim 1, further comprising: capturing, by the camera device within the light emission area and facing the assembly pads, a calibration image and displaying the calibration image on a screen; displaying, on the screen, an ideal light region above the calibration image; and performing a calibration procedure using the ideal light region and the light areas in the calibration image.
The object tracking method as recited in claim 1, wherein each of the assembly pads has male-female mechanical joints on its edges, and the assembly pads are put together via the male-female mechanical joints.
The object tracking method as recited in claim 1, wherein the camera device is mounted on an object, the object being one of a helmet, a cane, a remote control, a glove, a shoe cover, and an article of clothing.
The object tracking method as recited in claim 1, wherein each of the assembly pads further includes a force sensor.
An object tracking system, comprising: a plurality of assembly pads put together to form a light emission area, each of the assembly pads including a light emitter for emitting a light beam; and a camera device, including: an image capturer continuously capturing, within the light emission area and facing the assembly pads, a plurality of images, each of the images including a plurality of light areas formed by the light beams; and an image analysis unit, coupled to the image capturer, receiving and analyzing the images, wherein the image analysis unit analyzes a first image and a second image to compute a movement change of the light areas, the first image and the second image being two adjacent images among the plurality of images, and the image analysis unit determines a motion state of the camera device based on the movement change.
The object tracking system as recited in claim 12, wherein the image analysis unit locks a designated light area in each of the first image and the second image, and computes a displacement direction and a displacement amount on a horizontal plane based on the coordinate position of the designated light area in the first image and the coordinate position of the designated light area in the second image.
The object tracking system as recited in claim 12, wherein the image analysis unit locks an imaging position in each of the first image and the second image; records a first light area at the imaging position of the first image; records a second light area at the imaging position of the second image; obtains a displacement direction on a horizontal plane based on the positional relationship between the first light area and the second light area; and obtains a displacement amount on the horizontal plane based on the number of light areas lying between the first light area and the second light area.
The object tracking system as recited in claim 12, wherein the image analysis unit locks a designated light area in each of the first image and the second image; in the first image, records a first spacing when the spacings between the designated light area and its adjacent light areas are all equal; in the second image, records a second spacing when the spacings between the designated light area and its adjacent light areas are all equal; and when the first spacing differs from the second spacing, obtains a displacement direction and a displacement amount on a vertical axis based on the change between the first spacing and the second spacing.
The object tracking system as recited in claim 15, wherein the image analysis unit determines the movement change to be a horizontal movement when the first spacing is the same as the second spacing.
The object tracking system as recited in claim 12, wherein the image analysis unit determines whether the slope between the straight lines formed by the outermost light areas changes between the first image and the second image, and calculates a rotation angle of the camera device based on the change in the slope.
The object tracking system as recited in claim 12, wherein the image analysis unit obtains an identification code of each of the assembly pads, drives the assembly pads to emit light beams one by one, obtains, based on a received light signal and a captured image, the real-space position at which each of the assembly pads is placed, and associates each identification code with its corresponding real-space position to obtain a map.
The object tracking system as recited in claim 12, wherein the image capturer captures, within the light emission area and facing the assembly pads, a calibration image and displays the calibration image on a screen; and the image analysis unit displays, on the screen, an ideal light region above the calibration image and performs a calibration procedure using the ideal light region and the light areas in the calibration image.
The object tracking system as recited in claim 12, wherein each of the assembly pads has male-female mechanical joints on its edges, and the assembly pads are put together via the male-female mechanical joints.
The object tracking system as recited in claim 12, wherein the camera device is mounted on an object, the object being one of a helmet, a cane, a remote control, a glove, a shoe cover, and an article of clothing.
The object tracking system as recited in claim 12, wherein each of the assembly pads further includes a force sensor.
TW106134271A 2017-10-03 2017-10-03 Method and system for tracking object TWI635255B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW106134271A TWI635255B (en) 2017-10-03 2017-10-03 Method and system for tracking object
US15/849,639 US20190102890A1 (en) 2017-10-03 2017-12-20 Method and system for tracking object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW106134271A TWI635255B (en) 2017-10-03 2017-10-03 Method and system for tracking object

Publications (2)

Publication Number Publication Date
TWI635255B true TWI635255B (en) 2018-09-11
TW201915442A TW201915442A (en) 2019-04-16

Family

ID=64452995

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106134271A TWI635255B (en) 2017-10-03 2017-10-03 Method and system for tracking object

Country Status (2)

Country Link
US (1) US20190102890A1 (en)
TW (1) TWI635255B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923338A (en) * 2020-07-07 2022-01-11 黑快马股份有限公司 Follow shooting system with picture stabilizing function and follow shooting method with picture stabilizing function

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI267806B (en) * 2005-04-28 2006-12-01 Chung Shan Inst Of Science Vehicle control training system and its method
US8547401B2 (en) * 2004-08-19 2013-10-01 Sony Computer Entertainment Inc. Portable augmented reality device and method
WO2016150292A1 (en) * 2015-03-26 2016-09-29 北京小小牛创意科技有限公司 Virtuality-and-reality-combined interactive method and system for merging real environment
KR101713223B1 (en) * 2015-10-20 2017-03-22 (주)라스 Apparatus for realizing virtual reality
CN106643699A (en) * 2016-12-26 2017-05-10 影动(北京)科技有限公司 Space positioning device and positioning method in VR (virtual reality) system
CN107045201A (en) * 2016-12-27 2017-08-15 上海与德信息技术有限公司 A kind of display methods and system based on VR devices
US20170236328A1 (en) * 2016-02-12 2017-08-17 Disney Enterprises, Inc. Method for motion-synchronized ar or vr entertainment experience

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098095A1 (en) * 2004-01-30 2016-04-07 Electronic Scripting Products, Inc. Deriving Input from Six Degrees of Freedom Interfaces
US9341464B2 (en) * 2011-10-17 2016-05-17 Atlas5D, Inc. Method and apparatus for sizing and fitting an individual for apparel, accessories, or prosthetics
US20150097719A1 (en) * 2013-10-03 2015-04-09 Sulon Technologies Inc. System and method for active reference positioning in an augmented reality environment
JP6645687B2 (en) * 2015-08-31 2020-02-14 キヤノン株式会社 Display device and control method
US20170374333A1 (en) * 2016-06-27 2017-12-28 Wieden + Kennedy, Inc. Real-time motion capture and projection system
US20190083808A1 (en) * 2017-09-20 2019-03-21 Jessica Iverson Apparatus and method for emitting light to a body of a user


Also Published As

Publication number Publication date
TW201915442A (en) 2019-04-16
US20190102890A1 (en) 2019-04-04

Similar Documents

Publication Publication Date Title
US12310679B2 (en) High-speed optical tracking with compression and/or CMOS windowing
US12020458B2 (en) Determining the relative locations of multiple motion-tracking devices
US8686943B1 (en) Two-dimensional method and system enabling three-dimensional user interaction with a device
JP5049228B2 (en) Dialogue image system, dialogue apparatus and operation control method thereof
CN111383266B (en) Object tracking system and object tracking method
US8971565B2 (en) Human interface electronic device
TW200907764A (en) Three-dimensional virtual input and simulation apparatus
CN113474816B (en) Elastic dynamic projection mapping system and method
TW200842665A (en) Cursor controlling device and method for image apparatus and image system
KR20030075399A (en) Motion Mouse System
KR100532525B1 (en) 3 dimensional pointing apparatus using camera
KR101651535B1 (en) Head mounted display device and control method thereof
TW201626174A (en) Optical navigation device with enhanced tracking speed
US9201519B2 (en) Three-dimensional pointing using one camera and three aligned lights
US20040001074A1 (en) Image display apparatus and method, transmitting apparatus and method, image display system, recording medium, and program
CN105320274A (en) Direct three-dimensional pointing using light tracking and relative position detection
TWI635255B (en) Method and system for tracking object
Tsun et al. A human orientation tracking system using Template Matching and active Infrared marker
US9678583B2 (en) 2D and 3D pointing device based on a passive lights detection operation method using one camera
CN109688291B (en) Object tracking method and system
ES2812851T3 (en) Preview device
JP2003076488A (en) Device and method of determining indicating position
KR101695727B1 (en) Position detecting system using stereo vision and position detecting method thereof
JP6666121B2 (en) Information processing system, information processing apparatus, information processing method, and information processing program
WO2013086718A1 (en) Input device and method