TW201301877A - Imaging sensor based multi-dimensional remote controller with multiple input modes - Google Patents
- Publication number: TW201301877A
- Application number: TW101121525A
- Authority: TW (Taiwan)
- Prior art keywords: television, display screen, remote controller, cursor, virtual
Description
The present invention relates to a remote controller, and more particularly to a multi-input-mode remote controller with an image sensor for an Internet TV.
Recent developments in Internet TV provide not only traditional television programming but also interactive content such as 2D and 3D web pages and online 2D/3D games. To meet users' steadily growing demands on the input features of Internet TVs, there is a strong need for a next-generation remote controller that offers input capabilities beyond those of existing input devices.
It is an object of the present invention to provide a multi-input-mode Internet TV remote controller with an image sensor.
In one embodiment, the present invention provides a method of generating input commands for a television using a remote controller having an image sensor. The method identifies the corners of the television display from a graphic image captured by the image sensor of the remote controller. The method then performs edge detection and image segmentation using the reference television size of the television display screen to identify the pixel coordinates of the four corners of the television display in the graphic image. The method then uses a cross-ratio algorithm to map the camera center position in pixel coordinates to a virtual television coordinate, and finally maps the cursor position in virtual television coordinates to the coordinates of the television display screen.
In another embodiment, the present invention provides a remote controller including an image sensor, a multi-touch touchpad, and a microprocessor. The image sensor performs image capture and depth sensing. The microprocessor processes the image sensor data and the multi-touch touchpad data received from the image sensor and the multi-touch touchpad in one of a plurality of input modes. In a gesture input mode or a cursor control mode, the microprocessor performs the following steps: (i) identifying the corners of a television display screen from a graphic image captured by the image sensor of the remote controller, and (ii) performing edge detection and segmentation of the graphic image using the reference television size of the television display screen to identify the pixel coordinates of the four corners of the television display screen in the graphic image.
In one embodiment, the present invention provides a remote controller input system for a television. The system includes a remote controller as described above and a television remote controller support module. The television remote controller support module is configured to (i) map the camera center position in pixel coordinates to a virtual television coordinate using a cross-ratio algorithm, and (ii) map the cursor position in virtual television coordinates to the coordinates of the television display screen.
The embodiments of the invention described below are illustrative and are not intended to limit the scope of the invention. It will be understood that the present invention uses one or more computer-readable media, each of which may contain operational data or computer-executable data. Computer-executable data includes data structures, programs, and other program modules. The computer-readable medium may be random access memory (RAM), read-only memory (ROM), or any other device or component capable of providing data or executing instructions. All computer-readable media are suitable for the present invention, but in some embodiments the computer-readable medium is a tangible, non-transitory computer-readable medium.
Hardware and software architecture of the Internet TV and the remote controller
Referring to FIG. 1, FIG. 1 is a perspective view of the remote controller. As shown in FIG. 1, the remote controller 100 includes a 2D or 3D image sensor 102 and a touchpad 104, where the image sensor 102 has pixel coordinates and the touchpad 104 is a multi-finger touchpad or digitizer for controlling the Internet TV 110. The touchpad 104 operates by transmitting absolute coordinates, or by transmitting relative coordinates in the conventional mouse mode. In addition, the remote controller 100 includes one or more buttons 106 for operating the Internet TV 110 or changing the mode of the remote controller 100.
The firmware of the remote controller 100 wirelessly transmits data packets to the Internet TV 110 through a wireless communication device 112, and the Internet TV 110 completes the wireless link through its own wireless communication device 114. The wireless communication may be Bluetooth, WiFi, or another suitable technology. The Internet TV 110 includes a remote controller support module that receives signals from the remote controller 100 and generates multi-dimensional input commands, for example to control a cursor.
The image sensor 102 of the remote controller 100 has two functions: image capture and depth sensing. Depth sensing means representing, in pixel coordinates, the distance between the remote controller 100 and objects in the 3D environment. Using the 2D color image captured by the image sensor 102 and the depth sensing process, the remote controller 100 can map the pixel coordinates of the camera center position to the cursor position in television screen coordinates.
Referring to FIG. 2, FIG. 2 is a block diagram of the remote controller. As described above, the remote controller 100 includes the image sensor 102, a multi-finger touchpad 104 or digitizer, and a plurality of buttons 106 or push switches, where the image sensor 102 may be a CMOS 2D/3D sensor. As shown in FIG. 2, the image sensor 102, the multi-finger touchpad 104, and the buttons 106 are electrically connected to a microprocessor 200, and, as shown in FIG. 1, the data of the microprocessor 200 is transmitted to the Internet TV 110 through the wireless communication device 112.
Referring to FIG. 3, FIG. 3 is a block diagram of the remote controller firmware and the remote controller support module of the Internet TV. The firmware 300 resides inside the remote controller 100, and the remote controller support module 302 resides inside the Internet TV 110. The firmware 300 receives discrete data sets from the multi-finger touchpad 104 and the image sensor 102 and wirelessly transmits them to the wireless communication driver 306 of the Internet TV 110.
In some preferred embodiments, the remote controller 100 is a composite USB human interface device that includes at least two independent logical devices: the first logical device is the multi-finger touchpad, and the second is the image sensor. The television operating system 304 receives data packets from the remote controller 100 through the wireless communication driver 306, and the data packets are forwarded in turn to the USB digitizer driver 308 and the cursor navigation module 320.
Multi-dimensional input functions
The independent input control modes of the remote controller are described below; please continue to refer to FIG. 3. The remote controller of the present invention is a multi-function, multi-dimensional remote controller with three independent input control modes.
The first control mode is the cursor position mode, which controls the cursor position of the Internet TV by combining the image sensor and touchpad inputs. The second control mode is the gesture input mode, in which input signals are generated by the image sensor as the remote controller moves through open space. The third control mode is the multi-finger input mode, in which multi-finger touch signals are generated by multiple fingers touching the touchpad.
Any of the above control modes can be applied to two-dimensional, three-dimensional, or other multi-dimensional input commands. Moreover, the input control modes can be switched by pressing one or more buttons of the remote controller.
In the cursor position mode, image sensor data generated by moving the remote controller has higher priority than touchpad data. The image sensor data serves as a direct pointing command, issuing absolute position commands to navigate the cursor on the TV screen. In the cursor position mode, if the displacement rate of the image sensor is below a preset value, for example when the remote controller is held still, the touchpad data is recognized and applied for precise cursor movement. For example, a user can select an object by pointing the remote controller at it on the screen; when the user then needs to fine-tune the cursor position, the user can hold the remote controller still and move the cursor via the touchpad for precise cursor control.
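The arbitration between the two input sources in the cursor position mode can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's firmware; the threshold value and the function name are invented for the example.

```python
# Hypothetical sketch of cursor-position-mode arbitration: image sensor data
# (absolute pointing) wins while the remote is moving, and touchpad data is
# only honored as a relative fine adjustment while the remote is nearly still.

STATIONARY_THRESHOLD = 2.0  # pixels/frame of camera-center motion (assumed)

def select_cursor_input(sensor_displacement_rate, sensor_pos, touch_delta, cursor):
    """Return the new cursor position (x, y) in TV screen coordinates."""
    if sensor_displacement_rate >= STATIONARY_THRESHOLD:
        # Remote is moving: image sensor data has priority (absolute pointing).
        return sensor_pos
    # Remote is (nearly) still: apply touchpad data as a relative fine move.
    return (cursor[0] + touch_delta[0], cursor[1] + touch_delta[1])
```

A caller would invoke this once per input frame, feeding it the latest camera-center displacement rate and touchpad deltas.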
The gesture input mode performs gesture input based on how the remote controller moves through open space. For example, the user holds the remote controller 100 and moves it to perform a gesture, and these gestures form input commands according to a gesture database. The touchpad has no function in the gesture input mode, so any contact with the touchpad is not recognized and produces no input command.
The gesture input mode uses only image sensor data. Conversely, the multi-finger input mode uses only touchpad data. In this mode, the touchpad data is the absolute touch position data produced by finger gestures on the touchpad, and all image sensor data is ignored.
Cursor position control mode
Please continue to refer to FIG. 3, in particular the operation of the remote controller support module 302 when the remote controller is in the cursor control mode. In the cursor control mode, the television display renders a cursor whose position is computed from the image sensor data and the touchpad data. The image sensor data generated by pointing the image sensor at the television display provides coarse cursor control (low precision), where the coarse cursor control is an absolute position command.
During this process, the firmware of the remote controller performs real-time image processing to detect the four corners of the TV frame and transmits the data to the remote controller support module 302 inside the Internet TV. In the cursor position control mode, the touchpad data generated by dragging a finger on the touchpad provides precise cursor position control (high precision) through a relative pointing method.
As shown in FIG. 3, the remote controller support module 302 includes the cursor navigation module 320, which maps the pixel coordinates of the camera center position to the TV screen coordinates of the absolute cursor position. The cursor navigation module 320 further includes a cross-ratio module 312, a camera center position mapping module 314, and/or a virtual cursor mapping module 316.
The cross-ratio module 312 computes the cross-ratio values of five points, where the quadrant codes of the five points and the preset virtual markers correspond to the quadrant codes and pixel coordinates of the virtual TV frame image. The camera center position mapping module 314 maps the pixel coordinates of the camera center position to the virtual TV coordinates of the virtual cursor; this is accomplished using the quadrant codes, the cross ratios, and the virtual TV coordinates of the preset virtual markers. The virtual cursor mapping module 316 maps the virtual cursor in the virtual TV frame to the cursor in the real TV frame. The cursor pointing module 310 generates cursor position commands from the output of the cursor navigation module 320, assisting precise cursor control and smoothing the cursor motion. In addition, in some preferred embodiments, the TV support module 302 generates a cursor command from the data packets of the multi-finger touchpad. The program modules of FIG. 3 are further described under the subheadings below.
Image processing in the remote controller and the TV remote controller support module
Referring again to FIG. 3, the image sensor captures images, and the images are processed by the firmware 300 of the remote controller and/or the remote controller support module 302 of the Internet TV. The basic image processing steps are as follows. First, image parameters are identified in an initialization step. These parameters include the TV size, the distance between the television display and the image sensor, the horizontal, vertical, and rotational orientation of the image sensor, and the mapping parameter set selected by the TV remote controller support module 302.
Next, the edge detection and segmentation steps are performed. In these steps, the image sensor detects the four corners of the TV frame, and the firmware performs the segmentation of the image.
Then, a continuous mapping step is performed, in which the TV remote controller support module 302 maps the coordinates of the absolute cursor position to the coordinates of the television display. Table 1 below shows the steps performed by the image sensor and the TV remote controller support module 302; its contents are further explained below.
Detecting edges, segmenting the TV frame, and identifying the four corners of the TV screen
Detecting the TV screen and identifying its four corners can be accomplished with a depth-sensing camera that simultaneously captures an RGB digital image and a depth image (the per-pixel distance between the camera and the objects). The camera may be a commercial 3D sensing camera such as Microsoft's Kinect®, whose 2D/3D image sensor hardware combines a conventional 2D color image chip with an independent 3D depth-sensing chip (as in the Kinect® sensor). A single hardware chip that simultaneously captures the 2D color TV image and the depth image is also applicable. In some embodiments, the remote controller uses the 2D image and the depth-sensing data to detect the four corners of the TV frame in the pixel plane, as shown in FIGS. 4A to 4F. FIG. 4A is a schematic view of a TV environment in which the graphic image of the television display screen is captured. The TV environment includes a television 400, a wall 402 behind the television 400, an alcove 404 further behind the television 400, a person 406, and a podium 408 to the right of the television 400.
When the remote controller initializes cursor navigation, it sends a command to the television 400 to display a one-shot graphic image, for example a solid-color screen 410. As shown in FIG. 4B, the solid-color screen 410 is the hatched area in the figure, and the color may be, for example, blue. The remote controller obtains the TV image and the depth data from the captured graphic image.
For the depth data of the graphic image, refer to FIG. 4C, which uses different patterns to indicate the different depths at which objects lie. Using the image data, the remote controller can search for and identify the distance between the TV screen 420 and the image sensor. Once this distance is identified, the remote controller generates a color histogram of the depth image based on distance. For example, FIG. 4C shows the depth of each object in the image, and each object is marked with a different pattern according to its depth. As shown in FIG. 4C, objects 420 and 422 are at the same distance from the image sensor.
Next, the image of FIG. 4C undergoes binarization, with the binarization threshold set around the distance value of the television display screen 420. In this way, the distance values of all objects are binarized into a solid color. Referring to FIG. 4D, FIG. 4D is a schematic view of the binarized image of FIG. 4C. Note that objects 430 and 432 are both shown filled solid because they are at the same distance from the image sensor.
After identifying the distance data of the TV frame, the remote controller can obtain the predicted size of the TV frame from a built-in lookup table database. Referring to FIG. 4E, using the predicted TV frame size from the database, the remote controller can correctly identify the object in image 440 whose size is closest to the TV screen. Once the TV object is identified, that is, once the TV frame is segmented, the position of each corner of the TV frame is computed in pixel coordinates, as shown in FIG. 4F. The centroid position of the TV frame can be derived from the positions of the four corners 450.
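The detection steps above (threshold the depth image around the TV's distance, segment the frame, then compute corner and centroid coordinates) can be sketched as follows. This is an illustrative sketch, not the patent's firmware; the grid, the tolerance value, and the use of a simple bounding box are assumptions for the example.

```python
# Illustrative sketch: binarize a depth image around the TV's distance,
# take the bounding box of the matching pixels as the segmented TV frame,
# and derive the centroid from the four corner coordinates.

def segment_tv_frame(depth, tv_distance, tolerance=0.1):
    """depth: 2D list of per-pixel distances; returns (corners, centroid)."""
    pixels = [(x, y)
              for y, row in enumerate(depth)
              for x, d in enumerate(row)
              if abs(d - tv_distance) <= tolerance]   # binarization step
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    corners = [(min(xs), min(ys)), (max(xs), min(ys)),
               (min(xs), max(ys)), (max(xs), max(ys))]
    centroid = (sum(c[0] for c in corners) / 4.0,
                sum(c[1] for c in corners) / 4.0)
    return corners, centroid

# Toy 5x5 depth image: background at 9.0, a 3x3 TV patch at distance 3.0.
depth = [[9.0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        depth[y][x] = 3.0
corners, centroid = segment_tv_frame(depth, tv_distance=3.0)
```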
The cross-ratio algorithm for mapping the camera center position to TV frame coordinates
Once the center position of the TV screen is identified in pixel coordinates, the cross-ratio algorithm maps the camera center position to TV frame coordinates. The cross ratio is a ratio from projective geometry in applied mathematics. For example, referring to FIG. 5, four straight lines extend from a point O; the points x1, x2, x3, x4 and x1', x2', x3', x4' on the two transversals are related by a projective transformation, and therefore, as shown in Equation 1 below, the cross ratio of (x1, x2, x3, x4) equals the cross ratio of (x1', x2', x3', x4').
Cross ratio = [(x1-x3)/(x2-x3)]/[(x1-x4)/(x2-x4)] = [(x1'-x3')/(x2'-x3')]/[(x1'-x4')/(x2'-x4')]  (Equation 1)
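Equation 1's invariance is easy to check numerically: the cross ratio of four points on a line is unchanged by a 1D projective (Moebius) map x -> (a*x + b)/(c*x + d). The coefficients below are arbitrary sample values.

```python
# Verify Equation 1: the cross ratio survives a projective transformation.

def cross_ratio(x1, x2, x3, x4):
    return ((x1 - x3) / (x2 - x3)) / ((x1 - x4) / (x2 - x4))

def projective(x, a=2.0, b=1.0, c=0.1, d=3.0):
    # Arbitrary 1D projective map standing in for the camera's perspective.
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
cr_before = cross_ratio(*pts)
cr_after = cross_ratio(*[projective(x) for x in pts])
```

For these sample points cr_before is 1.5, and cr_after agrees with it to floating-point precision.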
The remote controller uses this invariance of the cross ratio to map the camera center position in pixel coordinates to the cursor position in TV screen coordinates. As shown in FIG. 6, the optical center 600 projects a line through the camera lens 602, and this line extends to the TV screen 604, producing the projective transformation.
In conventional computer graphics, the relationship between pixel coordinates (x, y, z) and TV screen coordinates (X, Y) is defined by the projective transformation; see Equation 2.
The projective transformation matrix comprises the camera intrinsic matrix and the extrinsic matrix. The intrinsic matrix represents internal camera parameters such as the focal length. The extrinsic matrix represents the 3D translation and rotation between the 2D pixel scene and the 3D environment (including the TV frame). Every defined 2D position in TV screen coordinates is captured through the lens of the digital camera and passes through the projective transformation matrix, so that each pixel coordinate corresponds to a TV coordinate.
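Equation 2 itself is not reproduced in this text. As a rough sketch of the intrinsic/extrinsic decomposition described above, a minimal pinhole projection is shown below; the focal length, principal point, identity rotation, and pure-translation extrinsic part are all assumptions for illustration, not values from the patent.

```python
# Minimal pinhole-projection sketch: extrinsic part (here a pure translation,
# rotation R = identity) followed by the intrinsic part (focal length f and
# principal point (cx, cy)). All parameter values are assumed.

def project(point3d, f=500.0, cx=320.0, cy=240.0, t=(0.0, 0.0, 2.0)):
    """Map a 3D point in TV/world coordinates to 2D pixel coordinates."""
    X = point3d[0] + t[0]      # extrinsic: translate into camera coordinates
    Y = point3d[1] + t[1]
    Z = point3d[2] + t[2]
    u = f * X / Z + cx         # intrinsic: perspective divide plus offsets
    v = f * Y / Z + cy
    return u, v
```

With these assumed parameters, the world origin lands on the principal point (320, 240) of a VGA image.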
The projective transformation preserves the cross ratio, ratios of length ratios, collinearity of points, and the order of points. Because these projective invariants are unchanged by the projective transformation, the cross ratio can map the camera center position in pixel coordinates to the cursor position in TV screen coordinates. In other words, the cross ratios computed from five known points in pixel coordinates (the four preset positions, or virtual markers, and the camera center position) must equal the cross ratios of the four known points and the unknown cursor position in TV screen coordinates. As shown in FIGS. 7, 8A, and 8B, the cursor position corresponding to the camera center, with unknown coordinates (X_cursor, Y_cursor), can be derived from the cross ratios in the pixel plane.
FIG. 7 is a schematic view of the projective transformation and of locating the cursor from the camera. FIG. 7 shows the projection center 700 of the image sensor. The projection passes through the image plane 704, which contains the camera pixel coordinates p1, p2, p3, p4 and the camera center C. The projection continues to a reference plane 710, which contains a cursor 712 and the TV screen coordinates 714 labeled P1, P2, P3, P4.
FIGS. 8A and 8B illustrate that the cross ratio is invariant between TV screen coordinates and camera pixel coordinates. FIG. 8A shows the image sensor 106 and the television 110 with the cursor 712 and markers m1, m2, m3, m4. FIG. 8B shows the captured TV image 800 and the camera center position, which is labeled p5 and lies at the center of the TV image. The markers p1, p2, p3, p4 correspond to the markers m1, m2, m3, m4. Note that cross ratio_TV = F(m1, m2, m3, m4, p5) equals cross ratio_camera = F(p1, p2, p3, p4, p5).
In applied mathematics, no three of the five points in the two cross-ratio equations may be collinear, and the projective relations are expressed as follows. From the two cross ratios and Equations 3 and 4, the unknown cursor position (X_cursor, Y_cursor) can be obtained.
Cross Ratio 1 = (|M431| * |M521|) / (|M421| * |M531|)
Cross Ratio 2 = (|M421| * |M532|) / (|M432| * |M521|)  (Equations 3 and 4)
In Equations 3 and 4, Mijk is the 3x3 matrix whose columns are the homogeneous coordinates of points i, j, k:
Mijk = [ xi xj xk ; yi yj yk ; 1 1 1 ],
where i, j, k denote the point indices and (xi, yi) is the 2D coordinate of point i. The scalar value |Mijk| is the determinant of the matrix Mijk.
The computation of cursor navigation via cross ratios comprises the following steps:
Step 1: In the first step, the raw data packets of the four preset points are received to compute the cross ratios, i.e., the positions of the "virtual markers" in pixel coordinates. As shown in FIG. 9, the captured image of the TV screen 900 lies in pixel coordinates with X-axis 904 and Y-axis 902; the captured TV frame image contains the positions Pi (i = 1-4) of the four virtual markers 908, and a fifth point P5 906 representing the camera center position. For a VGA-resolution image, for example, P5 is located at (320, 240) in the (X, Y) coordinate system.
步驟2:在第二個步驟中,透過步驟1的點資料計算交互比例。舉例來說,利用Pi(i=1-4)的預設點資料與相機中心位置P5進行計算,如下所示,再經由方程式3與4獲得交互比例CR1與交互比例CR2。須注意的是,應該避免三點共線的條件。 Step 2: In the second step, the interaction ratio is calculated through the point data of step 1. For example, using the preset point data of Pi (i=1-4) and the camera center position P5, as shown below, the interaction ratio CR1 and the interaction ratio CR2 are obtained via Equations 3 and 4. It should be noted that the conditions of the three-point collinearity should be avoided.
Pi(x,y)=虛擬標記i的位置P5(x,y)=像素平面座標中的相機中心位置|M 431 |=P4xP3y+P3xP1y+P1xP4y-(P1xP3y+P3xP4y+P1yP4x) |M 421 |=P4xP2y+P2xP1y+P1xP4y-(P1xP2y+P2xP4y+P1yP4x) |M 432 |=P4xP3y+P3xP2y+P2xP4y-(P2xP3y+P3xP4y+P2yP4x) |M 521 |=P5xP2y+P2xP1y+P1xP5y-(P1xP2y+P2xP5y+P1yP5x) |M 531 |=P5xP3y+P3xP1y+P1xP5y-(P1xP3y+P3xP5y+P1yP5x) |M 532 |=P5xP3y+P3xP2y+P2xP5y-(P2xP3y+P3xP5y+P2yP5x) Pi(x, y) = position of virtual marker i P5 (x, y) = camera center position in pixel plane coordinates | M 431 | = P4xP3y + P3xP1y + P1xP4y - (P1xP3y + P3xP4y + P1yP4x) | M 421 | P4xP2y+P2xP1y+P1xP4y-(P1xP2y+P2xP4y+P1yP4x) |M 432 |=P4xP3y+P3xP2y+P2xP4y-(P2xP3y+P3xP4y+P2yP4x) |M 521 |=P5xP2y+P2xP1y+P1xP5y-(P1xP2y+P2xP5y+P1yP5x) |M 531 |=P5xP3y+P3xP1y+P1xP5y-(P1xP3y+P3xP5y+P1yP5x) |M 532 |=P5xP3y+P3xP2y+P2xP5y-(P2xP3y+P3xP5y+P2yP5x)
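Steps 1 and 2 can be sketched in a few lines. The following is an illustrative Python sketch, not part of the patent: `det3` implements the determinant expansions above, and `cross_ratios` evaluates Equations 3 and 4 for five points given as (x, y) tuples.

```python
def det3(a, b, c):
    # |Mijk|: determinant of the matrix whose columns are (x, y, 1) for
    # points a, b and c, expanded along the bottom row of ones.
    return a[0] * (b[1] - c[1]) - b[0] * (a[1] - c[1]) + c[0] * (a[1] - b[1])

def cross_ratios(pts):
    # pts: {1..4: virtual-marker positions, 5: camera center}, all (x, y).
    m = lambda i, j, k: det3(pts[i], pts[j], pts[k])
    cr1 = (m(4, 3, 1) * m(5, 2, 1)) / (m(4, 2, 1) * m(5, 3, 1))  # Equation 3
    cr2 = (m(4, 2, 1) * m(5, 3, 2)) / (m(4, 3, 2) * m(5, 2, 1))  # Equation 4
    return cr1, cr2
```

Because each point index appears the same number of times in the numerator and the denominator of each ratio, CR1 and CR2 are unchanged by any projective transformation of the five points, which is the invariance exploited in FIGS. 8A and 8B.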
Step 3: Estimate the cursor position from the cross ratios obtained in Step 2. For example, the cursor position Cur(x5, y5) in television-screen coordinates is computed as:
y5 = (A·F − C·D) / (B·D − A·E)
x5 = −(E·y5 + F) / D
The scalar values A, B, C, D, E and F are:
A = CR1·|M_TV421|·(Y3 − Y1) − |M_TV431|·(Y2 − Y1)
B = CR1·|M_TV421|·(X1 − X3) − |M_TV431|·(X1 − X2)
C = CR1·|M_TV421|·(X3·Y1 − X1·Y3) − |M_TV431|·(X2·Y1 − X1·Y2)
D = CR2·|M_TV432|·(Y2 − Y1) − |M_TV421|·(Y3 − Y2)
E = CR2·|M_TV432|·(X1 − X2) − |M_TV421|·(X2 − X3)
F = CR2·|M_TV432|·(X2·Y1 − X1·Y2) − |M_TV421|·(X3·Y2 − X2·Y3)
In the equations above, CR1 and CR2 are obtained from Step 2. The virtual-marker positions Vi = (Xi, Yi) are preset values in television coordinates, and the determinants |M_TVijk| are computed from those positions Vi = (Xi, Yi).
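Step 3 amounts to solving the two linear equations A·x5 + B·y5 + C = 0 and D·x5 + E·y5 + F = 0. A round-trip check makes this concrete (an illustrative Python sketch; marker layout and names are not from the patent): computing CR1 and CR2 in the television plane with a known cursor position and then solving should recover that position, just as it recovers the cursor from cross ratios measured in the pixel plane.

```python
def det3(a, b, c):
    # Determinant of the matrix with columns (x, y, 1) for points a, b, c.
    return a[0] * (b[1] - c[1]) - b[0] * (a[1] - c[1]) + c[0] * (a[1] - b[1])

def solve_cursor(cr1, cr2, v):
    # v: {1..4: (X, Y)} virtual-marker positions in TV-screen coordinates.
    (x1, y1), (x2, y2), (x3, y3) = v[1], v[2], v[3]
    m421 = det3(v[4], v[2], v[1])
    m431 = det3(v[4], v[3], v[1])
    m432 = det3(v[4], v[3], v[2])
    a = cr1 * m421 * (y3 - y1) - m431 * (y2 - y1)
    b = cr1 * m421 * (x1 - x3) - m431 * (x1 - x2)
    c = cr1 * m421 * (x3 * y1 - x1 * y3) - m431 * (x2 * y1 - x1 * y2)
    d = cr2 * m432 * (y2 - y1) - m421 * (y3 - y2)
    e = cr2 * m432 * (x1 - x2) - m421 * (x2 - x3)
    f = cr2 * m432 * (x2 * y1 - x1 * y2) - m421 * (x3 * y2 - x2 * y3)
    y5 = (a * f - c * d) / (b * d - a * e)  # Step 3 formulas
    x5 = -(e * y5 + f) / d
    return x5, y5
```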
Step 4: When the three steps above are complete, return to Step 1 and repeat.
Avoiding computational overflow in cross-ratio positioning
A difficulty of applying cross ratios in projective geometry is that the cross ratio becomes zero or infinite when three of the points are collinear. FIG. 10 is a schematic diagram of the computational overflow produced by three collinear points. As shown in FIG. 10, the four corners 1002 of the television frame 1000 are the preset points, serving as the virtual markers, used to compute the cross ratios. Whenever the cursor 712 lies on a line connecting any two of the four corners 1002, a three-point collinear condition occurs.
FIGS. 11A and 11B are another schematic illustration of a three-point collinear condition formed by a different combination of virtual markers 1100. As shown in FIG. 11A, the television-screen coordinates are divided into quadrants. As shown in FIG. 11B, the pixel coordinates and the camera center 1110 correspond to FIG. 11A.
To avoid collinear conditions, the cursor-navigation module identifies the quadrant in which the cursor is located. FIGS. 12A to 12D show suitable positions for the virtual markers 1100, so that the cross ratios can be computed reliably according to the current cursor position in television coordinates. The four preset points used to compute the cross ratios are chosen as follows.
As shown in FIGS. 12A to 12D, a first virtual marker 1100 is placed at the corner of the quadrant diagonally opposite the quadrant containing the cursor. A second virtual marker 1100 lies on the same horizontal edge as the first, at a distance of half the horizontal edge from it. A third virtual marker 1100 lies on the same vertical edge as the first, at a distance of one quarter of the vertical edge from it. A fourth virtual marker 1100 lies on the opposite vertical edge, also at a height of one quarter of the vertical edge.
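The placement rule above can be sketched as follows. This is an illustrative Python sketch with assumed conventions, not the patent's implementation: television coordinates are taken with the origin at the top-left corner, x to the right and y downward, and quadrants are numbered 1 = top-right, 2 = top-left, 3 = bottom-left, 4 = bottom-right (the patent figures may use a different numbering).

```python
def virtual_markers(quadrant, w, h):
    # First marker: corner of the quadrant diagonally opposite the
    # cursor's quadrant (diagonal pairs: 1 <-> 3, 2 <-> 4).
    corner = {1: (0.0, h), 2: (w, h), 3: (w, 0.0), 4: (0.0, 0.0)}[quadrant]
    cx, cy = corner
    sx = 1.0 if cx == 0.0 else -1.0  # inward direction along the horizontal edge
    sy = 1.0 if cy == 0.0 else -1.0  # inward direction along the vertical edge
    v1 = (cx, cy)                           # diagonal-quadrant corner
    v2 = (cx + sx * w / 2.0, cy)            # half an edge along the horizontal edge
    v3 = (cx, cy + sy * h / 4.0)            # quarter edge along the vertical edge
    v4 = (cx + sx * w, cy + sy * h / 4.0)   # opposite vertical edge, same height
    return [v1, v2, v3, v4]
```

By construction, no three of these four markers are collinear, so the overflow condition of FIG. 10 is avoided for the marker triples themselves.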
In other words, if the cursor is in quadrant I and moving toward quadrant I or quadrant II of the television screen, no collinear condition arises with the virtual markers 1100 of FIG. 12A or FIG. 12B. Similarly, if the cursor is in quadrant II and moving toward quadrant I or quadrant II, no collinear condition arises with the virtual markers 1100 of FIG. 12B. If the cursor is in quadrant III and moving toward quadrant III or quadrant IV, no collinear condition arises with the virtual markers 1100 of FIG. 12C. Likewise, if the cursor is in quadrant IV and moving toward quadrant III or quadrant IV, no collinear condition arises with the virtual markers 1100 of FIG. 12D.
FIGS. 13A to 13D are schematic diagrams of the initial and subsequent navigation needed to compute the cross ratios successfully. As shown in FIG. 13A, on the first cursor navigation the image sensor of the remote controller identifies the captured television frame 1302, which contains the camera center 1304. The remote controller transmits a quadrant-identification message to the television support program. As shown in FIG. 13B, the television support program receives the message but does not yet update the position of the cursor 1312 on the television screen 1310.
As shown in FIG. 13C, once the quadrant and the four preset virtual markers 1306 appear in pixel coordinates, the remote controller transmits a data set to the television remote-control support module. The data set includes the current quadrant code, the positions of the four corners, and the pixel coordinates of the five points used to compute the cross ratios. The television remote-control support module then computes the cross ratios in the pixel plane on behalf of the remote controller and, according to the quadrant code transmitted by the remote controller, determines the corresponding four preset virtual markers in television-screen coordinates.
The virtual television frame concept
The purpose of cross-ratio positioning is to map the camera center position in pixel coordinates to the cursor in real television-screen coordinates. The method above, however, does not apply when the remote controller is far from the television frame, because at long range the captured image of the television frame is too small for the four preset points to support the cross-ratio computation.
The concept of a virtual television frame overcomes this long-range limitation. FIG. 14 is a schematic diagram of the operation of the virtual television frame. The main idea is to define a virtual television frame 1302 that is large enough to preserve the accuracy of the cross-ratio computation and to support cursor navigation. Accordingly, FIG. 14 shows a virtual television frame 1302 that is an extension of the television frame and larger than the actual television frame 110. The cursor-navigation module computes the cross ratios with respect to the virtual television frame 1302 and maps the camera center into virtual-television-frame coordinates. The size of the virtual television frame 1302 can be set by the user, and any size larger than the actual television frame 110 is acceptable.
FIG. 15 is a schematic diagram of a captured image of the actual television frame 110 and a user-defined virtual television frame 1302. As shown in FIG. 15, in pixel coordinates the length and width of the virtual television frame 1302 are about three times those of the actual television frame 110.
FIG. 16A is a schematic diagram of the cross-ratio computation from a captured image using the virtual television frame. In FIG. 16A, the camera captures an image of the physical television frame 110 in the pixel plane 1600, and the centroid position of the physical television frame 110 is computed. The virtual television frame 1302 is then drawn to surround the centroid of the physical television frame 110, as shown in FIG. 16A. Next, the cursor-navigation module identifies the quadrant of the virtual television frame 1302 in which the camera center is located. For the positions of the four preset points 1004 (virtual markers) within the virtual television frame 1302, refer to FIGS. 12A to 12D.
The process of mapping the camera center 1602 into virtual-television coordinates is shown in FIGS. 16B to 16D. FIG. 16B depicts the cross-ratio computation in the virtual-television coordinates 1302. FIG. 16C depicts the cursor 1610 and the mapping of the camera center into the virtual-television coordinates. FIG. 16D depicts the cursor 1610 and the mapping of the virtual cursor onto the real television coordinates and the real cursor 1622. Once the intermediate mapping above yields the virtual cursor Vcur(Vx, Vy), the virtual cursor position is mapped to the real cursor position Cur(x, y) on the real television 110, as shown in FIG. 16D and computed by Equations 5 and 6 below.
x = Vx / width-extension ratio
y = Vy / height-extension ratio    (Equations 5 and 6)
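Equations 5 and 6 in code form, as a minimal sketch assuming the virtual frame shares the real frame's origin and differs only by the two extension ratios (the function name is illustrative):

```python
def virtual_to_real(vx, vy, width_ext, height_ext):
    # Eqs. 5 and 6: divide the virtual-cursor coordinates by the
    # width- and height-extension ratios of the virtual frame.
    return vx / width_ext, vy / height_ext
```

For the three-times-larger virtual frame of FIG. 15, a virtual cursor at (1500, 900) maps to (500, 300) on the real screen.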
Estimating the television-frame size through a predictive look-up database
In some embodiments, as shown in FIG. 4E, a look-up database assists the remote controller in identifying the television frame within an image. The look-up table contains captured television-frame sizes, where each frame size (width and height in pixel coordinates) is an actual measurement taken with the 2D/3D image sensor at a different 3D position and orientation.
When the 3D camera obtains the distance (that is, the depth) between the remote controller and the television frame during cursor navigation, the image sensor also captures an image. From this image, the remote controller transmits the pixel coordinates of the four corner positions of the captured image to the television remote-control support module.
In some embodiments, the cursor-navigation module in the television remote-control support module searches the look-up table and derives a reference size of the television frame from the distance to the television. As shown in FIG. 17, the look-up table maps different distances to different table entries.
In some embodiments, however, because various horizontal or vertical angles form between the remote controller and the television screen 110, the television-frame image captured by the remote controller's image sensor is not a true rectangle. Referring to FIGS. 18 and 19, FIG. 18 is a schematic diagram of the remote controller at various horizontal angles to the television screen 110, and FIG. 19 is a schematic diagram of the remote controller at various vertical angles to the television screen 110. FIGS. 18 and 19 show the various captured images 1804a-e and 1900a-e taken by the remote controllers 100a-j. Notably, the television length 1802 in FIG. 18 is invariant, and the television width 1902 in FIG. 19 is invariant.
Using the size measured when both the horizontal and vertical angles are zero, the cursor-navigation module compares the side lengths of the images in FIGS. 20A to 20D and, from the comparison, infers the angle and orientation of the remote controller. The cursor-navigation module then searches for the closest reference size, and this reference size is transmitted to the remote controller to assist it in identifying the television frame in the image, as shown in FIG. 4E.
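The nearest-entry search can be sketched in one function. This is an illustrative Python sketch; the table contents below are invented placeholders, whereas real entries would come from the measurements described above:

```python
def reference_size(table, distance_mm):
    # table: {distance_mm: (width_px, height_px)}, measured at zero
    # horizontal and vertical angle. Return the entry whose measured
    # distance is closest to the queried distance.
    nearest = min(table, key=lambda d: abs(d - distance_mm))
    return table[nearest]
```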
In some cases, however, the reference size may be inaccurate if the position and orientation of the remote controller change significantly. If the remote controller cannot segment the television-frame image, it transmits a request to the television remote-control support module asking the television to redisplay an image for capture, such as the aforementioned blue screen, so that the reference size can be corrected.
Handling the Z angle
A change in the Z angle results from rotating the image sensor about the axis perpendicular to the television screen. Z-angle changes can be used to execute television commands or television volume control. In general, Z-angle changes are not applied to cursor navigation, because they are unnecessary for changing the cursor position. Therefore, if the Z-angle change exceeds a threshold, for example 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60 or even more than 60 degrees, the cursor-navigation module changes the cursor icon to a user-defined non-cursor icon.
If the Z-angle change is below the threshold, however, the cursor-navigation computation takes the centroid of the captured television-frame image and treats the Z angle of the virtual television frame as zero in the cross-ratio computation. Referring to FIG. 21, the Z angle can be estimated from the tangent between the X axis and the longer side of the captured television-frame image. FIG. 21 is a schematic diagram of a captured image 2100 that includes a camera center position 2104 and a television frame 2102 marked with a Z angle 2106.
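The Z-angle estimate from the longer side of the captured frame can be sketched as follows (an illustrative Python sketch; the endpoint ordering and the 30-degree threshold are assumptions, since the patent lists a range of possible thresholds):

```python
import math

def z_angle_deg(side_start, side_end):
    # Angle between the image X axis and the captured frame's longer
    # side, obtained from the arctangent of the side's slope.
    dx = side_end[0] - side_start[0]
    dy = side_end[1] - side_start[1]
    return math.degrees(math.atan2(dy, dx))

def z_angle_below_threshold(z_change_deg, threshold_deg=30.0):
    # Below the threshold, the Z angle is treated as zero for the
    # cross-ratio computation; above it, the cursor icon is swapped
    # for a user-defined non-cursor icon.
    return abs(z_change_deg) < threshold_deg
```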
Generating cursor commands with simultaneous coarse and fine modes
Referring to FIG. 22, FIG. 22 depicts simultaneous coarse-fine input generation on the X-Y plane. As described above, the absolute cursor position in television-screen coordinates is obtained through the cross ratios that projectively map the pixel plane onto the television-frame plane. A user-defined local mapping region 2200 surrounds the current cursor position 2202 and provides fine control through the touchpad 104. The cursor can be moved continuously by means of the local mapping region 2200, and as the cursor 2202 moves, the local mapping region follows the cursor's trajectory.
The user moves the remote controller 100 to navigate the cursor 2202, with the local mapping region 2200 providing coarse control of the cursor pointing. The user then touches or drags a finger on the touchpad 106 to control the cursor 2202 precisely. During fine control, the user can move the cursor 2202 to any position within the local mapping region 2200. FIG. 23A shows the remote controller 100 performing coarse cursor control, and FIG. 23B shows fine cursor control continuing via the touchpad 106.
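One way to sketch the coarse-fine combination (an illustrative Python sketch; the dimensions and names are assumptions): the image-sensor mapping supplies the absolute, coarse cursor position, and a touchpad contact is rescaled into the local mapping region centred on that position.

```python
def fine_cursor(coarse, pad, pad_size, region_size):
    # coarse: absolute cursor position from the image-sensor mapping.
    # pad: finger position on the touchpad; pad_size: touchpad extent.
    # region_size: extent of the local mapping region around the cursor.
    cx, cy = coarse
    dx = (pad[0] / pad_size[0] - 0.5) * region_size[0]
    dy = (pad[1] / pad_size[1] - 0.5) * region_size[1]
    return cx + dx, cy + dy
```

A touch at the touchpad's centre leaves the cursor at the coarse position; a touch at a touchpad edge moves it to the corresponding edge of the local mapping region.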
This concept also applies to other coarse input-control devices, such as inertial sensors in the remote controller. Combined with the touchpad, a gyroscope or an accelerometer is likewise suitable for coarse control of cursor pointing, while the touchpad 106 provides fine control of cursor pointing. A small trackball, a small joystick, or another device with a similar function can replace the touchpad 106 to achieve the same simultaneous coarse-fine control method.
Returning to FIGS. 24A to 24C, the size of the local mapping region 2200 can depend on the distance between the television 110 and the remote controller 100: the greater the distance, the lower the required precision of cursor control.
FIGS. 25A to 25D show three user-selectable cursor-control modes, which provide (i) relative cursor pointing using only the touchpad, (ii) simultaneous coarse and fine cursor control, and (iii) coarse control using only the image-sensor data of the remote controller.
FIGS. 26A to 26D show how to avoid unsmooth cursor navigation when switching between coarse and fine control. As shown in FIG. 26A, a drag has just ended and the cursor 2202 lies in the upper-left corner of the local mapping region 2200. When the user then navigates the cursor by moving the remote controller 100, the cursor 2202 stays at the same position, as shown in FIGS. 26B and 26C. Once the movement of the remote controller 100 stops or falls below a movement threshold, the user can drag a finger on the touchpad again to initialize fine cursor control.
Cursor navigation at very short distances between the television and the remote controller
If the television and the remote controller are only a very short distance apart, the image sensor may be unable to capture the complete television frame and therefore cannot detect the four corners of the television-screen frame. In this case, the firmware of the remote controller automatically switches cursor navigation from the image sensor to the touchpad and transmits a "too close to the television" signal to the television remote-control support module of the host. The host's television program then displays a notice to inform the user of the distance problem and of the availability of touchpad cursor navigation in this situation.
3D input mode
FIGS. 27A to 27D show two additional degrees of freedom (DOF) of input commands generated by the remote controller 100. Referring to FIGS. 27A and 27B, the remote controller 100 produces a Z-axis translation command; referring to FIGS. 27C and 27D, the remote controller 100 produces a Z-axis rotation command. The television remote-control support module can use the change in size of the captured television-frame objects 2710a-2710b to generate zoom-in/zoom-out commands. The computation of the Z-angle change, described above, serves user-specific commands such as volume control.
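The zoom command derived from the change in captured-frame size can be sketched as follows (an illustrative Python sketch; the 10% dead band and the direction of the zoom mapping are assumptions, not stated in the patent):

```python
def zoom_command(prev_area, curr_area, band=0.10):
    # Compare the captured television-frame object's area between frames.
    if curr_area > prev_area * (1.0 + band):
        return "zoom_in"    # frame grew: remote moved toward the screen
    if curr_area < prev_area * (1.0 - band):
        return "zoom_out"   # frame shrank: remote moved away
    return None             # within the dead band: no command
```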
When a 3D graphics application is used, the user moves a 3D cursor in the 3D environment to select 3D objects. For example, as shown in FIG. 28A, the user moves the remote controller 100 forward and backward to generate a coarse control command, and/or moves the remote controller 100 left and right to generate an X-translation command for the 3D cursor. Finger-touch signals are likewise used for fine control of the 3D cursor, with the user-defined local mapping region 2200 lying in the X-Z plane.
As shown in FIG. 28B, the user can also move the remote controller 100 to generate a Y-translation or Z-translation command for the 3D cursor as a coarse control command. Again, finger-touch signals provide fine control, with the user-defined local mapping region 2200 lying in the Y-Z plane.
Gesture input mode in free space
In some embodiments, while touching the touchpad 106, the user generates 2D or 3D gesture input by moving the remote controller 100 through free space. As described above, the button shown in FIG. 1 is provided on the body of the remote controller to switch among three control modes: a 2D cursor mode, a gesture mode based on moving the remote-controller body, and a multi-finger gesture mode.
FIGS. 29A and 29B are schematic diagrams of 2D gesture input in free space. In gesture mode, when the user touches the surface of the touchpad 106, a bubble icon 2900 is displayed. As the user moves the remote controller through free space, a bubble trail is displayed on the television display. A gesture-recognition application in the television recognizes single-stroke characters 2902, such as individual Arabic numerals. If the gesture-recognition system successfully recognizes the character, the program displays a solid-color trail 2904 to inform the user of the successful result. The gesture-input feature can also be extended to 3D gestures: FIGS. 30A and 30B show how moving the remote controller 100 in the X-Z plane of 3D space produces the 3D bubble trails 3000 and 3002.
Multi-finger input mode
In some embodiments of the multi-finger input mode, the touchpad data are treated as the only input commands. To avoid unintended and/or unwanted input caused by shaking of the remote-controller body, the image-sensor data are ignored.
FIGS. 31A to 31C depict a number of two-finger touches formed by fingers 3100 on the surface of the touchpad 106, thereby producing four additional DOF of input commands. Some users, however, may wish to use multi-finger touch gestures on the touchpad while simultaneously moving the remote-controller body, so the firmware and the host's television program allow the user to choose between the two input data streams. FIG. 32 shows the image sensor and the touchpad combined to produce a spiral input command 3200.
As shown in FIG. 32, the user moves the remote controller forward and backward to generate a coarse Z-translation command. At the same time, a finger gesture on the touchpad 106 generates a Z-axis rotation command. The firmware in the remote controller 100 and the television interface software in the host combine the two gestures, convert the circular trajectory into a 3D rotation command in the 3D graphics environment, and thereby produce the Z-axis rotation command.
The above description presents only preferred embodiments of the present invention and is not intended to limit the scope of the claims of the present invention; all equivalent changes or modifications completed without departing from the spirit disclosed by the present invention shall fall within the scope of the present patent application.
100‧‧‧remote controller
1000‧‧‧television frame
102‧‧‧image sensor
1002‧‧‧corner
104‧‧‧touchpad
1004‧‧‧virtual marker
106‧‧‧button
1100‧‧‧virtual marker
110‧‧‧network television
1110‧‧‧virtual marker
112‧‧‧wireless communication device
1302‧‧‧television frame
114‧‧‧wireless communication device
1304‧‧‧camera center
200‧‧‧microprocessor
1306‧‧‧virtual marker
300‧‧‧firmware
1310‧‧‧television screen
302‧‧‧remote-controller support module
1312‧‧‧cursor
304‧‧‧television operating system
1600‧‧‧pixel plane
306‧‧‧wireless communication driver
1602‧‧‧camera center
308‧‧‧USB tablet driver
1610‧‧‧cursor
310‧‧‧cursor pointing module
1802‧‧‧television length
312‧‧‧cross-ratio module
1902‧‧‧television width
314‧‧‧camera-center-position mapping module
2100‧‧‧captured image
316‧‧‧virtual-cursor mapping module
2102‧‧‧television frame
320‧‧‧cursor navigation module
2104‧‧‧camera center position
400‧‧‧television
2106‧‧‧Z angle
402‧‧‧wall
2200‧‧‧local mapping region
404‧‧‧wall recess
2202‧‧‧cursor
406‧‧‧person
2900‧‧‧bubble icon
408‧‧‧podium
2902‧‧‧single-stroke character
410‧‧‧solid-color screen
2904‧‧‧solid-color trail
420‧‧‧television screen
3000‧‧‧bubble trail
422‧‧‧object
3002‧‧‧bubble trail
430‧‧‧object
3100‧‧‧finger
432‧‧‧object
3200‧‧‧spiral input command
440‧‧‧image
100a-100j‧‧‧remote controllers
450‧‧‧corner
1804a-1804e‧‧‧captured images
700‧‧‧projection center
1900a-1900e‧‧‧captured images
704‧‧‧mirror-image plane
2710a-2710d‧‧‧television frame objects
710‧‧‧reference plane
C‧‧‧camera center
712‧‧‧cursor
m1-m5‧‧‧markers
900‧‧‧television screen
P1-P5‧‧‧television-screen coordinates
902‧‧‧Y axis
p1-p5‧‧‧camera pixel coordinates
904‧‧‧X axis
x1-x4‧‧‧points on four lines
906‧‧‧fifth point
X1-X4‧‧‧points on four lines
908‧‧‧virtual marker
o‧‧‧a point
714‧‧‧television-screen coordinates
1622‧‧‧real cursor
600‧‧‧optical center
604‧‧‧television screen
602‧‧‧camera lens
800‧‧‧television image
圖1:係為遙控器與網路電視之立體圖。 Figure 1: A perspective view of a remote control and a network TV.
圖2:係為遙控器之韌體之方塊圖。 Figure 2: Block diagram of the firmware of the remote control.
圖3:係為遙控器韌體與網路電視之遙控器支持模組方塊圖。 Figure 3: Block diagram of the remote control support module for the remote control firmware and network TV.
圖4A至4F:係為偵測邊緣、分段與辨識角落過程之示意圖。 4A to 4F are schematic diagrams showing the process of detecting edges, segmenting and recognizing corners.
圖5:係為四線交叉於二平面之二維示意圖。 Figure 5: A two-dimensional diagram of a four-line intersection with two planes.
圖6:係為一線穿越二平面之三維示意圖。 Figure 6: is a three-dimensional diagram of a line crossing the second plane.
圖7:係為相機中心位置投影轉換與映對至游標之示意圖。 Figure 7: Schematic diagram of projection conversion and mapping to the cursor at the center of the camera.
圖8A與8B:係為電視螢幕座標與相機像素座標之交互比例的不變量之示意圖。 8A and 8B are schematic diagrams showing the invariants of the ratio of the interaction between the television screen coordinates and the camera pixel coordinates.
Figure 9: A schematic diagram of camera pixel coordinates with virtual markers mapped to TV coordinates.
Figure 10: A schematic diagram of a calculation overflow caused by three collinear virtual marker points.
Figures 11A and 11B: Schematic diagrams of virtual markers on the display screen and a captured graphical image with three collinear points.
Figures 12A to 12D: Schematic diagrams of virtual marker positions based on the quadrant in which the cursor is located.
Figures 13A to 13D: Schematic diagrams of the cursor generation process, including successful execution of the cross-ratio calculation.
Figure 14: A perspective view of the remote controller, the network TV, and the virtual TV frame.
Figure 15: A schematic diagram of the captured image of the TV frame and the virtual TV frame.
Figure 16A: A representative diagram of a captured image and alignment with the TV through the virtual TV frame.
Figures 16B to 16D: Schematic diagrams of the steps for calculating the cursor position.
Figure 17: A schematic diagram of a lookup table.
Figure 18: A front view of graphical images of the TV display captured by the remote controller at different horizontal angles.
Figure 19: A side view of graphical images of the TV display captured by the remote controller at different vertical angles.
Figures 20A to 20D: TV display images captured at different angles.
Figure 21: A schematic diagram of a TV display including a Z angle.
Figure 22: A perspective view of the remote controller and a TV display to which the remote controller's coarse-fine control feature is applied.
Figure 23A: A schematic diagram of the transformation of a partial mapping area on the display screen.
Figure 23B: A schematic diagram of cursor movement within a partial mapping area on the display screen.
Figures 24A to 24C: Perspective views of the remote controller at different distances from the TV and the corresponding sizes of its partial mapping area.
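The partial mapping area of Figures 23A through 24C trades pointing range for precision: the controller's full pointing range is mapped onto a smaller region of the screen centered on the cursor. A minimal sketch of such a mapping, under assumed names, sizes, and clamping behavior (none taken from the patent):

```python
def map_to_partial_region(norm_pt, center, region_w, region_h,
                          screen_w=1920, screen_h=1080):
    """Map a normalized pointing coordinate (0..1 in each axis) into a
    partial mapping region of size region_w x region_h centered on
    `center`, clamped to the screen bounds.  All names and sizes here
    are illustrative assumptions, not taken from the patent."""
    x = center[0] + (norm_pt[0] - 0.5) * region_w
    y = center[1] + (norm_pt[1] - 0.5) * region_h
    return (min(max(x, 0), screen_w - 1), min(max(y, 0), screen_h - 1))

# Pointing at the middle of the controller's range leaves the cursor at
# the region center; the extremes reach only the edges of the region.
print(map_to_partial_region((0.5, 0.5), (400, 300), 200, 150))
print(map_to_partial_region((1.0, 1.0), (400, 300), 200, 150))
```

Shrinking the region as the controller approaches the TV, as Figures 24A to 24C suggest, would amount to reducing `region_w` and `region_h` with distance so that the same hand motion yields a finer cursor motion.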
Figures 25A to 25C: Schematic diagrams of three independent cursor control modes.
Figures 26A to 26D: Schematic diagrams of a process for avoiding jerky cursor navigation.
Figures 27A and 27B: Schematic diagrams of an input gesture providing an additional angular degree of freedom through Z-axis movement, and its displayed image.
Figures 27C and 27D: Schematic diagrams of an input gesture providing an additional angular degree of freedom through Z-axis rotation, and its displayed image.
Figures 28A and 28B: Schematic diagrams of cursor movement generated in three dimensions.
Figures 29A to 29C: Schematic diagrams of two-dimensional gestures in open space.
Figures 30A and 30B: Schematic diagrams of three-dimensional gestures in open space.
Figures 31A to 31C: Schematic diagrams of multi-touch gestures on a multi-finger touchpad.
Figure 32: A schematic diagram of three-dimensional gestures using the image sensor and the touchpad.
700‧‧‧projection center
704‧‧‧image plane
710‧‧‧reference plane
712‧‧‧cursor
P1-P5‧‧‧TV screen coordinates
p1-p5‧‧‧camera pixel coordinates
714‧‧‧TV screen coordinates
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW101121525A TW201301877A (en) | 2011-06-17 | 2012-06-15 | Imaging sensor based multi-dimensional remote controller with multiple input modes |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201161498030P | 2011-06-17 | 2011-06-17 | |
| TW100125120 | 2011-07-15 | ||
| TW101121525A TW201301877A (en) | 2011-06-17 | 2012-06-15 | Imaging sensor based multi-dimensional remote controller with multiple input modes |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TW201301877A true TW201301877A (en) | 2013-01-01 |
Family
ID=48137698
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW101121525A TW201301877A (en) | 2011-06-17 | 2012-06-15 | Imaging sensor based multi-dimensional remote controller with multiple input modes |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TW201301877A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI505135B (en) * | 2013-08-20 | 2015-10-21 | Utechzone Co Ltd | Control system for display screen, control apparatus and control method |
| TWI563818B (en) * | 2013-05-24 | 2016-12-21 | Univ Central Taiwan Sci & Tech | Three dimension contactless controllable glasses-like cell phone |
Legal events: 2012-06-15 — TW application TW101121525A filed, published as TW201301877A (status unknown).
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9001208B2 (en) | Imaging sensor based multi-dimensional remote controller with multiple input mode | |
| CN102541365B (en) | System and method for generating multi-touch commands | |
| JP5921835B2 (en) | Input device | |
| KR101522991B1 (en) | Operation Input Apparatus, Operation Input Method, and Program | |
| US9575562B2 (en) | User interface systems and methods for managing multiple regions | |
| CN101123445B (en) | Portable terminal and user interface control method | |
| US9936168B2 (en) | System and methods for controlling a surveying device | |
| KR20100041006A (en) | A user interface controlling method using three dimension multi-touch | |
| CN102314301A (en) | Virtual touch sensing system and method | |
| CN106033250A (en) | Object sensing device and method | |
| JP2012027515A (en) | Input method and input device | |
| JP2014029656A (en) | Image processor and image processing method | |
| CN104978018B (en) | Touch system and touch method | |
| JP2022525326A (en) | Methods to assist object control using 2D cameras, systems and non-transient computer-readable recording media | |
| WO2018076720A1 (en) | One-hand operation method and control system | |
| TW201301877A (en) | Imaging sensor based multi-dimensional remote controller with multiple input modes | |
| KR101535738B1 (en) | Smart device with touchless controlling operation function and the control method of using the same | |
| CN103869941B (en) | Electronic device with virtual touch service and virtual touch real-time calibration method | |
| JP6075193B2 (en) | Mobile terminal device | |
| TWI444875B (en) | Multi-touch input apparatus and its interface method using data fusion of a single touch sensor pad and imaging sensor | |
| US10803836B2 (en) | Switch device and switch system and the methods thereof | |
| JP2013109538A (en) | Input method and device | |
| TWI522871B (en) | Processing method of object image for optical touch system | |
| KR101547512B1 (en) | Fine pointing method and system using display pattern | |
| CN102023790B (en) | Method and system for dynamically operating interactive objects |