
TW201516892A - Method, apparatus and system for monitoring - Google Patents


Info

Publication number
TW201516892A
TW201516892A
Authority
TW
Taiwan
Prior art keywords
user
image sequence
module
state
monitoring
Prior art date
Application number
TW102137367A
Other languages
Chinese (zh)
Inventor
Chia-Chun Tsou
Chih-Heng Fang
Po-Tsung Lin
Chia-Wen Kao
Original Assignee
Utechzone Co Ltd
Priority date
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Priority to TW102137367A priority Critical patent/TW201516892A/en
Priority to CN201410054265.XA priority patent/CN104571487B/en
Publication of TW201516892A publication Critical patent/TW201516892A/en

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

A monitoring method, apparatus, and system are provided. A user interface having a plurality of seat blocks corresponding to a plurality of locations is displayed on a display unit. An image sequence is obtained from each image capturing unit disposed at each location in a space. A face recognition algorithm is executed on the image sequence to determine whether the image sequence contains an image of a user. If it does, an eye tracking algorithm is executed on the image sequence, and a corresponding marking action is performed on each seat block according to whether the user gazes in a specified direction.

Description

Monitoring method, device, and system

The present invention relates to a monitoring mechanism, and in particular to a monitoring method, device, and system based on eye tracking.

Current eye tracking technology can be broadly divided into invasive and non-invasive approaches. Invasive eye tracking mainly places a search coil in the eye or uses an electrooculogram. Non-invasive eye tracking can be further distinguished as free-head or head-mounted eye tracking.

With the development of technology, eye tracking has been widely applied in fields such as neuroscience, psychology, industrial engineering, human factors engineering, marketing and advertising, and computer science. For example, US Patent Publication No. US 2010/0092929 proposes a cognitive and language assessment system that uses eye tracking to obtain a patient's gaze position and dwell time, thereby testing language comprehension, working memory, the ability to allocate attention, and the ability to activate semantic associations.

The present invention provides a monitoring method, device, and system that use an eye tracking algorithm to help monitor a user's degree of concentration.

The monitoring method of the present invention is used in a monitoring device. A user interface is displayed on a display unit, where the user interface includes a plurality of seat blocks corresponding to a plurality of locations in the same space as the monitoring device, and each location is provided with an image capturing unit. An image sequence is obtained from the image capturing unit at each location, and a face recognition algorithm is executed on the image sequence to determine whether the image sequence contains an image of a user. If it does, an eye tracking algorithm is executed on the image sequence, and a corresponding marking action is performed on each seat block according to whether the user gazes in a specified direction.
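As a rough illustration only, the claimed per-seat flow (face recognition first, eye tracking only when a face is found, then marking) can be sketched as follows. All function and mark names here are assumptions for illustration, not part of the claims.

```python
def monitor_seat(frames, detect_face, track_gaze):
    """Process one seat's image sequence per the claimed method:
    run face recognition first, and run eye tracking only when a
    user's image is present; return the mark for the seat block.
    detect_face/track_gaze stand in for the respective algorithms."""
    if not detect_face(frames):        # no user at this position
        return "absent"
    gazing = track_gaze(frames)        # True if gaze is in the specified direction
    return "focused" if gazing else "not-focused"
```

A caller would invoke this once per image capturing unit and paint the matching seat block in the user interface with the returned mark.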

The monitoring system of the present invention includes a plurality of image capturing units and a monitoring device disposed in the same space. The image capturing units are respectively disposed at a plurality of locations in the space. The monitoring device includes a display unit, a communication unit, a storage unit, and a processing unit. The display unit displays a user interface that includes a plurality of seat blocks corresponding to the locations. The communication unit obtains an image sequence from each image capturing unit, and the storage unit stores the image sequences. The processing unit is coupled to the display unit and the communication unit and drives a monitoring module. The monitoring module executes a face recognition algorithm on each image sequence to determine whether it contains an image of a user; if it does, the monitoring module executes an eye tracking algorithm on the image sequence and performs a corresponding marking action on each seat block according to whether the user gazes in a specified direction.

The monitoring device of the present invention includes a display unit, a communication unit, a storage unit, and a processing unit. The display unit displays a user interface that includes a plurality of seat blocks, each corresponding to one of a plurality of locations in the same space as the monitoring device, each location being provided with an image capturing unit. The communication unit obtains an image sequence from each image capturing unit, and the storage unit stores the image sequences. The processing unit is coupled to the display unit and the communication unit and drives a monitoring module. The monitoring module executes a face recognition algorithm on each image sequence to determine whether it contains an image of a user; if it does, the monitoring module executes an eye tracking algorithm on the image sequence and performs a corresponding marking action on each seat block according to whether the user gazes in a specified direction.

In an embodiment of the invention, the monitoring module further includes: a face recognition module for executing the face recognition algorithm on the image sequence; an eye tracking module for executing the eye tracking algorithm on the image sequence; and a marking module for performing the corresponding marking action on each seat block.

In an embodiment of the invention, the monitoring module further includes: a state recognition module for determining, based on the result of the eye tracking module, whether the user is currently in a focused, dozing, or unfocused state; and a closed-eye detection module for executing a closed-eye detection algorithm on the image sequence. When the eye tracking module determines that the user gazes in the specified direction, the state recognition module determines that the user is currently in a focused state. When the eye tracking module determines that the user does not gaze in the specified direction, the state recognition module determines that the user is currently in a dozing state if the closed-eye detection module detects that the user's eyes are closed, and in an unfocused state if the eyes are detected not to be closed.
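The three-way state rule of this embodiment condenses into a small decision function. This is only a sketch; the state names are assumptions.

```python
def classify_state(gazing_at_target, eyes_closed):
    """State recognition rule from the embodiment: gazing in the
    specified direction means focused; otherwise closed eyes mean
    dozing and open eyes mean unfocused."""
    if gazing_at_target:
        return "focused"
    return "dozing" if eyes_closed else "unfocused"
```

Note that the gaze check dominates: closed-eye detection is consulted only when the user is not gazing at the target, mirroring the described flow.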

In an embodiment of the invention, the marking module displays a fourth mark in the corresponding seat block when the face recognition module determines that the image sequence contains no user image.

In an embodiment of the invention, the monitoring module further includes a head-turn detection module, which determines whether the user has turned the head based on nostril position information, thereby obtaining facial swing information. When the face recognition module obtains a face region, it detects the nostril area within the face region to obtain the nostril position information. When the head-turn detection module determines, based on the facial swing information, that the user faces the specified direction, the eye tracking module executes the eye tracking algorithm on the image sequence.
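One plausible way to turn nostril positions into facial swing information, offered purely as an assumed heuristic (the patent does not specify the computation): when the head turns, the nostril pair shifts laterally within the detected face region, so the nostril midpoint drifts away from the horizontal center of the face.

```python
def head_facing_forward(face_box, nostrils, tol=0.15):
    """face_box: (x, y, w, h) of the detected face region;
    nostrils: ((x1, y1), (x2, y2)) positions of the two nostrils.
    Treat the head as facing forward when the nostril midpoint lies
    near the horizontal center of the face region. tol is an assumed
    tolerance ratio, not a value from the patent."""
    x, _, w, _ = face_box
    mid_x = (nostrils[0][0] + nostrils[1][0]) / 2
    return abs(mid_x - (x + w / 2)) <= tol * w
```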

Based on the above, an eye tracking algorithm is used to determine the user's degree of concentration, and a corresponding mark is presented in the corresponding seat block of the user interface. Through the monitoring device (for example, a teacher-side device), the attendance and concentration of users (for example, audience members) can thus be seen more intuitively.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100‧‧‧Monitoring system

110‧‧‧Monitoring device

120‧‧‧Display unit

130‧‧‧Communication unit

140‧‧‧Storage unit

150‧‧‧Processing unit

160‧‧‧Monitoring module

170‧‧‧Image capturing unit

201‧‧‧Face recognition module

203‧‧‧Eye tracking module

205‧‧‧Marking module

207‧‧‧State recognition module

209‧‧‧Closed-eye detection module

211‧‧‧Head-turn detection module

400‧‧‧User interface

401‧‧‧First mark

402‧‧‧Second mark

403‧‧‧Third mark

404‧‧‧Fourth mark

S‧‧‧Space

S305~S325‧‧‧Steps of the monitoring method

S505~S550‧‧‧Steps of another monitoring method

FIG. 1 is a block diagram of a monitoring system in accordance with an embodiment of the invention.

FIG. 2 is a block diagram of a monitoring module in accordance with an embodiment of the invention.

FIG. 3 is a flowchart of a monitoring method in accordance with an embodiment of the invention.

FIG. 4 is a schematic diagram of a user interface in accordance with an embodiment of the invention.

FIG. 5 is a flowchart of a monitoring method in accordance with another embodiment of the invention.

FIG. 1 is a block diagram of a monitoring system in accordance with an embodiment of the invention. Referring to FIG. 1, the monitoring system 100 includes a monitoring device 110 and a plurality of image capturing units 170. The monitoring system 100 is installed, for example, in a space S such as a classroom, lecture hall, or auditorium; that is, the monitoring device 110 and the image capturing units 170 are disposed in the same space S.

The image capturing unit 170 is, for example, a video camera or still camera having a charge coupled device (CCD) lens, a complementary metal oxide semiconductor (CMOS) lens, or an infrared lens. An image capturing unit 170 is disposed at each of a plurality of positions in the space S to capture an image sequence of the user at that position. Taking a classroom as the space S, the classroom includes a plurality of seats; one image capturing unit 170 is provided at each seat, with its lens oriented so that it can capture the user sitting in the seat.

The monitoring device 110 includes a display unit 120, a communication unit 130, a storage unit 140, a processing unit 150, and a monitoring module 160. The processing unit 150 is coupled to the display unit 120, the communication unit 130, the storage unit 140, and the monitoring module 160.

The display unit 120 is, for example, a liquid crystal display (LCD), a plasma display, a vacuum fluorescent display, a light-emitting diode (LED) display, a field emission display (FED), or another suitable type of display; the type is not limited here.

The communication unit 130 receives the respective image sequences from the image capturing units 170. For example, the communication unit 130 is a wired Ethernet card or wireless network card, or a third-generation (3G) mobile communication module, a General Packet Radio Service (GPRS) module, or a Wi-Fi module.

The storage unit 140 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, another similar device, or a combination of these devices. The storage unit 140 holds a plurality of electronic files and temporarily stores the image sequences captured by the image capturing units 170.

The processing unit 150 is, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), another similar device, or a combination of these devices.

The processing unit 150 is coupled to the display unit 120, the communication unit 130, and the storage unit 140, and drives the monitoring module 160. The monitoring module 160 is, for example, a driver, firmware, or software written in a computer programming language and stored in the storage unit 140. The monitoring module 160 is essentially composed of a plurality of code fragments (for example, code fragments for creating an organization chart, for sign-off forms, for configuration, and for deployment), which, after being loaded into the monitoring device 110 and executed, implement the monitoring function. Alternatively, the monitoring module 160 may be a chipset formed of various logic gates. The processing unit 150 drives the monitoring module 160 to execute the monitoring method.

For example, the monitoring module 160 executes a face recognition algorithm on an image sequence to determine whether the image sequence contains an image of a user, and executes an eye tracking algorithm on the image sequence to obtain the direction in which the user is gazing. In addition, the monitoring module 160 performs a corresponding marking action on each seat block according to whether the user gazes in the specified direction (for example, gazing forward), such as marking each seat block with a corresponding color or displaying a corresponding mark (for example, an icon) on it.

FIG. 2 is a block diagram of a monitoring module in accordance with an embodiment of the invention. Referring to FIG. 2, the monitoring module 160 mainly includes a face recognition module 201, an eye tracking module 203, and a marking module 205. The face recognition module 201 executes a face recognition algorithm on the image sequence, for example an AdaBoost algorithm based on Haar-like features, to detect whether a face is present in the image sequence. Whether a user is present at the corresponding position is determined by detecting the presence or absence of a face in the image sequence. The eye tracking module 203 executes an eye tracking algorithm on the image sequence to track the movement trajectory of the user's eyeballs. The marking module 205 performs the corresponding marking action on the seat blocks.
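To make the named technique concrete: Haar-like features are rectangle-sum differences that can be evaluated in constant time over an integral image, and AdaBoost combines many such weak features into a face/non-face classifier. The sketch below shows only the feature side, as a minimal self-contained illustration, not the patent's implementation.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in rectangle [x, x+w) x [y, y+h) via four lookups."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y else 0
    c = ii[y + h - 1][x - 1] if x else 0
    d = ii[y - 1][x - 1] if x and y else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half,
    the kind of weak feature AdaBoost combines into a face detector."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

In practice a ready-made cascade detector (such as OpenCV's Haar cascade) would be used rather than hand-rolled features; this sketch only illustrates what the feature computation consists of.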

In addition, the monitoring module 160 may further include a state recognition module 207, a closed-eye detection module 209, and a head-turn detection module 211. The state recognition module 207 determines, based on the result of the eye tracking module 203, whether the user is currently in a focused, dozing, or unfocused state. The closed-eye detection module 209 executes a closed-eye detection algorithm on the image sequence. The head-turn detection module 211 determines whether the user has turned the head based on nostril position information, thereby obtaining facial swing information. For example, after detecting a face, the face recognition module 201 may further search for the nostril area, that is, the locations of the two nostrils; the nostril position information is, for example, the positions of the two nostrils.

In other embodiments, a display may also be arranged at each position in the space S together with the image capturing unit 170, so as to show information such as course content. Accordingly, before the eye tracking computation is performed, a calibration procedure may first be executed for the image capturing units 170. If all image capturing units 170 share the same hardware specification, calibrating one of them suffices; if their specifications differ, each is calibrated individually. For example, before the eyeball position is detected, a plurality of calibration images are received in sequence from the image capturing unit 170. These calibration images are obtained while the user looks at a plurality of calibration points on the display unit 120, for example the four points at its upper-left, upper-right, lower-left, and lower-right corners. During the calibration procedure, the display unit 120 prompts the user to look at the four calibration points, whereby four calibration images are obtained. The calibration module then obtains a reference correction parameter from the two glint positions in the eye region of each calibration image. The two glints are reflections on the eyeball produced by a light-emitting module provided in the image capturing unit 170. The correction parameter is, for example, a vector based on the glint positions G1 and G2. In addition, a coordinate transformation matrix is generated from the calibration images by a perspective transformation; this matrix converts coordinate positions in the eye region into coordinate positions on the display.

The eye tracking module 203 detects the eye region in the current image of the image sequence to obtain the pupil position and the two glint positions in the current image (referred to below as glint positions G1' and G2'). From the glint positions G1' and G2' of the current image, the eye tracking module 203 obtains a comparison correction parameter, and from the reference correction parameter (C1) and the comparison correction parameter (C2) it further obtains a dynamic correction parameter (C3). For example, the dynamic correction parameter is the ratio of the two, that is, C3 = C2 / C1. The eye tracking module 203 then computes the eyeball movement coordinates, for example (X', Y'), from the glint position G1' (or G2') in the current image, the pupil position (for example, taken as the coordinates of the pupil center), and the dynamic correction parameter. Using the coordinate transformation matrix, the eye tracking module 203 converts the eyeball movement coordinates (X', Y') into gaze point coordinates on the display unit (for example, (Xs, Ys)) and records them. From the recorded gaze point coordinates, the movement trajectory of the eyeball can be obtained, and the direction in which the user is currently gazing can be determined.
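The dynamic correction C3 = C2/C1 compensates for changes in apparent eye size (for example, from the user's head moving closer to or farther from the camera) between calibration time and now, since the spacing of the two glints scales with it. A sketch under assumed conventions follows; the patent does not give the exact arithmetic, so treating the correction parameters as scalar glint distances and dividing the pupil-glint offset by C3 are both assumptions.

```python
import math

def glint_distance(g1, g2):
    """Correction parameter taken as the scalar distance between the two
    corneal glints (an assumed convention; the patent describes a
    glint-based vector)."""
    return math.hypot(g2[0] - g1[0], g2[1] - g1[1])

def eye_movement_coords(pupil, glint, c_ref, c_cur):
    """Pupil-minus-glint offset rescaled by the dynamic correction
    C3 = C2 / C1 so that current offsets stay comparable with the
    calibration-time offsets before the perspective transform is applied."""
    c3 = c_cur / c_ref                      # dynamic correction parameter
    return ((pupil[0] - glint[0]) / c3, (pupil[1] - glint[1]) / c3)
```

The resulting (X', Y') would then be mapped to screen coordinates (Xs, Ys) by the perspective-transformation matrix built during calibration.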

The steps of the monitoring method are illustrated below by example. FIG. 3 is a flowchart of a monitoring method in accordance with an embodiment of the invention. Referring to FIGS. 1 to 3, in step S305 the monitoring device 110 displays a user interface on the display unit 120. The user interface includes a plurality of seat blocks corresponding to a plurality of positions in the space S, each of which is provided with an image capturing unit 170. Taking a classroom as the space S, the seating chart of the classroom is displayed in the user interface.

Next, in step S310, the monitoring device 110 obtains the image sequences from the image capturing units 170 through the communication unit 130. The monitoring module 160 can then be used to analyze the image sequences.

In step S315, the monitoring module 160 executes the face recognition algorithm through the face recognition module 201 to determine whether the image sequence contains an image of a user, that is, to detect whether a user is present at the corresponding position (for example, whether a student is in attendance).

Next, in step S320, the monitoring module 160 executes the eye tracking algorithm through the eye tracking module 203. Specifically, the monitoring module 160 enables the eye tracking module 203 only after a user image has been detected in the image sequence.

Then, in step S325, the monitoring module 160 performs a corresponding marking action on each seat block according to whether the user gazes in the specified direction, for example, detecting whether the user is watching the display on his or her desk (used in conjunction with the image capturing unit 170), or watching a display or whiteboard (blackboard) placed at the front.

In the seating chart, colors or patterns indicate whether a student is absent, attentive, dozing, and so on. In addition, the state recognition module 207 may determine, based on the result of the eye tracking module 203, whether the user is currently in a focused, dozing, or unfocused state.

Furthermore, if the students using the positions have been assigned in advance and a database has been established beforehand in the storage unit 140 of the monitoring device 110, absences can further be recorded by name.

FIG. 4 is a schematic diagram of a user interface in accordance with an embodiment of the invention. Referring to FIG. 4, the user interface 400 is a seating chart including 5×4 seat blocks B1 to B20. The 5×4 layout is only an example; in other embodiments it can be adjusted according to the number of positions in the actual space S at which image capturing units 170 are disposed.

Taking the seat blocks B5, B8, B16, and B9 as examples, they respectively display a first mark 401, a second mark 402, a third mark 403, and a fourth mark 404. The first mark 401 in seat block B5 indicates that a user is present at the corresponding actual position and is currently focused. The second mark 402 in seat block B8 indicates that a user is present and currently dozing. The third mark 403 in seat block B16 indicates that a user is present and currently unfocused. The fourth mark 404 in seat block B9 indicates that no user is present at the corresponding actual position. Applied to teaching, the user interface 400 thus lets a teacher quickly see students' attendance and whether they are paying attention in class.

FIG. 5 is a flowchart of a monitoring method in accordance with another embodiment of the invention. Referring to FIGS. 1 to 2 and FIGS. 4 to 5, in step S505 the monitoring module 160 executes the face recognition algorithm through the face recognition module 201. In step S510, it determines whether the image sequence contains an image of a user, thereby determining whether a user is present at the actual position. If no user image is present in the image sequence, it is determined that there is no user at the position of the image capturing unit that transmitted the sequence, and in step S515 the marking module 205 displays the fourth mark 404 in the corresponding seat block.

Next, in step S520, the monitoring module 160 executes the eye tracking algorithm through the eye tracking module 203. In step S525, the eye tracking module 203 determines whether the user gazes in the specified direction. When the eye tracking module 203 determines that the user gazes in the specified direction, the state recognition module 207 determines that the user is currently focused, and in step S530 the marking module 205 displays the first mark 401 in the corresponding seat block.

In addition, when a user image is determined to be present in the image sequence, the head-turn detection module 211 may first determine, based on the facial swing information, whether the user faces the specified direction, so that the eye tracking algorithm is executed on the image sequence only when the user faces the specified direction. Whether the head-turn detection module 211 is used can be decided as circumstances require.

If the user is not gazing in the specified direction, step S535 is executed: the closed-eye detection module 209 executes a closed-eye detection algorithm on the image sequence. Next, in step S540, the closed-eye detection module 209 determines whether the user's eyes are closed, for example by examining the size of the detected eye object. For instance, when the height of the eye object is smaller than a height threshold (e.g., in the range of 5 to 7 pixels) and the width of the eye object is larger than a width threshold (e.g., in the range of 60 to 80 pixels), the eyes are judged to be closed; if these conditions are not met, the eyes are judged to be open.
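The size test above reduces to a simple predicate: a closed eye appears as a short, wide strip. The concrete default thresholds below are midpoints of the example ranges given in the text (5-7 px height, 60-80 px width) and are illustrative assumptions rather than values mandated by the method.

```python
def is_eye_closed(eye_width, eye_height, height_threshold=6, width_threshold=70):
    """Judge a detected eye object as closed when it is short and wide.

    Closed eye: height below the height threshold AND width above the
    width threshold. Defaults are assumed midpoints of the example
    ranges from the description.
    """
    return eye_height < height_threshold and eye_width > width_threshold
```

For example, a 75x4-pixel eye object is classified as closed, while a 75x12-pixel one is not.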

When the user is detected to be in the closed-eye state, the state recognition module 207 determines that the user is currently in a drowsy state, and in step S545 the marking module 205 displays the second marker 402 in the corresponding seat block. When the user is detected not to be in the closed-eye state, the state recognition module 207 determines that the user is currently in an unfocused state, and in step S550 the marking module 205 displays the third marker 403 in the corresponding seat block.
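Putting steps S510-S550 together, the per-seat classification is a small decision function. The string return values below are stand-ins for the markers 401-404 of the description; the three boolean inputs are assumed to come from the face recognition, eye tracking, and closed-eye detection modules respectively.

```python
def classify_seat(has_face, gazing_at_direction, eyes_closed):
    """Map the three detection results onto the four seat-block markers,
    mirroring the flow of FIG. 5:
      no portrait            -> fourth marker 404 (absent)
      gazing at direction    -> first marker 401 (focused)
      not gazing, eyes shut  -> second marker 402 (drowsy)
      not gazing, eyes open  -> third marker 403 (unfocused)
    """
    if not has_face:
        return "marker_404_absent"
    if gazing_at_direction:
        return "marker_401_focused"
    if eyes_closed:
        return "marker_402_drowsy"
    return "marker_403_unfocused"
```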

Alternatively, the first through fourth markers may be background colors of different hues: displaying a different background color in each seat block indicates the user's degree of concentration and attendance status.

In summary, the eye tracking algorithm is used to determine the user's degree of concentration, and the corresponding marker is presented in the corresponding seat block of the user interface. Accordingly, when applied to teaching, a teacher can learn the students' attendance and degree of concentration more intuitively. The method can also be applied to general lectures, allowing a lecturer to gauge the audience's concentration from the markers on the user interface and adjust the presentation accordingly to hold the audience's attention.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the art may make modifications and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

S305~S325: steps of the monitoring method

Claims (13)

1. A monitoring method for a monitoring device, wherein the monitoring device is located in a space, the method comprising: displaying a user interface on a display unit, wherein the user interface comprises a plurality of seat blocks, the seat blocks respectively correspond to a plurality of positions in the space, and each of the positions is provided with an image capturing unit; obtaining an image sequence from the image capturing unit at each of the positions; executing a face recognition algorithm on the image sequence to determine whether the image sequence contains a portrait of a user; if the image sequence contains the portrait, executing an eye tracking algorithm on the image sequence; and executing a corresponding marking action on each of the seat blocks based on whether the user gazes in a specified direction.

2. The method of claim 1, further comprising, after the step of executing the eye tracking algorithm on the image sequence: if the user gazes in the specified direction, determining that the user is currently in a focused state; and if the user does not gaze in the specified direction, executing a closed-eye detection algorithm on the image sequence to determine whether the user is currently in a drowsy state or an unfocused state.
3. The method of claim 2, further comprising, after the step of executing the closed-eye detection algorithm on the image sequence: if the user is detected to be in a closed-eye state, determining that the user is currently in the drowsy state; and if the user is not detected to be in the closed-eye state, determining that the user is currently in the unfocused state.

4. The method of claim 2, wherein the step of executing the corresponding marking action on each of the seat blocks based on whether the user gazes in the specified direction comprises: when the user is currently in the focused state, displaying a first marker in the corresponding seat block; when the user is currently in the drowsy state, displaying a second marker in the corresponding seat block; and when the user is currently in the unfocused state, displaying a third marker in the corresponding seat block.

5. The method of claim 1, further comprising, after the step of executing the face recognition algorithm: if the image sequence does not contain the portrait, displaying a fourth marker in the corresponding seat block.
6. The method of claim 1, further comprising, after the step of executing the face recognition algorithm on the image sequence: upon obtaining a face region, detecting a nostril area of the face region to obtain nostril position information; determining, based on the nostril position information, whether the user turns his or her head, thereby obtaining face swing information; and determining, based on the face swing information, whether the user faces the specified direction, so that the eye tracking algorithm is executed on the image sequence when it is determined that the user faces the specified direction.

7. A monitoring device, comprising: a display unit displaying a user interface, wherein the user interface comprises a plurality of seat blocks, the seat blocks respectively correspond to a plurality of positions in the same space as the monitoring device, and each of the positions is provided with an image capturing unit; a communication unit obtaining an image sequence from the image capturing unit at each of the positions; a storage unit storing the image sequence; and a processing unit coupled to the display unit, the communication unit, and the storage unit, the processing unit driving a monitoring module; wherein the monitoring module executes a face recognition algorithm on the image sequence to determine whether the image sequence contains a portrait of a user; if the image sequence contains the portrait, the monitoring module executes an eye tracking algorithm on the image sequence; and the monitoring module executes a corresponding marking action on each of the seat blocks based on whether the user gazes in a specified direction.

8. The monitoring device of claim 7, wherein the monitoring module further comprises: a face recognition module executing the face recognition algorithm on the image sequence; an eye tracking module executing the eye tracking algorithm on the image sequence; and a marking module executing the corresponding marking action on each of the seat blocks.

9. The monitoring device of claim 8, wherein the monitoring module further comprises: a state recognition module determining, according to a result of the eye tracking module, whether the user is currently in a focused state, a drowsy state, or an unfocused state; and a closed-eye detection module executing a closed-eye detection algorithm on the image sequence; wherein when the eye tracking module determines that the user gazes in the specified direction, the state recognition module determines that the user is currently in the focused state; and wherein when the eye tracking module determines that the user does not gaze in the specified direction, the state recognition module determines that the user is currently in the drowsy state if the closed-eye detection module detects that the user is in a closed-eye state, and determines that the user is currently in the unfocused state if the user is detected not to be in the closed-eye state.
10. The monitoring device of claim 9, wherein when the user is currently in the focused state, the marking module displays a first marker in the corresponding seat block; when the user is currently in the drowsy state, the marking module displays a second marker in the corresponding seat block; and when the user is currently in the unfocused state, the marking module displays a third marker in the corresponding seat block.

11. The monitoring device of claim 8, wherein when the face recognition module determines that the image sequence does not contain the portrait, the marking module displays a fourth marker in the corresponding seat block.

12. The monitoring device of claim 8, wherein the monitoring module further comprises: a head-swing detection module determining, based on nostril position information, whether the user turns his or her head, thereby obtaining face swing information, wherein upon obtaining a face region, the face recognition module detects a nostril area of the face region to obtain the nostril position information; and wherein when the head-swing detection module determines, based on the face swing information, that the user faces the specified direction, the eye tracking module executes the eye tracking algorithm on the image sequence.
13. A monitoring system, comprising: a plurality of image capturing units respectively disposed at a plurality of positions in a space; and a monitoring device disposed in the space, the monitoring device comprising: a display unit displaying a user interface, wherein the user interface comprises a plurality of seat blocks respectively corresponding to the positions; a communication unit obtaining an image sequence from each of the image capturing units; a storage unit storing the image sequence; and a processing unit coupled to the display unit and the communication unit, the processing unit driving a monitoring module; wherein the monitoring module executes a face recognition algorithm on the image sequence to determine whether the image sequence contains a portrait of a user; if the image sequence contains the portrait, the monitoring module executes an eye tracking algorithm on the image sequence; and the monitoring module executes a corresponding marking action on each of the seat blocks based on whether the user gazes in a specified direction.
TW102137367A 2013-10-16 2013-10-16 Method, apparatus and system for monitoring TW201516892A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW102137367A TW201516892A (en) 2013-10-16 2013-10-16 Method, apparatus and system for monitoring
CN201410054265.XA CN104571487B (en) 2013-10-16 2014-02-18 Monitoring method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW102137367A TW201516892A (en) 2013-10-16 2013-10-16 Method, apparatus and system for monitoring

Publications (1)

Publication Number Publication Date
TW201516892A true TW201516892A (en) 2015-05-01

Family

ID=53087773

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102137367A TW201516892A (en) 2013-10-16 2013-10-16 Method, apparatus and system for monitoring

Country Status (2)

Country Link
CN (1) CN104571487B (en)
TW (1) TW201516892A (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104935884A (en) * 2015-06-05 2015-09-23 重庆智韬信息技术中心 Method for intelligently monitoring class attendance order of students
CN105844253A (en) * 2016-04-01 2016-08-10 乐视控股(北京)有限公司 Mobile terminal image identification data comparison method and device
CN106297213A (en) * 2016-08-15 2017-01-04 欧普照明股份有限公司 Detection method, detection device and lighting
CN106372614A (en) * 2016-09-13 2017-02-01 南宁市远才教育咨询有限公司 Class discipline monitoring prompt auxiliary apparatus
CN106933367A (en) * 2017-03-28 2017-07-07 安徽味唯网络科技有限公司 It is a kind of to improve student and attend class the method for notice
US10643485B2 (en) * 2017-03-30 2020-05-05 International Business Machines Corporation Gaze based classroom notes generator
US10417502B2 (en) * 2017-12-15 2019-09-17 Accenture Global Solutions Limited Capturing series of events in monitoring systems
CN108460700B (en) * 2017-12-28 2021-11-16 北京科教科学研究院 Intelligent student education management regulation and control system
CN109448337A (en) * 2018-11-21 2019-03-08 重庆工业职业技术学院 Multimedia teaching is attended class based reminding method and system
CN109740498A (en) * 2018-12-28 2019-05-10 广东新源信息技术有限公司 A kind of wisdom classroom based on face recognition technology
CN117137427B (en) * 2023-08-31 2024-10-01 深圳市华弘智谷科技有限公司 A method and device for vision detection based on VR, and smart glasses

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556717A (en) * 2009-05-19 2009-10-14 上海海隆软件股份有限公司 ATM intelligent security system and monitoring method
CN102018519B (en) * 2009-09-15 2012-09-05 由田新技股份有限公司 Personnel concentration monitoring system
CN103208212A (en) * 2013-03-26 2013-07-17 陈秀成 Anti-cheating remote online examination method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI813329B (en) * 2021-06-11 2023-08-21 見臻科技股份有限公司 Cognitive assessment system
US12458264B2 (en) 2021-06-11 2025-11-04 Ganzin Technology, Inc. Cognitive assessment system based on eye movement

Also Published As

Publication number Publication date
CN104571487B (en) 2018-04-24
CN104571487A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
TW201516892A (en) Method, apparatus and system for monitoring
US11616906B2 (en) Electronic system with eye protection in response to user distance
US8243132B2 (en) Image output apparatus, image output method and image output computer readable medium
US20160054794A1 (en) Eye-control reminding method, eye-control image display method and display system
US7506979B2 (en) Image recording apparatus, image recording method and image recording program
CN105892647B (en) A kind of display screen method of adjustment, its device and display device
CN103513768B (en) A kind of control method based on mobile terminal attitudes vibration and device, mobile terminal
US8150118B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US20130190093A1 (en) System and method for tracking and mapping an object to a target
US20190098219A1 (en) Mobile device
CN105279459A (en) Terminal anti-peeping method and mobile terminal
US9498123B2 (en) Image recording apparatus, image recording method and image recording program stored on a computer readable medium
US10861423B2 (en) Display apparatus and display method thereof
US20170156585A1 (en) Eye condition determination system
CN105719439A (en) Eye protection system and method
CN109948435A (en) Sitting posture prompting method and device
US20110279665A1 (en) Image recording apparatus, image recording method and image recording program
CN108154450A (en) Digital studying intelligent monitor system
WO2020116181A1 (en) Concentration degree measurement device and concentration degree measurement method
CN110148092A (en) The analysis method of teenager's sitting posture based on machine vision and emotional state
CN113570916A (en) Multimedia remote teaching auxiliary method, equipment and system
JP2020126214A (en) Information processing apparatus and information processing method
CN111582003A (en) Sight tracking student classroom myopia prevention system
CN113936323A (en) Detection method and device, terminal and storage medium
KR20230079942A (en) Apparatus for display control for eye tracking and method thereof