
TWI551861B - System and method for estimating carbohydrate content - Google Patents

System and method for estimating carbohydrate content

Info

Publication number
TWI551861B
Authority
TW
Taiwan
Prior art keywords
tested
data
identity
estimating
light field
Prior art date
Application number
TW103145951A
Other languages
Chinese (zh)
Other versions
TW201623960A (en)
Inventor
曹思漢
藍永松
林雁容
張奇偉
賴才雅
莊凱評
宋新岳
Original Assignee
財團法人工業技術研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人工業技術研究院 filed Critical 財團法人工業技術研究院
Priority to TW103145951A priority Critical patent/TWI551861B/en
Publication of TW201623960A publication Critical patent/TW201623960A/en
Application granted granted Critical
Publication of TWI551861B publication Critical patent/TWI551861B/en


Landscapes

  • Investigating Or Analysing Materials By Optical Means (AREA)

Description

估算碳水化合物含量之系統以及方法 System and method for estimating carbohydrate content

本發明是有關於一種估算待測物中的營養成分含量之系統與方法,且特別是有關於一種估算碳水化合物含量之系統以及方法。 The present invention relates to a system and method for estimating nutrient content in an analyte, and more particularly to a system and method for estimating carbohydrate content.

隨著科技的發展,現代人的生活型態亦有相當大的轉變。雖然醫藥技術十分發達,但由於環境與飲食習慣等的改變,造成如氣喘、癌症、糖尿病、心血管疾病等的慢性疾病盛行。尤其近年來,在開發中國家及新興工業化國家中,糖尿病發生率(incidence)與盛行率(prevalence)正快速增加中。目前全球約有一億九千萬名糖尿病的患者,但據世界衛生組織估計,2025年全球將會有三億三千萬名糖尿病的患者,且其中絕大多數為第2型糖尿病。在台灣,自1987年起糖尿病始終高居十大死亡原因的第五名,且在2011年糖尿病升至第四名,為過去二十年來死亡率增加速度最快的一種疾病。 With the development of science and technology, the lifestyle of modern people has also undergone considerable changes. Although medical technology is very developed, chronic diseases such as asthma, cancer, diabetes, and cardiovascular diseases are prevalent due to changes in the environment and eating habits. Especially in recent years, among developing countries and newly industrialized countries, diabetes incidence and prevalence are rapidly increasing. There are currently about 190 million people with diabetes worldwide, but according to World Health Organization estimates, there will be 330 million people with diabetes worldwide in 2025, and most of them are type 2 diabetes. In Taiwan, diabetes has been the fifth most common cause of death since 1987, and in 2011, diabetes rose to fourth place, the fastest-growing disease in the past two decades.

從社會成本的觀點來看，糖尿病的盛行所造成的社會負擔亦不可小覷。根據統計，美國糖尿病患者之人數約為2,800萬，約占總人口比例的8.3%，而2012年美國花費在糖尿病的醫療成本為2,450億美元，其中1,760億美元為直接醫療成本，690億為糖尿病醫療照護間接成本(資料來源：美國糖尿病協會(American Diabetes Association)(2012))。另一方面，自1997年至2009年，台灣糖尿病患者之人數由53.8萬增加至122.3萬，成長超過2.3倍。此外，糖尿病患者醫療總費用也逐年增加，於2009年已突破1,000億，約占整體健保總支出的22%；亦即，我國政府每日約需花費3億元於糖尿病相關的醫療費用上(資料來源：中華民國糖尿病學會(2011))。 From the standpoint of social cost, the burden imposed by the prevalence of diabetes should not be underestimated. According to statistics, about 28 million people in the United States have diabetes, roughly 8.3% of the total population, and in 2012 the United States spent 245 billion US dollars on diabetes care, of which 176 billion US dollars were direct medical costs and 69 billion US dollars were indirect costs of diabetes care (source: American Diabetes Association (2012)). Meanwhile, from 1997 to 2009 the number of diabetic patients in Taiwan increased from 538,000 to 1,223,000, a growth of more than 2.3 times. In addition, the total medical expenditure on diabetic patients has also risen year by year, exceeding NT$100 billion in 2009 and accounting for about 22% of total National Health Insurance expenditure; in other words, the government of Taiwan spends roughly NT$300 million per day on diabetes-related medical costs (source: Diabetes Association of the Republic of China (2011)).

對於糖尿病患者來說,控制血糖是非常重要的。血糖會受到飲食、運動、情緒等因素的影響而不斷變動。由於血糖的變動不易察覺,因此必須透過規律的自我監測,來確保血糖值控制在目標範圍內。此外,透過了解血糖值的變化情況(例如追蹤並記錄每日血糖值的高點與低點),才能更有效的評估飲食、活動及藥物等處方的成效,並適時調整療程。 For people with diabetes, controlling blood sugar is very important. Blood sugar is constantly changing due to factors such as diet, exercise, and mood. Since changes in blood sugar are not easily detectable, regular self-monitoring is necessary to ensure that blood glucose levels are controlled within the target range. In addition, by understanding the changes in blood glucose levels (such as tracking and recording the highs and lows of daily blood glucose values), we can more effectively assess the effectiveness of prescriptions such as diet, activities and drugs, and adjust the course of treatment in a timely manner.

然而，一般糖尿病的患者缺乏能進行個人糖分攝取的管理的工具，而不易進行日常飲食的控制，也會造成病情不穩定的問題，間接增加國家的醫療成本。有鑒於此，需要開發能夠估算碳水化合物含量的系統與方法，以利個人進行糖分攝取的管理。 However, diabetic patients generally lack tools for managing their personal sugar intake, which makes it difficult to control their daily diet, leads to unstable disease control, and indirectly increases national medical costs. In view of this, there is a need to develop systems and methods capable of estimating carbohydrate content, so that individuals can manage their sugar intake.

本發明提供一種估算碳水化合物含量的系統以及方法,可應用於確認待測物中碳水化合物之含量,從而讓使用者(例如:糖尿病患者)能夠容易地進行糖分攝取的管理。 The present invention provides a system and method for estimating carbohydrate content, which can be applied to confirm the content of carbohydrates in a test substance, thereby enabling a user (for example, a diabetic patient) to easily manage sugar intake.

本發明的估算碳水化合物含量之系統，包括：影像資料擷取模組，擷取待測物的影像資料，上述影像資料包含光場數據以及待測物身分辨識數據；數據處理比對模組，耦接於上述影像資料擷取模組，且上述數據處理比對模組根據上述待測物身分辨識數據在資料庫進行比對而取得上述待測物的身分，並根據上述光場數據估算上述待測物的體積；以及成分整合模組，耦接於上述數據處理比對模組，且上述成分整合模組利用上述待測物的身分取得上述待測物的碳水化合物換算數據，並利用上述待測物的碳水化合物換算數據以及上述待測物的體積而計算出上述待測物的碳水化合物含量。 The system for estimating carbohydrate content of the present invention includes: an image data capture module that captures image data of an object to be tested, the image data including light field data and identity identification data of the object; a data processing comparison module, coupled to the image data capture module, that obtains the identity of the object by comparing the identity identification data against a database and estimates the volume of the object from the light field data; and a component integration module, coupled to the data processing comparison module, that uses the identity of the object to obtain carbohydrate conversion data for the object and calculates the carbohydrate content of the object from the carbohydrate conversion data and the volume of the object.

在本發明的一實施例中,上述待測物身分辨識數據包括上述待測物之二維/三維影像數據及/或上述待測物之光譜數據。 In an embodiment of the invention, the object identification data includes the two-dimensional/three-dimensional image data of the object to be tested and/or the spectral data of the object to be tested.

在本發明的一實施例中，上述數據處理比對模組包括：影像前處理模組，對上述光場數據以及上述待測物身分辨識數據進行前處理；特徵比對模組，耦接於上述影像前處理模組，上述特徵比對模組根據上述待測物身分辨識數據在上述資料庫進行比對而取得上述待測物的身分；體積估算模組，耦接於上述特徵比對模組，根據上述光場數據估算上述待測物的體積。 In an embodiment of the invention, the data processing comparison module includes: an image pre-processing module that pre-processes the light field data and the identity identification data of the object to be tested; a feature comparison module, coupled to the image pre-processing module, that obtains the identity of the object by comparing the identity identification data against the database; and a volume estimation module, coupled to the feature comparison module, that estimates the volume of the object from the light field data.

在本發明的一實施例中，上述影像資料擷取模組包括光場相機，且上述體積估算模組是藉由下式(1)來進行待測物的體積(V)估算：V=fVol(K,θ,D,H)............式(1) In an embodiment of the invention, the image data capture module includes a light field camera, and the volume estimation module estimates the volume (V) of the object to be tested by the following formula (1): V=fVol(K,θ,D,H)............(1)

其中，K表示待測物之平均截面積；θ表示光場相機的感應器的陀螺儀對待測物所取得的夾角；D表示感應器與待測物之間的距離；H則表示待測物的高度，H=cosθ‧D。 where K denotes the average cross-sectional area of the object to be tested; θ denotes the angle obtained by the gyroscope of the light field camera's sensor with respect to the object; D denotes the distance between the sensor and the object; and H denotes the height of the object, H=cosθ‧D.
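
As a rough illustration of how formula (1) could be evaluated in software, the sketch below assumes the simplest plausible instantiation, in which the volume is approximated as the average cross-sectional area K multiplied by the height H = cosθ‧D. The patent does not disclose the exact form of fVol, so the function name and this approximation are assumptions for illustration only.

```python
import math

def estimate_volume(avg_cross_section_k, tilt_theta_rad, sensor_distance_d):
    """Hypothetical instantiation of formula (1): V = fVol(K, theta, D, H).

    Assumes the simplest plausible model, V = K * H with H = cos(theta) * D.
    The patent leaves the exact form of fVol unspecified; this is only a sketch.
    """
    height_h = math.cos(tilt_theta_rad) * sensor_distance_d  # H = cos(theta) * D
    return avg_cross_section_k * height_h

# Example: K = 25 cm^2, sensor tilted 30 degrees, 20 cm from the object
v = estimate_volume(25.0, math.radians(30.0), 20.0)
print(f"estimated volume: {v:.1f} cm^3")
```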

在本發明的一實施例中，上述影像資料擷取模組包括光場相機，上述待測物為盛裝在餐具內部，且上述體積估算模組是藉由下式(2)來進行待測物的體積(Vol)估算：Vol=∬[z0(x,y)-z(x,y)]dxdy............式(2) In an embodiment of the invention, the image data capture module includes a light field camera, the object to be tested is held inside a dish, and the volume estimation module estimates the volume (Vol) of the object by the following formula (2): Vol=∬[z0(x,y)-z(x,y)]dxdy............(2)

其中,以餐具的開口緣部作為參考平面而設定出X、Y及Z方向,z0(x,y)表示於Z方向上由光場相機至餐具底部的距離,z(x,y)表示於Z方向上由光場相機至待測物表面的距離。 Wherein, the X, Y, and Z directions are set with the opening edge portion of the tableware as a reference plane, and z 0 (x, y) represents the distance from the light field camera to the bottom of the tableware in the Z direction, and z(x, y) represents The distance from the light field camera to the surface of the object to be tested in the Z direction.

在本發明的一實施例中,上述特徵比對模組包括:影像比對單元,根據上述待測物之二維/三維影像數據取得上述待測物的身分;光譜比對單元,根據上述待測物之光譜數據取得上述待測物的身分;以及資料輸入單元,供操作者輸入上述待測物的身分。 In an embodiment of the present invention, the feature comparison module includes: an image comparison unit, obtaining the identity of the object to be tested according to the two-dimensional/three-dimensional image data of the object to be tested; and the spectral comparison unit according to the The spectral data of the measuring object obtains the identity of the object to be tested; and the data input unit for the operator to input the identity of the object to be tested.

在本發明的一實施例中,上述操作者是以語音或文字方式輸入上述待測物的身分。 In an embodiment of the invention, the operator inputs the identity of the object to be tested in a voice or text manner.

在本發明的一實施例中，在上述體積估算模組中進行影像深度估算程序以建立待測物的深度圖，再結合由光譜比對單元所得的特徵標示影像資料，以估算待測物的體積。 In an embodiment of the invention, an image depth estimation procedure is performed in the volume estimation module to build a depth map of the object to be tested, which is then combined with the feature-labelled image data obtained by the spectral comparison unit to estimate the volume of the object.

在本發明的一實施例中,上述估算碳水化合物含量之系統更包括遠端裝置,至少與上述成分整合模組耦接,接收由上述成分整合模組所計算出的結果。 In an embodiment of the invention, the system for estimating carbohydrate content further includes a remote device coupled to at least the component integration module to receive a result calculated by the component integration module.

本發明的估算碳水化合物含量之方法，包括如下步驟：擷取待測物的影像資料，上述影像資料包含光場數據以及待測物身分辨識數據；根據上述待測物身分辨識數據在資料庫進行比對而取得上述待測物的身分；根據上述光場數據估算上述待測物的體積；以及利用上述待測物的身分取得上述待測物的碳水化合物換算數據，並利用上述待測物的碳水化合物換算數據以及上述待測物的體積而計算出上述待測物的碳水化合物含量。 The method for estimating carbohydrate content of the present invention includes the following steps: capturing image data of an object to be tested, the image data including light field data and identity identification data of the object; obtaining the identity of the object by comparing the identity identification data against a database; estimating the volume of the object from the light field data; and using the identity of the object to obtain carbohydrate conversion data for the object, and calculating the carbohydrate content of the object from the carbohydrate conversion data and the volume of the object.
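
Read as a pipeline, the four steps can be strung together as in the sketch below; the callables passed in are placeholders standing in for the identification, volume-estimation and database modules described in the embodiments, not an API defined by the patent.

```python
def estimate_carbohydrate(capture, identify, estimate_volume, lookup_food):
    """Hypothetical end-to-end sketch of the method's steps.

    identify, estimate_volume and lookup_food are callables standing in for the
    identification, volume-estimation and database modules of the system.
    """
    identity = identify(capture)                   # identity via image/spectral/manual matching
    volume_cm3 = estimate_volume(capture)          # light field -> depth map -> volume
    density_g_per_cm3, carb_g_per_g = lookup_food(identity)  # conversion data from the database
    weight_g = volume_cm3 * density_g_per_cm3      # volume -> weight
    return weight_g * carb_g_per_g                 # weight -> carbohydrate content (g)
```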

在本發明的一實施例中,上述待測物身分辨識數據包括上述待測物之二維/三維影像數據及/或上述待測物之光譜數據。 In an embodiment of the invention, the object identification data includes the two-dimensional/three-dimensional image data of the object to be tested and/or the spectral data of the object to be tested.

在本發明的一實施例中,上述待測物之二維/三維影像數據是由光場相機所取得。 In an embodiment of the invention, the two-dimensional/three-dimensional image data of the object to be tested is obtained by a light field camera.

在本發明的一實施例中，根據上述待測物身分辨識數據在上述資料庫進行比對而取得上述待測物的身分之步驟包括：先根據上述待測物之二維/三維影像數據取得上述待測物的身分，在無法根據上述待測物之二維/三維影像數據取得上述待測物的身分時，則根據上述待測物之光譜數據取得上述待測物的身分。 In an embodiment of the invention, the step of obtaining the identity of the object to be tested by comparison against the database includes: first attempting to obtain the identity of the object from its two-dimensional/three-dimensional image data, and, when the identity cannot be obtained from the two-dimensional/three-dimensional image data, obtaining the identity of the object from its spectral data.

在本發明的一實施例中，在無法根據上述待測物的二維/三維影像數據或光譜數據取得上述待測物的身分時，透過人工判定方式取得上述待測物的身分。 In an embodiment of the invention, when the identity of the object to be tested cannot be obtained from its two-dimensional/three-dimensional image data or its spectral data, the identity of the object is obtained by manual determination.

在本發明的一實施例中,在透過人工判定方式取得上述待測物的身分後,將上述待測物身分辨識數據加入上述資料庫中。 In an embodiment of the present invention, after the identity of the object to be tested is obtained by a manual determination method, the object identification data of the object to be tested is added to the database.

在本發明的一實施例中,根據光場數據估算待測物的體積之步驟包括:進行影像深度估算程序以建立待測物的深度圖。 In an embodiment of the invention, the step of estimating the volume of the object to be tested according to the light field data comprises: performing an image depth estimation process to establish a depth map of the object to be tested.

在本發明的一實施例中，在利用光場相機擷取待測物的影像資料的情況下，根據光場數據估算待測物的體積之步驟包括：藉由下式(1)來進行待測物的體積(V)估算：V=fVol(K,θ,D,H)............式(1)其中，K表示待測物之平均截面積；θ表示光場相機的感應器的陀螺儀對待測物所取得的夾角；D表示感應器與待測物之間的距離；H則表示待測物的高度，H=cosθ‧D。 In an embodiment of the invention, where the image data of the object to be tested is captured with a light field camera, the step of estimating the volume of the object from the light field data includes estimating the volume (V) of the object by the following formula (1): V=fVol(K,θ,D,H)............(1), where K denotes the average cross-sectional area of the object; θ denotes the angle obtained by the gyroscope of the light field camera's sensor with respect to the object; D denotes the distance between the sensor and the object; and H denotes the height of the object, H=cosθ‧D.

在本發明的一實施例中，在利用光場相機擷取待測物的影像資料，且待測物為盛裝在餐具內部的情況下，根據光場數據估算待測物的體積之步驟包括：藉由下式(2)來進行待測物的體積(Vol)估算：Vol=∬[z0(x,y)-z(x,y)]dxdy............式(2)其中，以餐具的開口緣部作為參考平面而設定出X、Y及Z方向，z0(x,y)表示於Z方向上由光場相機至餐具底部的距離，z(x,y)表示於Z方向上由光場相機至待測物表面的距離。 In an embodiment of the invention, where the image data of the object to be tested is captured with a light field camera and the object is held inside a dish, the step of estimating the volume of the object from the light field data includes estimating the volume (Vol) of the object by the following formula (2): Vol=∬[z0(x,y)-z(x,y)]dxdy............(2), where the X, Y and Z directions are defined with the opening rim of the dish as the reference plane, z0(x,y) denotes the distance in the Z direction from the light field camera to the bottom of the dish, and z(x,y) denotes the distance in the Z direction from the light field camera to the surface of the object.

基於上述，藉由本發明所提供之系統與方法，能夠透過非接觸式且非破壞式的方式確認待測物之種類，並且得到其碳水化合物含量之數據，從而實現讓使用者(例如：糖尿病患者)能夠容易地進行糖分攝取管理的互動式智慧服務系統。 Based on the above, the system and method provided by the present invention can identify the type of an object to be tested in a non-contact and non-destructive manner and obtain data on its carbohydrate content, thereby realizing an interactive smart service system that allows users (for example, diabetic patients) to easily manage their sugar intake.

為讓本發明的上述特徵和優點能更明顯易懂,下文特舉實施例,並配合所附圖式作詳細說明如下。 The above described features and advantages of the invention will be apparent from the following description.

100‧‧‧估算碳水化合物含量之系統 100‧‧‧System for estimating carbohydrate content

102‧‧‧影像資料擷取模組 102‧‧‧Image data capture module

102a、128‧‧‧光場相機 102a, 128‧‧‧ light field camera

102b‧‧‧光譜相機 102b‧‧‧Spectral camera

104‧‧‧數據處理比對模組 104‧‧‧Data processing comparison module

106‧‧‧成分整合模組 106‧‧‧Component Integration Module

108‧‧‧遠端裝置 108‧‧‧Remote device

110‧‧‧影像前處理模組 110‧‧‧Image pre-processing module

112‧‧‧特徵比對模組 112‧‧‧Feature comparison module

112a‧‧‧影像比對單元 112a‧‧‧Image comparison unit

112b‧‧‧光譜比對單元 112b‧‧‧spectral comparison unit

112c‧‧‧資料輸入單元 112c‧‧‧Data input unit

112d‧‧‧語音前處理單元 112d‧‧‧Voice pre-processing unit

112e‧‧‧語音辨識單元 112e‧‧‧Voice recognition unit

114‧‧‧體積估算模組 114‧‧‧ Volume Estimation Module

116‧‧‧資料庫 116‧‧‧Database

118‧‧‧影像資料 118‧‧‧Image data

120‧‧‧待測物身分序號 120‧‧‧Substance ID number

122‧‧‧待測物體積數據 122‧‧‧Study volume data

124、132‧‧‧待測物 124, 132‧‧‧Test objects

126‧‧‧感測器 126‧‧‧ sensor

130‧‧‧餐具 130‧‧‧Tableware

D‧‧‧距離 D‧‧‧Distance

H‧‧‧高度 H‧‧‧ Height

θ‧‧‧角度 Θ‧‧‧ angle

S1、S2、S3、S4、S101、S102、S201、S202、S203、S204、S205、S301、S401‧‧‧步驟 Steps S1, S2, S3, S4, S101, S102, S201, S202, S203, S204, S205, S301, S401‧‧

X、Y、Z‧‧‧方向 X, Y, Z‧‧ Direction

圖1是依照本發明一實施例所繪示的估算碳水化合物含量之系統的示意圖。 FIG. 1 is a schematic diagram of a system for estimating carbohydrate content according to an embodiment of the invention.

圖2是本發明的估算碳水化合物含量之方法的概略流程圖。 FIG. 2 is a schematic flow chart of the method for estimating carbohydrate content of the present invention.

圖3是依照本發明一實施例所繪示的估算碳水化合物含量之系統流程圖。 FIG. 3 is a system flow chart for estimating carbohydrate content according to an embodiment of the invention.

圖4是依照本發明一實施例所繪示的估算待測物體積的示意圖。 FIG. 4 is a schematic diagram of estimating a volume of an object to be tested according to an embodiment of the invention.

圖5是依照本發明另一實施例所繪示的估算待測物體積的示意圖。 FIG. 5 is a schematic diagram of estimating a volume of a sample to be tested according to another embodiment of the invention.

以下將參照所附圖式,對本發明的實施方式進行更詳細的說明。 Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings.

圖1是依照本發明一實施例所繪示的估算碳水化合物含量之系統的示意圖。 FIG. 1 is a schematic diagram of a system for estimating carbohydrate content according to an embodiment of the invention.

請參照圖1,本實施例的估算碳水化合物含量之系統100包括影像資料擷取模組102、數據處理比對模組104以及成分整合模組106。 Referring to FIG. 1 , the system 100 for estimating carbohydrate content in the embodiment includes an image data capturing module 102 , a data processing comparison module 104 , and a component integration module 106 .

影像資料擷取模組102用於擷取待測物的影像資料。上述待測物例如是僅含單一成分的食物,亦可為包含多種成分的混合食物,但並不限於此。上述影像資料例如是光場數據、二維/三維影像數據(即,二維或三維影像數據)以及光譜數據。其中,二維/三維影像數據及光譜數據均可作為待測物的身分辨識用數據(以下簡稱為「待測物身分辨識數據」),而上述光譜數據例如是近紅外線光譜數據。 The image data capturing module 102 is configured to capture image data of the object to be tested. The above-mentioned test object is, for example, a food containing only a single component, and may be a mixed food containing a plurality of components, but is not limited thereto. The above image data is, for example, light field data, two-dimensional/three-dimensional image data (ie, two-dimensional or three-dimensional image data), and spectral data. The two-dimensional/three-dimensional image data and the spectral data can be used as the identification data for the object to be tested (hereinafter referred to as "the object identification data"), and the spectral data is, for example, near-infrared spectrum data.
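
To make the relationship between the three kinds of captured data concrete, the following sketch defines a minimal record type for one capture; the field names and array shapes are illustrative assumptions rather than anything specified by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureRecord:
    """One capture of the object to be tested (illustrative field names and shapes)."""
    light_field: np.ndarray   # e.g. (views_u, views_v, H, W, 3) sub-aperture images
    image_2d: np.ndarray      # (H, W, 3) colour image usable as identity data
    nir_spectrum: np.ndarray  # (H, W, bands) near-infrared hyperspectral cube

    def identification_data(self):
        # Both the 2D/3D image and the spectrum can serve as identity identification data.
        return self.image_2d, self.nir_spectrum
```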

由圖1可知，在本實施例中，影像資料擷取模組102包括光場相機102a以及光譜相機102b。光場相機102a可用於拍攝待測物並取得待測物的光場數據及二維/三維影像數據，而光譜相機102b則可用來取得待測物的光譜數據，例如是近紅外線光譜數據。然而，本發明並不限於上述配置方式。舉例來說，亦可採用一般數位相機與光場相機進行組合，藉由一般數位相機取得待測物的二維影像，並由光場相機取得光場數據，以作為待測物身分辨識數據。此外，亦可使用同時具備光場數據及光譜數據擷取功能的相機，藉此可透過單一曝光的方式取得估算碳水化合物含量所需的資訊，亦即，只要進行一次拍攝就能夠獲取待測物的光場數據、二維/三維影像數據以及光譜數據，更有利於提升檢測速度。 As shown in FIG. 1, in this embodiment the image data capture module 102 includes a light field camera 102a and a spectral camera 102b. The light field camera 102a can photograph the object to be tested and obtain its light field data and two-dimensional/three-dimensional image data, while the spectral camera 102b can obtain the spectral data of the object, for example near-infrared spectral data. However, the invention is not limited to this configuration. For example, an ordinary digital camera may be combined with a light field camera, with the ordinary digital camera capturing a two-dimensional image of the object and the light field camera capturing the light field data, to serve as the identity identification data of the object. Alternatively, a camera capable of capturing both light field data and spectral data may be used, so that the information needed to estimate the carbohydrate content can be obtained with a single exposure; that is, one shot is enough to acquire the light field data, two-dimensional/three-dimensional image data and spectral data of the object, which further helps improve the detection speed.

數據處理比對模組104耦接於影像資料擷取模組102。在本實施例中,數據處理比對模組104包括影像前處理模組110、特徵比對模組112以及體積估算模組114。 The data processing comparison module 104 is coupled to the image data capturing module 102. In the embodiment, the data processing comparison module 104 includes an image pre-processing module 110, a feature comparison module 112, and a volume estimation module 114.

影像前處理模組110可對待測物的光場數據以及待測物身分辨識數據進行前處理。上述前處理例如是對上述數據進行陣列轉換以及校正。於修正後,影像前處理模組110可將數據傳送至特徵比對模組112。 The image pre-processing module 110 can perform pre-processing on the light field data of the object to be tested and the identification data of the object to be tested. The above pre-processing is, for example, performing array conversion and correction on the above data. After the correction, the image pre-processing module 110 can transmit the data to the feature comparison module 112.

特徵比對模組112耦接於影像前處理模組110，其可接收經影像前處理模組處理後的數據，並根據待測物身分辨識數據在資料庫116進行比對而取得待測物的身分。 The feature comparison module 112 is coupled to the image pre-processing module 110; it receives the data processed by the image pre-processing module and obtains the identity of the object to be tested by comparing the identity identification data against the database 116.

資料庫116例如是由一或多個子資料庫組成的資料集合，其中包括各類食物及營養成分的光譜、影像或是食物名稱語音的特徵辨識資料、以及各類食物及營養成分的體積/重量轉換資料等。此外，各個子資料庫例如是建立於資料儲存裝置中的資料庫，亦可為以網路為基礎(web-based)的資料庫，並不限於此。作為子資料庫的具體實例，可列舉：行政院衛生署之食品營養成分資料庫、國家衛生研究院所提供之食品資料庫或業者自行建立的食品資料庫等。 The database 116 is, for example, a data collection composed of one or more sub-databases, which include feature identification data for the spectra, images or spoken food names of various foods and nutrients, as well as volume/weight conversion data for various foods and nutrients. Each sub-database may be a database built in a data storage device or a web-based database, but is not limited thereto. Specific examples of sub-databases include the food nutrition database of the Department of Health of the Executive Yuan, food databases provided by the National Health Research Institutes, or food databases built by the operator itself.

如圖1所示，本實施例中，上述特徵比對模組112包括影像比對單元112a、光譜比對單元112b以及資料輸入單元112c。影像比對單元112a可根據待測物之二維/三維影像數據，在資料庫116中進行比對而取得待測物的身分。光譜比對單元112b則可根據待測物之光譜數據，在資料庫116中進行比對而取得待測物的身分。資料輸入單元112c可供操作者輸入上述待測物的身分。上述操作者例如是使用者本身或位於後端的具備食品營養知識之專業人員。此外，輸入待測物身分的方式並無特別限定，例如是透過語音或文字輸入的方式來指定待測物的身分。上述待測物的身分例如是以資料庫中所指定的序號來表示，但並不限於此。 As shown in FIG. 1, in this embodiment the feature comparison module 112 includes an image comparison unit 112a, a spectral comparison unit 112b and a data input unit 112c. The image comparison unit 112a can obtain the identity of the object to be tested by comparing its two-dimensional/three-dimensional image data against the database 116. The spectral comparison unit 112b can obtain the identity of the object by comparing its spectral data against the database 116. The data input unit 112c allows an operator to input the identity of the object; the operator may be the user or a back-end professional with food and nutrition expertise. The way the identity is input is not particularly limited; for example, it may be specified by voice or text input. The identity of the object is represented, for example, by a serial number assigned in the database, but is not limited thereto.

體積估算模組114耦接於特徵比對模組112，且體積估算模組114可根據光場數據估算待測物的體積。上述待測物的體積估算程序例如是藉由影像深度估算程序對光場數據進行分析後，並接收由光譜比對單元112b所傳回的特徵標示影像資料進行數值轉換，從而進行待測物的體積估算。於完成體積估算後，體積估算模組114可將所得的結果傳送至成分整合模組106。 The volume estimation module 114 is coupled to the feature comparison module 112 and can estimate the volume of the object to be tested from the light field data. For example, the light field data is first analysed by an image depth estimation procedure, and the feature-labelled image data returned by the spectral comparison unit 112b is then received and numerically converted, so as to estimate the volume of the object. Once the volume estimation is completed, the volume estimation module 114 transmits the result to the component integration module 106.

成分整合模組106耦接於數據處理比對模組104,且成分整合模組106利用待測物的身分取得待測物的碳水化合物換算數據,並利用待測物的碳水化合物換算數據以及待測物的體積而計算出待測物的碳水化合物含量。 The component integration module 106 is coupled to the data processing comparison module 104, and the component integration module 106 obtains the carbohydrate conversion data of the object to be tested by using the identity of the object to be tested, and uses the carbohydrate conversion data of the object to be tested and The volume of the analyte is measured to calculate the carbohydrate content of the analyte.

此外,估算碳水化合物含量之系統100更可包括遠端裝置108。遠端裝置108與成分整合模組106以及影像資料擷取模組102耦接,可接收由成分整合模組106所計算出的結果,並將結果透過影像顯示或聲音等方式呈現給使用者。此外,遠端裝置108也可透過使用者的操作,傳送起始訊號至影像資料擷取模組102,以啟動影像擷取流程。遠端裝置108例如是智慧型手機、平板、筆記型電腦或其他可攜式裝置,但不限於此,遠端裝置108亦可視需求而設計為固定式的設備。 Additionally, the system 100 for estimating carbohydrate content may further include a remote device 108. The remote device 108 is coupled to the component integration module 106 and the image data capture module 102, and can receive the result calculated by the component integration module 106, and present the result to the user through image display or sound. In addition, the remote device 108 can also transmit the start signal to the image data capturing module 102 through the operation of the user to start the image capturing process. The remote device 108 is, for example, a smart phone, a tablet, a notebook computer or other portable device, but is not limited thereto, and the remote device 108 can also be designed as a stationary device according to requirements.

此外，就系統之硬體架構而言，數據處理比對模組104與成分整合模組106例如是整合於一或多個處理器內，並透過通信鏈(communication link)與資料庫116及影像資料擷取模組102進行資訊的交換，以執行其功能，但並不限於此。本領域具通常知識者應理解本發明可藉由一或多個處理器、資料庫與運算系統等的整合來實現。 In terms of the hardware architecture of the system, the data processing comparison module 104 and the component integration module 106 are, for example, integrated into one or more processors and exchange information with the database 116 and the image data capture module 102 through a communication link in order to perform their functions, but the invention is not limited thereto. Those of ordinary skill in the art will appreciate that the invention can be implemented by integrating one or more processors, databases, computing systems and the like.

圖2是本發明的估算碳水化合物含量之方法的概略流程圖，而圖3是依照本發明一實施例所繪示的估算碳水化合物含量之系統流程圖。圖3所示的系統流程圖例如可藉由圖1的實施例所述估算碳水化合物含量之系統來執行，但並不限於此。 FIG. 2 is a schematic flow chart of the method for estimating carbohydrate content of the present invention, and FIG. 3 is a system flow chart for estimating carbohydrate content according to an embodiment of the invention. The system flow shown in FIG. 3 can be executed, for example, by the system for estimating carbohydrate content described in the embodiment of FIG. 1, but is not limited thereto.

以下,將參照圖1的實施例所述估算碳水化合物含量之系統,並搭配圖2及圖3來詳細說明本發明的估算碳水化合物含量之方法。 Hereinafter, the method for estimating the carbohydrate content according to the embodiment of Fig. 1 will be described with reference to Figs. 2 and 3 to explain in detail the method for estimating the carbohydrate content of the present invention.

首先,進行步驟S1,擷取待測物的影像資料。例如可藉由遠端裝置108向影像資料擷取模組102發出起始訊號,對待測物進行影像資料擷取(步驟S101),而獲得影像資料118。影像資料118包括待測物之光場數據、二維/三維影像數據以及光譜數據。上述光場數據及二維/三維影像數據例如是由光場相機所取得,但並不限於此,只要能夠取得所需的數據,亦可使用其他類型的相機。上述光譜數據例如是近紅外線光譜數據。之後,例如是將影像資料118傳輸至影像前處理模組110進行前處理(步驟S102)。前處理可包括對初級的影像資料118進行陣列轉換(array transformation)、顏色修正(color correction)以及校正(calibration)等處理。 First, step S1 is performed to extract image data of the object to be tested. For example, the remote device 108 sends a start signal to the image data capturing module 102, and the image data is captured by the object to be tested (step S101), and the image data 118 is obtained. The image data 118 includes light field data, two-dimensional/three-dimensional image data, and spectral data of the object to be tested. The above-described light field data and two-dimensional/three-dimensional image data are obtained by, for example, a light field camera. However, the present invention is not limited thereto, and other types of cameras may be used as long as necessary data can be acquired. The above spectral data is, for example, near-infrared spectroscopy data. Thereafter, for example, the image data 118 is transmitted to the image pre-processing module 110 for pre-processing (step S102). The pre-processing may include performing an array transformation, a color correction, and a calibration on the primary image data 118.

經過前處理之後的影像資料118例如是同時傳輸至特徵比對模組112以及體積估算模組114,以進行後續比對分析,但並不限於此,亦可先後將影像資料118傳輸至特徵比對模組112以及體積估算模組114。 The pre-processed image data 118 is simultaneously transmitted to the feature comparison module 112 and the volume estimation module 114 for subsequent comparison analysis, but is not limited thereto, and the image data 118 may be sequentially transmitted to the feature ratio. The module 112 and the volume estimation module 114 are provided.

接下來,進行步驟S2,根據待測物身分辨識數據在資料庫進行比對而取得待測物的身分。以下將詳細說明步驟S2的流程。 Next, in step S2, the identity of the object to be tested is obtained by performing comparison in the database according to the identification data of the object to be tested. The flow of step S2 will be described in detail below.

首先，例如是將經前處理的影像資料118傳輸至影像比對單元112a以進行影像比對(步驟S201)。影像比對單元112a可先對影像資料118中的二維影像進行影像特徵的擷取。接下來，影像比對單元112a可對所得影像特徵進行PCA(Principal component analysis)降維處理，再與資料庫116中的資料樣本特徵進行比對，若存在完全符合或最接近的資料樣本，則確認取得待測物的身分，例如是獲得待測物身分序號120。然而，本發明並不限於此，影像比對單元112a亦可視實際需求而針對待測物的三維影像進行處理及比對，或者是針對待測物的二維及三維影像均進行比對，以取得待測物的身分。 First, the pre-processed image data 118 is transmitted, for example, to the image comparison unit 112a for image comparison (step S201). The image comparison unit 112a may first extract image features from the two-dimensional image in the image data 118. Next, the image comparison unit 112a may apply PCA (principal component analysis) dimensionality reduction to the extracted features and compare them against the sample features in the database 116; if a fully matching or closest sample exists, the identity of the object to be tested is confirmed, for example by obtaining the object identity serial number 120. The invention is not limited thereto: depending on actual needs, the image comparison unit 112a may instead process and compare the three-dimensional image of the object, or compare both the two-dimensional and three-dimensional images, to obtain the identity of the object.
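
A minimal sketch of this matching step is given below, assuming the database stores one feature vector per known food sample; the number of PCA components and the similarity threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_matcher(db_features, db_ids, n_components=32):
    """db_features: (num_samples, feature_dim) image features of known foods;
    db_ids: the matching identity serial numbers. Both are assumed to come from database 116."""
    pca = PCA(n_components=n_components).fit(db_features)
    db_reduced = pca.transform(db_features)

    def match(query_feature, min_similarity=0.9):
        """Return the closest food id, or None so that spectral/manual steps can follow."""
        q = pca.transform(np.asarray(query_feature).reshape(1, -1))
        # cosine similarity between the query and every database sample
        sims = (db_reduced @ q.T).ravel() / (
            np.linalg.norm(db_reduced, axis=1) * np.linalg.norm(q) + 1e-12)
        best = int(np.argmax(sims))
        return db_ids[best] if sims[best] >= min_similarity else None

    return match
```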

在上述影像比對的流程中,當進行比對的樣本特徵與現有資料庫中的樣本特徵相似度過低時,並無法根據上述影像比對(步驟S201)取得待測物的身分,此時則需進一步進行光譜比對(步驟S202)。 In the above process of image comparison, when the sample feature of the comparison is too low in similarity with the sample feature in the existing database, the identity of the object to be tested cannot be obtained according to the image comparison (step S201). Further, spectral alignment is required (step S202).

此外,當影像比對的結果包含了兩種以上的身分時,亦可能有需要進行光譜比對。舉例來說,對於外型或顏色類似的食物(如薯條與起士條),僅藉由影像比對可能尚無法區隔,此時,就需要進一步利用光譜特徵來進行待測物的辨識。 In addition, when the results of the image comparison contain more than two types of identity, there may be a need for spectral alignment. For example, for foods with similar appearance or color (such as French fries and cheese bars), it may not be possible to distinguish only by image comparison. In this case, further use of spectral features is needed to identify the object to be tested. .

另一方面，當待測物為需要進一步確認成分的混合食物時，亦可能需要進行光譜比對，以取得更精確的結果。例如，進行影像比對的結果初步確認待測物為炒飯，但炒飯中可能同時包括含蛋白質的肉類以及含碳水化合物的玉米、青豆仁及米飯等，在此種情況下，亦需要藉由進行光譜的比對，以進一步確認炒飯中的組成。 On the other hand, when the object to be tested is a mixed food whose ingredients need further confirmation, spectral comparison may also be required to obtain a more accurate result. For example, the image comparison may preliminarily identify the object as fried rice, but the fried rice may contain both protein-rich meat and carbohydrate-rich ingredients such as corn, green peas and rice; in such a case, spectral comparison is also needed to further confirm the composition of the fried rice.

具體來說，可由影像比對單元112a發出特定訊號通知系統，以將影像資料118中由光譜相機102b取得的高光譜影像(hyperspectral image)資料傳輸至光譜比對單元112b進行光譜比對(步驟S202)。上述高光譜影像例如是待測物的近紅外線光譜影像，但不限於此。光譜比對單元112b可先將高光譜影像資料轉換為N維空間(N-dimension space)，再進行凸面最佳化(convex optimization)後，與資料庫116中樣本之光譜特徵加權值進行比對，若存在完全符合或最接近的資料樣本，則確認取得待測物的身分，獲得待測物身分序號120。 Specifically, the image comparison unit 112a may issue a specific signal to notify the system to transmit the hyperspectral image data acquired by the spectral camera 102b in the image data 118 to the spectral comparison unit 112b for spectral comparison (step S202). The hyperspectral image is, for example, a near-infrared spectral image of the object to be tested, but is not limited thereto. The spectral comparison unit 112b may first convert the hyperspectral image data into an N-dimension space, perform convex optimization, and then compare the result against the weighted spectral features of the samples in the database 116; if a fully matching or closest sample exists, the identity of the object is confirmed and the object identity serial number 120 is obtained.
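
The patent does not spell out the convex optimization it uses. As one plausible stand-in, the sketch below scores each candidate food by how well its reference spectra reconstruct the query spectrum under a non-negative least-squares fit (a convex problem); the reference matrices and the scoring rule are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def spectral_match(pixel_spectrum, reference_spectra):
    """Score candidate foods by the residual of a convex (NNLS) reconstruction.

    pixel_spectrum: (bands,) mean NIR spectrum of the segmented region.
    reference_spectra: dict mapping food id -> (bands, n_refs) reference matrix.
    Returns the food id with the smallest residual. Illustrative only.
    """
    best_id, best_residual = None, np.inf
    for food_id, refs in reference_spectra.items():
        weights, residual = nnls(refs, pixel_spectrum)  # non-negative abundances
        if residual < best_residual:
            best_id, best_residual = food_id, residual
    return best_id
```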

在仍無法藉由上述光譜比對(步驟S202)取得待測物的身分時，可進一步透過人工判定(步驟S203)的方式取得待測物的身分(步驟S204)。舉例來說，在無法取得待測物身分時，可由光譜比對單元112b發送特定訊號通知系統，並將影像資料118傳送至操作者，而由操作者協助進行影像辨識及歸類，再透過資料輸入單元112c而以語音或文字輸入的方式來指定待測物的身分，藉此取得待測物身分序號120。上述操作者例如是位於後端的具備食品營養知識之專業人員。 If the identity of the object to be tested still cannot be obtained by the spectral comparison (step S202), it can be further obtained by manual determination (step S203) (step S204). For example, when the identity cannot be obtained, the spectral comparison unit 112b may send a specific signal to notify the system and transmit the image data 118 to an operator, who assists with image recognition and classification and then specifies the identity of the object by voice or text input through the data input unit 112c, thereby obtaining the object identity serial number 120. The operator is, for example, a back-end professional with food and nutrition expertise.

此外，亦可透過使用者協助來進行待測物身分的指定。舉例來說，在無法取得待測物身分時，可藉由光譜比對單元112b發送特定訊號至遠端裝置108，以通知位於前端的使用者進行輔助特徵資料的輸入。上述輔助特徵資料例如是食物名稱、外形、顏色、氣味或成分等可供專業人員進行辨識的資訊。在使用者透過語音或文字輸入的方式傳回待測物的特徵後，系統可再通知後端的操作者進行待測物的身分的指定。此時，除了影像資料外，操作者還可一併參考使用者所輸入的輔助特徵資料來進行待測物的身分指定，藉此給定待測物身分序號120。或者，使用者也可自行透過語音或文字指定待測物的身分，藉此取得待測物身分序號120。 In addition, the identity of the object to be tested can be specified with the user's assistance. For example, when the identity cannot be obtained, the spectral comparison unit 112b may send a specific signal to the remote device 108 to ask the front-end user to input auxiliary feature data. The auxiliary feature data is, for example, information that professionals can use for identification, such as the food's name, shape, colour, smell or ingredients. After the user returns the features of the object by voice or text input, the system can notify the back-end operator to specify the identity of the object. In this case, besides the image data, the operator can also refer to the auxiliary feature data entered by the user when specifying the identity, thereby assigning the object identity serial number 120. Alternatively, the user may directly specify the identity of the object by voice or text, thereby obtaining the object identity serial number 120.

此外,在透過上述人工判定方式取得待測物的身分後,可進一步將作為上述待測物的身分辨識數據加入資料庫(步驟S205)。例如,可在資料庫116中增加同一食物的新辨識特徵,或者建立新種類的食物之辨識特徵,以提高未來進行辨識之成功率。 Further, after the identity of the object to be tested is obtained by the manual determination method, the identity identification data as the object to be tested can be further added to the database (step S205). For example, new identification features of the same food may be added to the database 116, or identification features of new types of food may be created to increase the success rate of identification in the future.

在進行上述步驟S2時,可同時進行步驟S3,根據光場數據估算待測物的體積。然而,本領域中具通常知識者應理解的是,步驟S2與步驟S3的先後次序並無特別限制,只要能夠順利取得待測物的體積以及身分資料即可。 When the above step S2 is performed, step S3 may be simultaneously performed to estimate the volume of the object to be tested based on the light field data. However, those skilled in the art should understand that the order of steps S2 and S3 is not particularly limited as long as the volume of the object to be tested and the identity data can be obtained smoothly.

在步驟S3中，進行光場數據分析(步驟S301)而獲得待測物體積數據122。具體來說，例如可將已在影像前處理模組110中經過顏色修正以及校正處理的光場數據傳送至體積估算模組114，並在體積估算模組114中進行影像深度估算程序以建立待測物的深度圖(depth map)，再結合由光譜比對單元所得的特徵標示影像資料，以進行體積估算(volume estimation)，從而取得待測物體積數據122。 In step S3, the light field data is analysed (step S301) to obtain the object volume data 122. Specifically, the light field data that has undergone colour correction and calibration in the image pre-processing module 110 can be transmitted to the volume estimation module 114, where an image depth estimation procedure is performed to build a depth map of the object to be tested; this is then combined with the feature-labelled image data obtained by the spectral comparison unit to perform volume estimation, thereby obtaining the object volume data 122.
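
The patent only names an "image depth estimation procedure". One common way to obtain a depth map from light field data is to treat two sub-aperture views as a stereo pair, as sketched below with OpenCV block matching; the view selection, baseline and focal length are assumptions, and this is only one plausible stand-in for the step described here.

```python
import cv2
import numpy as np

def depth_map_from_subapertures(left_view_gray, right_view_gray,
                                baseline_cm=0.5, focal_px=800.0):
    """Estimate a depth map from two sub-aperture views of the light field.

    left_view_gray / right_view_gray: uint8 grayscale views of the same size.
    Returns depth via the stereo model depth = baseline * focal / disparity.
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_view_gray, right_view_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # invalid matches
    return baseline_cm * focal_px / disparity     # larger disparity means a closer surface
```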

圖4是依照本發明一實施例所繪示的估算待測物體積的示意圖。請參照圖4，在欲藉由光場相機之感應器126對待測物124進行體積估算時，體積估算模組114例如是藉由下式(1)來進行待測物124的體積(V)估算：V=fVol(K,θ,D,H)............式(1) FIG. 4 is a schematic diagram of estimating the volume of an object to be tested according to an embodiment of the invention. Referring to FIG. 4, when the volume of the object 124 is to be estimated by means of the sensor 126 of the light field camera, the volume estimation module 114 estimates the volume (V) of the object 124, for example, by the following formula (1): V=fVol(K,θ,D,H)............(1)

其中，K表示待測物124之平均截面積(圖4中以剖面線填滿的區域)；θ表示由感應器126的陀螺儀(gyroscope)對待測物124所取得的夾角；D表示感應器126與待測物124之間的距離；H則表示待測物124的高度，H=cosθ‧D。 where K denotes the average cross-sectional area of the object 124 (the hatched region in FIG. 4); θ denotes the angle obtained by the gyroscope of the sensor 126 with respect to the object 124; D denotes the distance between the sensor 126 and the object 124; and H denotes the height of the object 124, H=cosθ‧D.

此外，上述平均截面積K例如是由光譜比對單元所傳回的特徵標示影像資料進行數值轉換而得；θ、D例如是由光場相機之感應器所測得，H則可由θ、D求得。 The average cross-sectional area K is obtained, for example, by numerically converting the feature-labelled image data returned by the spectral comparison unit; θ and D are measured, for example, by the sensor of the light field camera, and H can then be derived from θ and D.

圖5是依照本發明另一實施例所繪示的估算待測物體積的示意圖,其中示出利用光場相機128對盛裝在餐具130內部的待測物132進行體積估算的示意圖。 FIG. 5 is a schematic diagram of estimating the volume of the object to be tested according to another embodiment of the present invention, and showing a volume estimation of the object to be tested 132 contained inside the dish 130 by using the light field camera 128.

請參考圖5，在對盛裝在餐具130內部的待測物132進行體積估算時，可利用下式(2)來進行待測物132的體積(Vol)估算：Vol=∬[z0(x,y)-z(x,y)]dxdy............式(2)其中，如圖5所示，若以餐具130的開口緣部作為參考平面而設定出X、Y及Z方向，則可以z0(x,y)表示於Z方向上由光場相機128至餐具130底部的距離(背景值)，並以z(x,y)表示於Z方向上由光場相機128至待測物132表面的距離(前景值)。所屬技術領域中具通常知識者應了解上述背景值與前景值可藉由對光場相機128所截取的光場數據進行分析換算而獲得。將上述背景值減去前景值並進行二重積分，即可獲得待測物132體積的估計值。藉此，能夠根據餐具種類來進行參數修正，以更準確取得待測物體積。另外，對於不同形狀、結構或大小的餐具，亦可先設計好不同的參數，以利於快速進行待測物體積的估算。藉由上述方式，即使待測物為盛放在餐具中，也能夠有效的進行體積估算。 Referring to FIG. 5, when estimating the volume of the object 132 held inside the dish 130, the volume (Vol) of the object 132 can be estimated by the following formula (2): Vol=∬[z0(x,y)-z(x,y)]dxdy............(2). As shown in FIG. 5, if the X, Y and Z directions are defined with the opening rim of the dish 130 as the reference plane, then z0(x,y) denotes the distance in the Z direction from the light field camera 128 to the bottom of the dish 130 (the background value), and z(x,y) denotes the distance in the Z direction from the light field camera 128 to the surface of the object 132 (the foreground value). Those of ordinary skill in the art will understand that the background and foreground values can be obtained by analysing and converting the light field data captured by the light field camera 128. Subtracting the foreground value from the background value and performing the double integration yields an estimate of the volume of the object 132. In this way, parameters can be corrected according to the type of dish so that the volume of the object is obtained more accurately. Moreover, different parameters can be designed in advance for dishes of different shapes, structures or sizes, to facilitate rapid volume estimation. By the above approach, the volume can be estimated effectively even when the object is held in a dish.
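
In discrete form, formula (2) amounts to summing the depth difference over the pixels labelled as food and scaling by the area each pixel covers on the reference plane. The sketch below assumes two aligned depth maps (empty dish as background, filled dish as foreground) and a known per-pixel area; these inputs are illustrative assumptions.

```python
import numpy as np

def volume_from_depth_maps(background_depth, foreground_depth, pixel_area_cm2, food_mask):
    """Discrete version of Vol = ∬ [z0(x,y) - z(x,y)] dx dy.

    background_depth: (H, W) distance from camera to the dish bottom, z0(x, y)
    foreground_depth: (H, W) distance from camera to the food surface, z(x, y)
    pixel_area_cm2:   area on the reference plane covered by one pixel (assumed
                      known from calibration or dish-specific parameters)
    food_mask:        (H, W) boolean mask of pixels labelled as food
    """
    height = np.clip(background_depth - foreground_depth, 0.0, None)  # z0 - z, no negatives
    return float(np.sum(height[food_mask]) * pixel_area_cm2)          # cm^3 if depths are in cm
```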

在取得待測物的體積以及身分資料後，即進行步驟S4，利用待測物的身分取得待測物的碳水化合物換算數據，並利用上述待測物的碳水化合物換算數據以及上述待測物的體積而計算出上述待測物的碳水化合物含量。具體來說，例如是將由步驟S2以及步驟S3所取得的待測物體積數據122以及待測物身分序號120傳送至成分整合模組106，以進行成分整合運算(步驟S401)。上述整合運算例如是先根據待測物身分序號120於資料庫116中取得待測物的體積/重量轉換率後，結合待測物體積數據122而算出待測物的重量，接下來，再根據待測物身分序號120於資料庫116中取得待測物的碳水化合物轉換率，並利用待測物的重量與碳水化合物轉換率計算出待測物中的碳水化合物含量。 After the volume and the identity of the object to be tested have been obtained, step S4 is performed: the identity of the object is used to obtain its carbohydrate conversion data, and the carbohydrate content of the object is calculated from the carbohydrate conversion data and the volume of the object. Specifically, the object volume data 122 obtained in step S3 and the object identity serial number 120 obtained in step S2 are transmitted to the component integration module 106 for the component integration computation (step S401). In this computation, for example, the volume/weight conversion rate of the object is first retrieved from the database 116 according to the identity serial number 120 and combined with the volume data 122 to calculate the weight of the object; the carbohydrate conversion rate of the object is then retrieved from the database 116 according to the identity serial number 120, and the carbohydrate content of the object is calculated from its weight and the carbohydrate conversion rate.
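
This final computation reduces to two database look-ups and two multiplications. The sketch below assumes the database exposes, per identity serial number, a density (the volume/weight conversion rate) and a carbohydrate fraction; the record layout and the example numbers are assumptions, not values taken from the patent.

```python
# Illustrative food records keyed by identity serial number (values are assumptions):
# density_g_per_cm3 is the volume/weight conversion rate; carb_g_per_g is the
# carbohydrate conversion rate (grams of carbohydrate per gram of food).
FOOD_DB = {
    1001: {"name": "cooked white rice", "density_g_per_cm3": 0.8, "carb_g_per_g": 0.28},
}

def carbohydrate_content(identity_serial_no, volume_cm3, food_db=FOOD_DB):
    """Step S401: weight = volume x volume/weight rate, carbs = weight x carb rate."""
    record = food_db[identity_serial_no]
    weight_g = volume_cm3 * record["density_g_per_cm3"]
    return weight_g * record["carb_g_per_g"]

# Example: a 150 cm^3 serving identified as record 1001
print(f"{carbohydrate_content(1001, 150.0):.1f} g of carbohydrate")
```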

在獲得待測物中的碳水化合物含量資訊之後,系統例如是將結果輸出至遠端裝置108,並藉由語音、文字或圖像等方式將結果提供給使用者作為參考。 After obtaining the carbohydrate content information in the test object, the system outputs the result to the remote device 108, for example, and provides the result to the user as a reference by means of voice, text or image.

綜上所述，藉由本發明所提供之估算碳水化合物含量之系統與方法，能夠透過非接觸式且非破壞式的方式確認待測物之種類，並且得到其碳水化合物含量之數據，從而實現讓使用者(例如：糖尿病患者)能夠容易地進行糖分攝取管理的互動式智慧服務系統。 In summary, the system and method for estimating carbohydrate content provided by the present invention can identify the type of an object to be tested in a non-contact and non-destructive manner and obtain data on its carbohydrate content, thereby realizing an interactive smart service system that allows users (for example, diabetic patients) to easily manage their sugar intake.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed in the above embodiments, it is not intended to limit the present invention, and any one of ordinary skill in the art can make some changes and refinements without departing from the spirit and scope of the present invention. The scope of the invention is defined by the scope of the appended claims.

S1、S2、S3、S4‧‧‧步驟 S1, S2, S3, S4‧‧‧ steps

Claims (17)

1. 一種估算碳水化合物含量之系統，包括：一影像資料擷取模組，擷取一待測物的一影像資料，該影像資料包含一光場數據以及一待測物身分辨識數據；一數據處理比對模組，耦接於該影像資料擷取模組，且該數據處理比對模組根據該待測物身分辨識數據在一資料庫進行比對而取得該待測物的身分，並根據該光場數據估算該待測物的體積，其中，在透過人工判定方式取得該待測物的身分後，將該待測物身分辨識數據加入該資料庫中；以及一成分整合模組，耦接於該數據處理比對模組，且該成分整合模組利用該待測物的身分取得該待測物的碳水化合物換算數據，並利用該待測物的碳水化合物換算數據以及該待測物的體積而計算出該待測物的碳水化合物含量。 A system for estimating carbohydrate content, comprising: an image data capture module that captures image data of an object to be tested, the image data including light field data and identity identification data of the object; a data processing comparison module, coupled to the image data capture module, that obtains the identity of the object by comparing the identity identification data against a database and estimates the volume of the object from the light field data, wherein, after the identity of the object is obtained by manual determination, the identity identification data of the object is added to the database; and a component integration module, coupled to the data processing comparison module, that uses the identity of the object to obtain carbohydrate conversion data for the object and calculates the carbohydrate content of the object from the carbohydrate conversion data and the volume of the object.

2. 如申請專利範圍第1項所述的估算碳水化合物含量之系統，其中，該待測物身分辨識數據包括該待測物之二維/三維影像數據及/或該待測物之光譜數據。 The system for estimating carbohydrate content according to claim 1, wherein the identity identification data includes two-dimensional/three-dimensional image data of the object to be tested and/or spectral data of the object to be tested.

3. 如申請專利範圍第1項所述的估算碳水化合物含量之系統，其中，該數據處理比對模組包括：一影像前處理模組，對該光場數據以及該待測物身分辨識數據進行一前處理；一特徵比對模組，耦接於該影像前處理模組，該特徵比對模組根據該待測物身分辨識數據在該資料庫進行比對而取得該待測物的身分；一體積估算模組，耦接於該特徵比對模組，根據該光場數據估算該待測物的體積。 The system for estimating carbohydrate content according to claim 1, wherein the data processing comparison module comprises: an image pre-processing module that pre-processes the light field data and the identity identification data of the object to be tested; a feature comparison module, coupled to the image pre-processing module, that obtains the identity of the object by comparing the identity identification data against the database; and a volume estimation module, coupled to the feature comparison module, that estimates the volume of the object from the light field data.

4. 如申請專利範圍第3項所述的估算碳水化合物含量之系統，其中，該影像資料擷取模組包括一光場相機，且該體積估算模組是藉由下式(1)來進行該待測物的體積(V)估算：V=fVol(K,θ,D,H)............式(1)其中，K表示該待測物之平均截面積；θ表示該光場相機的一感應器的陀螺儀對該待測物所取得的夾角；D表示該感應器與該待測物之間的距離；H則表示該待測物的高度，H=cosθ‧D。 The system for estimating carbohydrate content according to claim 3, wherein the image data capture module includes a light field camera, and the volume estimation module estimates the volume (V) of the object to be tested by the following formula (1): V=fVol(K,θ,D,H)............(1), where K denotes the average cross-sectional area of the object; θ denotes the angle obtained by the gyroscope of a sensor of the light field camera with respect to the object; D denotes the distance between the sensor and the object; and H denotes the height of the object, H=cosθ‧D.
5. 如申請專利範圍第3項所述的估算碳水化合物含量之系統，其中，該影像資料擷取模組包括一光場相機，該待測物為盛裝在一餐具內部，且該體積估算模組是藉由下式(2)來進行該待測物的體積(Vol)估算：Vol=∬[z0(x,y)-z(x,y)]dxdy............式(2)其中，以該餐具的開口緣部作為參考平面而設定出X、Y及Z方向，z0(x,y)表示於Z方向上由該光場相機至該餐具底部的距離，z(x,y)表示於Z方向上由該光場相機至該待測物表面的距離。 The system for estimating carbohydrate content according to claim 3, wherein the image data capture module includes a light field camera, the object to be tested is held inside a dish, and the volume estimation module estimates the volume (Vol) of the object by the following formula (2): Vol=∬[z0(x,y)-z(x,y)]dxdy............(2), where the X, Y and Z directions are defined with the opening rim of the dish as the reference plane, z0(x,y) denotes the distance in the Z direction from the light field camera to the bottom of the dish, and z(x,y) denotes the distance in the Z direction from the light field camera to the surface of the object.

6. 如申請專利範圍第3項所述的估算碳水化合物含量之系統，其中，該特徵比對模組包括：一影像比對單元，根據該待測物之二維/三維影像數據取得該待測物的身分；一光譜比對單元，根據該待測物之光譜數據取得該待測物的身分；以及一資料輸入單元，供一操作者輸入該待測物的身分。 The system for estimating carbohydrate content according to claim 3, wherein the feature comparison module comprises: an image comparison unit that obtains the identity of the object to be tested from its two-dimensional/three-dimensional image data; a spectral comparison unit that obtains the identity of the object from its spectral data; and a data input unit for an operator to input the identity of the object.

7. 如申請專利範圍第6項所述的估算碳水化合物含量之系統，其中，該操作者是以語音或文字方式輸入該待測物的身分。 The system for estimating carbohydrate content according to claim 6, wherein the operator inputs the identity of the object to be tested by voice or text.

8. 如申請專利範圍第6項所述的估算碳水化合物含量之系統，其中，在該體積估算模組中進行影像深度估算程序以建立該待測物的深度圖，再結合由該光譜比對單元所得的特徵標示影像資料，以估算該待測物的體積。 The system for estimating carbohydrate content according to claim 6, wherein an image depth estimation procedure is performed in the volume estimation module to build a depth map of the object to be tested, which is then combined with the feature-labelled image data obtained by the spectral comparison unit to estimate the volume of the object.

9. 如申請專利範圍第1項所述的估算碳水化合物含量之系統，更包括一遠端裝置，至少與該成分整合模組耦接，接收由該成分整合模組所計算出的結果。 The system for estimating carbohydrate content according to claim 1, further comprising a remote device coupled at least to the component integration module to receive the result calculated by the component integration module.
10. 一種估算碳水化合物含量之方法，包括如下步驟：擷取一待測物的影像資料，該影像資料包含一光場數據以及一待測物身分辨識數據；根據該待測物身分辨識數據在一資料庫進行比對而取得該待測物的身分，其中，在透過人工判定方式取得該待測物的身分後，將該待測物身分辨識數據加入該資料庫中；根據該光場數據估算該待測物的體積；以及利用該待測物的身分取得該待測物的碳水化合物換算數據，並利用該待測物的碳水化合物換算數據以及該待測物的體積而計算出該待測物的碳水化合物含量。 A method for estimating carbohydrate content, comprising the following steps: capturing image data of an object to be tested, the image data including light field data and identity identification data of the object; obtaining the identity of the object by comparing the identity identification data against a database, wherein, after the identity of the object is obtained by manual determination, the identity identification data of the object is added to the database; estimating the volume of the object from the light field data; and using the identity of the object to obtain carbohydrate conversion data for the object, and calculating the carbohydrate content of the object from the carbohydrate conversion data and the volume of the object.

11. 如申請專利範圍第10項所述的估算碳水化合物含量之方法，其中，該待測物身分辨識數據包括該待測物之二維/三維影像數據及/或該待測物之光譜數據。 The method for estimating carbohydrate content according to claim 10, wherein the identity identification data includes two-dimensional/three-dimensional image data of the object to be tested and/or spectral data of the object to be tested.

12. 如申請專利範圍第11項所述的估算碳水化合物含量之方法，其中，該待測物之二維/三維影像數據是由一光場相機所取得。 The method for estimating carbohydrate content according to claim 11, wherein the two-dimensional/three-dimensional image data of the object to be tested is obtained by a light field camera.

13. 如申請專利範圍第11項所述的估算碳水化合物含量之方法，其中，根據該待測物身分辨識數據在該資料庫進行比對而取得該待測物的身分之步驟包括：先根據該待測物之二維/三維影像數據取得該待測物的身分，在無法根據該待測物之二維/三維影像數據取得該待測物的身分時，則根據該待測物之光譜數據取得該待測物的身分。 The method for estimating carbohydrate content according to claim 11, wherein the step of obtaining the identity of the object to be tested by comparison against the database comprises: first attempting to obtain the identity of the object from its two-dimensional/three-dimensional image data, and, when the identity cannot be obtained from the two-dimensional/three-dimensional image data, obtaining the identity of the object from its spectral data.

14. 如申請專利範圍第13項所述的估算碳水化合物含量之方法，其中，在無法根據該待測物的二維/三維影像數據或光譜數據取得該待測物的身分時，透過人工判定方式取得該待測物的身分。 The method for estimating carbohydrate content according to claim 13, wherein, when the identity of the object to be tested cannot be obtained from its two-dimensional/three-dimensional image data or its spectral data, the identity of the object is obtained by manual determination.

15. 如申請專利範圍第10項所述的估算碳水化合物含量之方法，其中，根據該光場數據估算該待測物的體積之步驟包括：進行影像深度估算程序以建立該待測物的深度圖。 The method for estimating carbohydrate content according to claim 10, wherein the step of estimating the volume of the object to be tested from the light field data comprises: performing an image depth estimation procedure to build a depth map of the object.
16. 如申請專利範圍第10項所述的估算碳水化合物含量之方法，其中，在利用一光場相機擷取該待測物的影像資料的情況下，根據該光場數據估算該待測物的體積之步驟包括：藉由下式(1)來進行該待測物的體積(V)估算：V=fVol(K,θ,D,H)............式(1)其中，K表示該待測物之平均截面積；θ表示該光場相機的一感應器的陀螺儀對該待測物所取得的夾角；D表示該感應器與該待測物之間的距離；H則表示該待測物的高度，H=cosθ‧D。 The method for estimating carbohydrate content according to claim 10, wherein, where the image data of the object to be tested is captured with a light field camera, the step of estimating the volume of the object from the light field data comprises estimating the volume (V) of the object by the following formula (1): V=fVol(K,θ,D,H)............(1), where K denotes the average cross-sectional area of the object; θ denotes the angle obtained by the gyroscope of a sensor of the light field camera with respect to the object; D denotes the distance between the sensor and the object; and H denotes the height of the object, H=cosθ‧D.

17. 如申請專利範圍第10項所述的估算碳水化合物含量之方法，其中，在利用一光場相機擷取該待測物的影像資料，且該待測物為盛裝在一餐具內部的情況下，根據該光場數據估算該待測物的體積之步驟包括：藉由下式(2)來進行該待測物的體積(Vol)估算：Vol=∬[z0(x,y)-z(x,y)]dxdy............式(2)其中，以該餐具的開口緣部作為參考平面而設定出X、Y及Z方向，z0(x,y)表示於Z方向上由該光場相機至該餐具底部的距離，z(x,y)表示於Z方向上由該光場相機至該待測物表面的距離。 The method for estimating carbohydrate content according to claim 10, wherein, where the image data of the object to be tested is captured with a light field camera and the object is held inside a dish, the step of estimating the volume of the object from the light field data comprises estimating the volume (Vol) of the object by the following formula (2): Vol=∬[z0(x,y)-z(x,y)]dxdy............(2), where the X, Y and Z directions are defined with the opening rim of the dish as the reference plane, z0(x,y) denotes the distance in the Z direction from the light field camera to the bottom of the dish, and z(x,y) denotes the distance in the Z direction from the light field camera to the surface of the object.
TW103145951A 2014-12-27 2014-12-27 System and method for estimating carbohydrate content TWI551861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103145951A TWI551861B (en) 2014-12-27 2014-12-27 System and method for estimating carbohydrate content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103145951A TWI551861B (en) 2014-12-27 2014-12-27 System and method for estimating carbohydrate content

Publications (2)

Publication Number Publication Date
TW201623960A TW201623960A (en) 2016-07-01
TWI551861B true TWI551861B (en) 2016-10-01

Family

ID=56984672

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103145951A TWI551861B (en) 2014-12-27 2014-12-27 System and method for estimating carbohydrate content

Country Status (1)

Country Link
TW (1) TWI551861B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713487B2 (en) * 2018-06-29 2020-07-14 Pixart Imaging Inc. Object determining system and electronic apparatus applying the object determining system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140368639A1 (en) * 2013-06-18 2014-12-18 Xerox Corporation Handheld cellular apparatus for volume estimation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140368639A1 (en) * 2013-06-18 2014-12-18 Xerox Corporation Handheld cellular apparatus for volume estimation

Also Published As

Publication number Publication date
TW201623960A (en) 2016-07-01

Similar Documents

Publication Publication Date Title
JP7518878B2 (en) Health Tracking Devices
US11222422B2 (en) Hyperspectral imaging sensor
CN103542935B (en) For the Miniaturized multi-spectral that real-time tissue oxygenation is measured
Pouladzadeh et al. You are what you eat: So measure what you eat!
CN104778374A (en) Automatic dietary estimation device based on image processing and recognizing method
Almaghrabi et al. A novel method for measuring nutrition intake based on food image
US20230326016A1 (en) Artificial intelligence for detecting a medical condition using facial images
Makhsous et al. Dietsensor: Automatic dietary intake measurement using mobile 3D scanning sensor for diabetic patients
CN114360690A (en) Method and system for managing diet nutrition of chronic disease patient
CN110366731A (en) System, method and computer program for image capture for guided meal
KR20200126288A (en) A method, server, device and program for measuring amount of food using potable device
Navarro-Cabrera et al. Machine vision model using nail images for non-invasive detection of iron deficiency anemia in university students
TWI551861B (en) System and method for estimating carbohydrate content
KR102354702B1 (en) Urine test method using deep learning
Sadeq et al. Smartphone-based calorie estimation from food image using distance information
US10249214B1 (en) Personal wellness monitoring system
CN114359299A (en) Diet segmentation method and diet nutrition management method for chronic disease patients
Manandhar et al. SmartCal: Calorie Estimation of Local Nepali Cuisine with Deep Learning-Powered Food Detection
Konstantakopoulos et al. Weight Estimation of Mediterranean Food Images using Random Forest Regression Algorithm
CN115530773A (en) Cardiovascular disease evaluation and prevention system based on food intake of patient
Mishra et al. Food Detection & Calorie Estimation of Food & Beverages using Deep Learning.
AKPA Food Weight Estimation Based on Image Processing for Dietary Assessment
Chang Tongue image difference detection
WO2025228519A1 (en) Method for determining an anaemia condition
CN120452695A (en) Method, device and equipment for intelligent monitoring and reminder of patient eating behavior