TWI897267B - Method and device for drug identification using deep learning object detection technology - Google Patents
- Publication number
- TWI897267B (application TW113107479A)
- Authority
- TW
- Taiwan
- Prior art keywords
- drug
- deep learning
- image
- object detection
- algorithm
Landscapes
- Image Analysis (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
A method and device that use deep learning object detection technology to assist in drug identification. The device includes a camera that acquires images for deep learning and object analysis, and an artificial intelligence server that performs image analysis and comparison and feeds probabilities back to the user. The device is intended to be connected to a hospital information system (HIS): during operation, a pharmacist calls up prescription data in the HIS, which sends the data to the AI server; at the same time, the AI server begins capturing the camera's video of the dispensing process and dynamically compares the camera images against the drug image database on the server. The comparison results and probabilities are transmitted to the HIS interface, where they are reported back to help the pharmacist judge whether the drugs are correct. The pharmacist then provides appropriate feedback through the HIS, which is used to retrain the AI server.
Description
This invention relates to a method and device that use deep learning object detection technology to assist in drug identification, and in particular to assisting judgment during the drug dispensing process so as to reduce the incidence of human error.
Medication accuracy is one of the most important monitoring indicators in pharmacy operations, but the heavy workload makes near misses more likely, and pharmacy departments have long strived to improve dispensing accuracy. Moreover, medication errors by medical personnel are a common source of medical disputes and, in severe cases, can even endanger patients' lives.
According to Article 22 of Chapter 1 of the Good Pharmaceutical Dispensing Practice guidelines, "pharmacy personnel shall, when delivering medicines, verify the accuracy of the medicine bag or label contents, the type of medicine, the quantity of medicine, and the prescription instructions." The actor regulated here is the pharmacist; this is not a quality-control or yield constraint. Legal liability rests with the actor and cannot be shifted onto equipment, so all advanced equipment should serve to assist judgment rather than deliver a direct correct-or-incorrect verdict.
Among current patents related to drug identification, Taiwan Patent No. I779596 hands the system primary control over judging drug correctness. However, the result of feature comparison should be a probability, not a bare correct-or-incorrect answer. Moreover, static photography is hard to achieve amid busy medical work, all the more so when every drug must be photographed without overlapping, so this technique may be difficult to apply in actual medical settings. Taiwan Patent No. I730508 likewise hands the system primary control over judging drug correctness, with the same probability issue; furthermore, identifying medicine bags one by one would require substantial changes to current pharmacist dispensing workflows, potentially trading overall efficiency for the mere possibility of drug identification, so it too may be difficult to apply in actual medical settings.
Given that current drug identification relies mainly on static identification, the process conflicts considerably with pharmacy workflows; under the added workload, such a system is difficult to use continuously and maintain sustainably. Conventional approaches therefore cannot meet users' needs in actual use.
The primary objective of this invention is to overcome the above problems encountered in the prior art by providing a method and device that use deep learning object detection technology to assist in drug identification. The method uses the prescription information in an existing Hospital Information System (HIS), takes all drug items on the prescription as the basis for judgment, and provides object match rates to the user. Without affecting the working style or efficiency of pharmacy personnel, it adds an auxiliary verification layer, thereby improving dispensing accuracy, reducing the incidence of medication events in the Taiwan Patient-safety Reporting system (TPR), and using user feedback as the validation benchmark for deep learning so that the device's judgment capability is continuously optimized.
To achieve the above objectives, the present invention is a method for assisting in drug identification using deep learning object detection technology, comprising at least the following steps. Step 1: a pharmacist reads the barcode on a prescription at a HIS computer; the prescription data associated with that prescription is displayed on the computer screen and transmitted to an artificial intelligence server equipped with a deep learning recognition unit. Step 2: the pharmacist places at least one medicine bag, containing at least one drug per the prescription, in the viewing area of a camera and manually inspects all drugs in the bag, while the camera transmits image data of all drugs in the bag to the AI server. Step 3: the AI server uses the prescription data to link to a drug feature database and extract the image feature data of every drug on the prescription; the deep learning recognition unit runs a deep learning algorithm to compare these features against the captured image data, computes a match rate for each drug, and transmits the rates to the HIS, which displays on the computer screen that all drugs in the bag have been compared, together with their match rates. Step 4: the pharmacist uses each drug's match rate as an aid in judging whether the drugs in the bag are correct, completing the review of the prescription data and the at least one medicine bag. Step 5: based on the review, the pharmacist feeds the accuracy of the match rates back through the HIS to the deep learning recognition unit, retraining the deep learning algorithm.
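The five-step flow above can be sketched as a minimal orchestration loop. This is an illustrative sketch only; all function and field names (`verify_dispensing`, `rx_id`, the `model` callable, the 0.5 flag threshold) are hypothetical placeholders, not part of the patented system.

```python
# Minimal sketch of the five-step dispensing-verification flow.
# All names and the flag threshold are hypothetical placeholders.

def verify_dispensing(barcode, camera_frames, feature_db, model):
    # Step 1: the HIS resolves the barcode to prescription data.
    prescription = {"rx_id": barcode, "drugs": feature_db.get(barcode, [])}

    # Steps 2-3: compare camera frames against each prescribed drug's
    # image features and compute a match rate per drug.
    rates = {}
    for drug in prescription["drugs"]:
        # model(frame, drug) -> probability that the drug appears in the frame
        rates[drug] = max(model(f, drug) for f in camera_frames)

    # Step 4: the pharmacist sees the rates as *advisory* values only;
    # low-rate items are flagged, never auto-rejected.
    flagged = [d for d, r in rates.items() if r < 0.5]
    return rates, flagged
```

Step 5 (feedback and retraining) happens outside this loop: the pharmacist's correct/incorrect verdicts become labeled examples for the next training round.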
In the above embodiment of the present invention, the medicine bag carries patient and drug information on its front, while its back is a transparent plastic film; drugs placed inside can thus be clearly photographed by the camera.
In the above embodiment of the present invention, the camera is a high-speed camera with a frame rate of at least 120 FPS.
In the above embodiment of the present invention, in step one the pharmacist uses a barcode reader connected to the HIS to read the barcode on the prescription.
In the above embodiment of the present invention, the image data of all drugs in step two is presented as a single image or as continuous video, depending on the quantity of drugs.
In the above embodiment of the present invention, step three compares a single image, or an entire continuous video, against all drugs on the prescription, determining the probability that each prescribed drug appears in that image or video.
In the above embodiment of the present invention, in step four, when a post-comparison match rate is low, the HIS flags the drug item with the low match rate.
In the above embodiment of the present invention, in step five the pharmacist gives the artificial intelligence server positive or negative feedback according to the review, to retrain the deep learning recognition unit.
In the above embodiment of the present invention, step five further includes the pharmacist providing synchronized feedback to the deep learning recognition unit whenever it misjudges or the appearance of a drug partially changes, so that the unit relearns and updates its feature files.
In the above embodiment of the present invention, the deep learning algorithm includes at least one of an algorithm based on an Auto-Encoder architecture, an algorithm based on a Multi-Stage architecture, an algorithm based on a U-Net architecture, an algorithm based on a GAN architecture, and the U-Net XY-Deblur convolutional neural network algorithm designed specifically for deblurring.
In the above embodiment of the present invention, the U-Net XY-Deblur convolutional neural network algorithm adopts one encoder and two decoders as its architecture, and improves deblurring performance through rotation and parameter sharing.
In the above embodiment of the present invention, the artificial intelligence server uses an object detection algorithm to detect the Region of Interest (ROI) where the drug lies within the image data of the drug to be identified, and applies the YOLACT (You Only Look At CoefficienTs) model to perform instance segmentation on that ROI, computing each anchor's class confidence, bounding box position, and mask coefficients, as well as a mask for every target object in the picture, thereby separating the foreground drug region from the background image. A Siamese Convolutional Neural Network (SCNN) then serves as the recognition architecture for comparison: the two CNNs share the same parameters and use metric learning to raise the discriminative power of the features. The inputs are the image data of the drug to be identified and the image feature data of the prescribed drug; the two CNNs are jointly connected to a connection function that links the two inputs, and finally a cost function computes the similarity between the drug to be identified and the prescribed drug.
In the above embodiment of the present invention, the object detection algorithm used with the YOLACT model includes at least one of Region-based CNN (R-CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, YOLO (You Only Look Once), YOLOv2, YOLOv3, YOLOv7, Single Shot MultiBox Detector (SSD), and RetinaNet.
To achieve the above objectives, the present invention is further a device for assisting in drug identification using deep learning object detection technology, comprising: a camera that films, at a frame rate of at least 120 FPS, the pharmacist's dispensing and double-checking of drugs, used mainly to collect image data of all drugs in at least one medicine bag printed per a prescription; and an artificial intelligence server connected to the camera and to a drug feature database. The AI server has a deep learning recognition unit that receives and stores the image data of all drugs and analyzes and compares the match between the image feature data of a predetermined prescription drug in the drug feature database and the image data of the drug currently awaiting identification. The deep learning recognition unit comprises: a drug imaging and image deblurring module, processed in parallel with the camera, which deblurs the image data of the drug to be identified using a deep learning algorithm; a high-reflection image restoration module, connected to the drug imaging and image deblurring module, which uses a polarizing filter to raise that module's processing speed so that it can keep pace with the pharmacist's working speed; an object detection and instance segmentation module, connected to the drug imaging and image deblurring module, which, before drug comparison, first detects with an object detection algorithm the ROI where the drug lies in the image data and then performs instance segmentation on that ROI with the YOLACT model; and a drug image re-identification module, connected to the object detection and instance segmentation module, which receives the instance-segmented image data in which the foreground drug region has been separated from the background and performs comparison with an SCNN composed of two CNNs as its recognition architecture. The two CNNs share the same parameters and use metric learning to raise the discriminative power of the features; the two CNNs are jointly connected to a connection function that links the image data of the drug to be identified with the image feature data of the prescribed drug, and finally a cost function computes the similarity between the two.
In the above embodiment of the present invention, the deep learning algorithm includes at least one of an algorithm based on an Auto-Encoder architecture, an algorithm based on a Multi-Stage architecture, an algorithm based on a U-Net architecture, an algorithm based on a GAN architecture, and the U-Net XY-Deblur convolutional neural network algorithm designed specifically for deblurring.
In the above embodiment of the present invention, the U-Net XY-Deblur convolutional neural network algorithm adopts one encoder and two decoders as its architecture, and further improves its deblurring performance through rotation.
In the above embodiment of the present invention, the object detection algorithm used with the YOLACT model includes at least one of R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN, YOLO, YOLOv2, YOLOv3, YOLOv7, SSD, and RetinaNet.
In the above embodiment of the present invention, instance segmentation of the ROI uses the YOLACT model, achieving real-time instance segmentation with two parallel sub-network branches. The first sub-network branch is the Prediction Head, which computes each anchor's class confidence, bounding box position, and mask coefficients; the second sub-network branch is the Protonet, which generates a mask for every target object in the picture, thereby separating the foreground drug region from the background image.
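The way YOLACT's two branches combine can be sketched numerically: the Protonet emits k prototype masks and the Prediction Head emits, per instance, a k-dimensional coefficient vector; the instance mask is the sigmoid of their linear combination, thresholded. This is a minimal sketch of that assembly step, with tiny illustrative shapes and random values, not the patented implementation.

```python
import numpy as np

def assemble_mask(prototypes, coeffs, threshold=0.5):
    """Combine Protonet prototypes with one instance's mask coefficients."""
    logits = prototypes @ coeffs              # (H, W) linear combination
    mask = 1.0 / (1.0 + np.exp(-logits))      # sigmoid
    return mask > threshold                   # boolean foreground mask

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 8, 4))   # k = 4 prototype masks over an 8x8 crop
c = rng.normal(size=4)           # coefficients predicted for one instance
m = assemble_mask(P, c)          # per-pixel foreground/background decision
```

Because the per-instance work is just one matrix product and a threshold, this assembly is what lets YOLACT run in real time, which is why the patent relies on it for live dispensing video.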
In the above embodiment of the present invention, the artificial intelligence server receives from a HIS computer the prescription data for the drugs on the prescription, uses that data to link to the drug feature database and extract the image feature data of all drugs on the prescription, runs the deep learning algorithm to compare the image feature data of each predetermined prescription drug against the current image data of the drug to be identified in a match probability analysis, computes the match rate for every drug, and transmits the rates back to the HIS computer for display.
100: Device
1: Computer
11: Screen
2: Camera
21: Viewing area
3: Artificial intelligence server
31: Deep learning recognition unit
311: Drug imaging and image deblurring module
312: High-reflection image restoration module
313: Object detection and instance segmentation module
314: Drug image re-identification module
4: Drug feature database
5: Prescription
6, 6a: Medicine bags
61, 61A, 61B, 61C: Drugs
7: Barcode reader
Z: Pharmacist
Figure 1 is a functional block diagram of a basic implementation of the present invention.
Figure 2 is a schematic diagram of the device architecture and method flow of the first embodiment of the present invention.
Figure 3 is a schematic diagram of the device architecture and method flow of the second embodiment of the present invention.
Figure 4 is a schematic diagram of the present invention feeding the match rate results back to the HIS.
Please refer to Figure 1, a functional block diagram of a basic implementation of the present invention. As shown, the present invention is a device 100 for assisting in drug identification using deep learning object detection technology, comprising a computer 1 connectable to a Hospital Information System (HIS), a camera 2, and an artificial intelligence server 3.
The HIS mentioned above is an existing system that provides medical information such as patient data and prescription data.
The camera 2 films the pharmacist dispensing and double-checking drugs, and is mainly used to collect image data of all drugs 61 in at least one medicine bag 6 printed per a prescription 5. During dynamic imaging, relative motion of the subject or the photographer blurs the image, chiefly because an overly long exposure lets the object keep moving during the exposure and leave smeared object pixels in the frame. The camera 2 of the present invention is therefore a high-speed camera shooting at 120 FPS or above, which effectively reduces exposure time and object-pixel smearing.
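The benefit of the high frame rate can be quantified with back-of-the-envelope arithmetic: worst-case exposure is one frame interval (1/fps seconds), so a point moving across the sensor at some image-plane speed smears over that many pixels. The speed figure below is an illustrative assumption, not a value from the patent.

```python
def max_smear_px(speed_px_per_s, fps):
    # Worst-case exposure is one frame interval (1/fps seconds);
    # a point moving at speed_px_per_s smears across that many pixels.
    return speed_px_per_s / fps

# A medicine bag swept past the lens at an assumed 600 px/s:
assert max_smear_px(600, 30) == 20.0    # 30 FPS: up to 20 px of smear
assert max_smear_px(600, 120) == 5.0    # 120 FPS: 4x less smear
```

Raising the frame rate thus bounds exposure time directly, which is why 120 FPS is specified even before any software deblurring is applied.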
The artificial intelligence server 3 connects the camera 2 with a drug feature database 4. The AI server 3 has a deep learning recognition unit 31, which receives and stores the image data of all drugs 61 and analyzes and compares the match between the image feature data of a predetermined prescription drug in the drug feature database 4 and the image data of the drug 61 currently awaiting identification. The deep learning recognition unit 31 comprises a drug imaging and image deblurring module 311, a high-reflection image restoration module 312, an object detection and instance segmentation module 313, and a drug image re-identification module 314, as follows:
The drug imaging and image deblurring module 311 is processed in parallel with the camera 2 and deblurs the image data of the drug to be identified using a deep learning algorithm. The deep learning algorithm may be any of the following: an algorithm based on an Auto-Encoder architecture, an algorithm based on a Multi-Stage architecture, an algorithm based on a U-Net architecture, an algorithm based on a GAN architecture, or the U-Net XY-Deblur convolutional neural network algorithm designed specifically for deblurring. The U-Net XY-Deblur algorithm adopts one encoder and two decoders as its architecture, and improves deblurring performance through rotation and parameter sharing.
The high-reflection image restoration module 312 is connected to the drug imaging and image deblurring module 311. Because the device must fit the actual conditions of pharmacy practice, the pharmacist's working speed has to be considered: running everything through a deep neural network could leave the device's computation lagging behind the pharmacist. A polarizing filter is therefore used here as a hardware solution, raising the effective processing speed of the drug imaging and image deblurring module 311 so that it can keep pace with the pharmacist.
The object detection and instance segmentation module 313 is connected to the drug imaging and image deblurring module 311. Before drug comparison, an object detection algorithm must first detect the Region of Interest (ROI) where the drug lies in the image data of the drug to be identified; the comparison is then run on that ROI. To keep background interference from affecting the ROI, the ROI undergoes instance segmentation, separating the foreground drug region from the background image. The YOLACT (You Only Look At CoefficienTs) model is used to achieve real-time performance, implementing real-time instance segmentation with two parallel sub-network branches: the first sub-network branch is the Prediction Head, responsible for computing each anchor's class confidence, bounding box position, and mask coefficients; the second sub-network branch is the Protonet, responsible for generating a mask for every target object in the picture. The object detection algorithm may be any of the following: R-CNN, Fast R-CNN, Faster R-CNN, or Mask R-CNN from the Region-based CNN (R-CNN) family; YOLO, YOLOv2, YOLOv3, or YOLOv7 from the YOLO (You Only Look Once) family; the Single Shot MultiBox Detector (SSD); or RetinaNet.
The drug image re-identification module 314 is connected to the object detection and instance segmentation module 313 and receives the instance-segmented image data of the drug to be identified, in which the foreground drug region has been separated from the background. Comparison uses a Siamese Convolutional Neural Network (SCNN) composed of two CNNs as the recognition architecture: the two CNNs share the same parameters and use metric learning to lift the low-level features obtained by the CNNs to high-level features, raising their discriminative power. The two CNNs are then jointly connected to a connection function that links the image data of the drug to be identified with the image feature data of the prescribed drug, and finally a cost function computes the similarity between the drug to be identified and the prescribed drug. The present invention trains the recognition model with an identification objective, which outperforms training by verification; after metric learning and ranking, higher accuracy can be achieved.
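The Siamese comparison described above can be sketched at a toy scale: one embedding function with shared weights is applied to both inputs, a connection (distance) function joins the two embeddings, and a cost function maps the distance to a similarity score. The linear "embedding" below stands in for the shared-weight CNN; all shapes and values are illustrative assumptions, not the patented network.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))              # ONE weight matrix, shared by both branches

def embed(x):
    v = W @ x                             # same parameters on each branch
    return v / np.linalg.norm(v)          # L2-normalised feature vector

def similarity(img_a, img_b):
    # Connection function: distance between the two branch embeddings.
    d = np.linalg.norm(embed(img_a) - embed(img_b))
    # Cost function mapped to a similarity in (0, 1]; identical inputs give 1.
    return np.exp(-d)

x = rng.normal(size=16)                   # stand-in for a captured drug image
assert np.isclose(similarity(x, x), 1.0)  # same image on both branches
```

Weight sharing is the essential property: both the captured image and the reference feature image pass through identical parameters, so the distance between embeddings is a meaningful measure rather than an artifact of two differently trained networks.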
Most HIS deployments already hold patient demographics and prescription data in their databases and call them up during dispensing for a manual re-check. This device activates when a pharmacist calls up a patient's prescription data: the HIS displays the prescription on its original system page for manual review and simultaneously sends the prescription data to the artificial intelligence server 3. The deep learning recognition unit 31 supplies the identification data for every drug on that prescription and compares features against the camera 2 feed; once comparison is complete, the match probabilities are returned to the HIS, which shows on the work screen that each drug has been compared, along with its match rate. By the time the pharmacist finishes the manual drug review, the deep learning recognition unit 31 has likewise completed its item-by-item comparison; only after every drug has passed both the manual and the deep learning review may the HIS proceed with delivering the drugs, achieving the assisted-identification effect.
Please refer to Figures 2 through 4, which respectively show the first embodiment of the present invention, the second embodiment, and the feedback of the match rate results to the HIS. As shown, the present invention is a method for assisting in drug identification using deep learning object detection technology, executable by the device 100 of Figure 1; the steps in Figures 2 through 4 are detailed below using the components shown in Figure 1. The operation proceeds as follows. First, in step s11, after the HIS computer 1 reads the barcode on prescription 5, the prescription data associated with prescription 5 is displayed on the work screen of computer 1's screen 11 and transmitted to the artificial intelligence server 3 equipped with the deep learning recognition unit 31.
Referring to Figure 2, in step s11 pharmacist Z uses a barcode reader 7 connected to the HIS computer 1 to read the barcode on prescription 5. The prescription information appears on screen 11 of the HIS computer 1, which simultaneously forwards it to the artificial intelligence server 3 for extraction of the image feature data of the prescribed drugs.
In step s12, pharmacist Z places at least one medicine bag 6, containing at least one drug 61 per prescription 5, in the viewing area 21 of camera 2 and inspects all drugs 61 in the bag, while camera 2 transmits the image data of all drugs 61 in bag 6 to the artificial intelligence server 3.
Referring to Figure 2, in step s12 pharmacist Z takes medicine bag 6 and holds it in the viewing area 21 of camera 2 for manual inspection of the drug (Med); at the same time, camera 2 transmits the captured image data to the artificial intelligence server 3.
In step s13, the artificial intelligence server 3 uses the prescription data to link to a drug feature database 4 and extract the image feature data of all drugs 61 on prescription 5. The deep learning recognition unit 31 runs the deep learning algorithm to compare these features against the image data in a match probability analysis, computes each drug 61's match rate, and transmits the rates to the HIS, which displays on the work screen of computer 1's screen 11 that all drugs 61 in bag 6 have been compared, together with their match rates.
請參照第2圖,該步驟s13該人工智慧伺服器3接收到該攝影機2所傳送之影像資料,並將其與處方藥品的影像特徵資料進行吻合度機率分析,並將吻合率傳送至該HIS之電腦1。 Referring to Figure 2, in step s13, the AI server 3 receives the image data transmitted by the camera 2, performs a probability analysis on the image data and the image feature data of the prescription drug, and transmits the matching rate to the HIS computer 1.
請參照第3圖，該步驟s13該人工智慧伺服器3將接收來自該HIS之電腦1的處方資料，且同一時間可能接收多筆資料，於此圖中，因本筆處方箋5具有三個藥袋6a，故該藥事人員Z可依序或不依序於該攝影機2可視區21中進行人工檢視藥品(Med A、Med B、Med C)61A、61B、61C，於此同時，該人工智慧伺服器3亦接收到該攝影機2所傳送之連續影像資料，並將其與該處方箋5中之三項藥品影像特徵資料進行吻合度機率分析，並將所得的吻合率傳送至該HIS之電腦1。 Referring to Figure 3, in step s13, the AI server 3 receives prescription data from the HIS computer 1 and may receive multiple records at the same time. In this figure, because the prescription 5 has three medicine bags 6a, pharmacist Z can manually inspect the drugs (Med A, Med B, Med C) 61A, 61B, and 61C in the viewing area 21 of the camera 2 in any order. Meanwhile, the AI server 3 also receives the continuous image data transmitted by the camera 2, compares it with the image feature data of the three drugs on the prescription 5 in a match-probability analysis, and transmits the resulting match rates to the HIS computer 1.
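One way to picture the per-frame comparison in step s13 is as a similarity search between the pills detected in a frame and the prescribed drugs' feature records. The patent does not disclose the internal algorithm of the deep learning recognition unit 31; the embedding vectors, cosine similarity, and function names below are assumptions used purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (assumed representation)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_frame(detections, rx_features):
    """For each prescribed drug, keep the best similarity over all pill
    detections found in one camera frame."""
    return {
        drug: max(cosine(det, feat) for det in detections)
        for drug, feat in rx_features.items()
    }

# Two detected pills in one frame, matched against two prescribed drugs.
frame_detections = [[0.9, 0.1], [0.2, 0.8]]
rx_features = {"MedA": [1.0, 0.0], "MedB": [0.0, 1.0]}
scores = match_frame(frame_detections, rx_features)
```

Because the HIS has already narrowed the candidates to the drugs on the prescription, each frame only needs to be scored against a few feature records rather than an open-ended catalogue.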
在步驟s14中，該藥事人員Z以各藥品61A、61B、61C之吻合率做為輔助判斷之依據，完成該處方資料及該藥袋6a審視作業。 In step s14, pharmacist Z uses the match rates of drugs 61A, 61B, and 61C as an auxiliary basis for judgment and completes the review of the prescription data and the medicine bag 6a.
請參照第4圖，該步驟s14該藥事人員Z可於讀取處方箋5後啟動吻合率判斷之HIS之電腦1查看，並依此資料輔助判斷藥袋6a中藥品61A、61B、61C的正確與否。本實施例中藥品61A吻合率為90%、藥品61B吻合率為20%、藥品61C吻合率為70%，即代表此段檢視藥品61A、61B、61C的連續影像中，含有藥品61A的可能性為90%、藥品61B的可能性為20%、藥品61C的可能性為70%，而非代表藥品61A之藥袋6a中含有藥品61A的可能性為90%、藥品61B之藥袋6a中含有藥品61B的可能性為20%、藥品61C之藥袋6a中含有藥品61C的可能性為70%。 Referring to Figure 4, in step s14, after reading the prescription 5, pharmacist Z can view the match-rate judgment on the HIS computer 1 and use this data to help judge whether drugs 61A, 61B, and 61C in the medicine bag 6a are correct. In this embodiment, the match rate is 90% for drug 61A, 20% for drug 61B, and 70% for drug 61C. This means that, within the continuous images captured while inspecting drugs 61A, 61B, and 61C, the probability that drug 61A appears is 90%, drug 61B 20%, and drug 61C 70%; it does not mean that the medicine bag 6a for drug 61A contains drug 61A with 90% probability, that the bag for drug 61B contains drug 61B with 20% probability, or that the bag for drug 61C contains drug 61C with 70% probability.
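The distinction drawn above — a match rate over the continuous video segment, not over a particular bag — can be sketched as an aggregation across frames. Assuming, purely for illustration, that each frame yields one score per prescribed drug (the patent does not specify the aggregation rule; taking the maximum is this sketch's assumption), the segment-level rate reported to the HIS could be the best score any frame achieved:

```python
def aggregate_over_frames(frame_scores):
    """Segment-level match rate per drug: the highest score seen in any frame.
    A 90% rate thus means the drug likely appeared somewhere in the video
    segment, not that a specific medicine bag contains it."""
    rates = {}
    for scores in frame_scores:
        for drug, p in scores.items():
            rates[drug] = max(rates.get(drug, 0.0), p)
    return rates

# Two frames from the dispensing video, scored per prescribed drug.
frames = [
    {"MedA": 0.90, "MedB": 0.10, "MedC": 0.40},
    {"MedA": 0.75, "MedB": 0.20, "MedC": 0.70},
]
rates = aggregate_over_frames(frames)  # MedA 0.90, MedB 0.20, MedC 0.70
```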
在步驟s15中，該藥事人員Z於該HIS中回饋吻合率之正確性予該人工智慧伺服器3，再次訓練該深度學習辨識單元31。 In step s15, pharmacist Z feeds back, via the HIS, whether the match rates were correct to the artificial intelligence server 3, retraining the deep learning recognition unit 31.
請參照第4圖，該步驟s15該藥事人員Z完成處方及藥袋審視作業後，依審視狀況給予該HIS之電腦1適當回饋。本實施例中，如經藥事人員Z再次檢視藥品61A、61B、61C後，確認所有藥品61A、61B、61C皆與處方箋5即藥袋6a相符，則於HIS之電腦1給予「藥物確認」訊息，而此訊息將回報至該人工智慧伺服器3中，再次訓練該深度學習辨識單元31。 Referring to Figure 4, in step s15, after pharmacist Z completes the review of the prescription and the medicine bag, he or she gives the HIS computer 1 appropriate feedback based on the review result. In this embodiment, if pharmacist Z re-inspects drugs 61A, 61B, and 61C and confirms that all of them match the prescription 5 and the medicine bag 6a, a "Drug Confirmed" message is entered on the HIS computer 1; this message is reported back to the artificial intelligence server 3 to retrain the deep learning recognition unit 31.
請參照第4圖，該步驟s15如該藥事人員Z再次檢視藥品61A、61B、61C後，發現藥品61B之藥袋6a中藥品(第二藥品61B)有誤，則於HIS之電腦1給予「藥物錯誤」訊息，而此訊息將回報至該人工智慧伺服器3中，再次訓練該深度學習辨識單元31。 Referring to Figure 4, in step s15, if pharmacist Z re-inspects drugs 61A, 61B, and 61C and finds that the drug in the medicine bag 6a for drug 61B (the second drug 61B) is wrong, a "Drug Error" message is entered on the HIS computer 1; this message is reported back to the artificial intelligence server 3 to retrain the deep learning recognition unit 31.
於本發明之一較佳具體實施例中,該藥袋6係一正面含病患及藥品資訊,背面為透明塑膠膜,可將藥品61置入其中,且可利用該攝影機2拍攝取得清楚影像者。 In a preferred embodiment of the present invention, the medicine bag 6 has a front side containing patient and medicine information and a back side made of a transparent plastic film, into which the medicine 61 can be placed and which can be photographed by the camera 2 to obtain a clear image.
於本發明之一較佳具體實施例中,當藥品特徵比對後之吻合率偏低時,該HIS將提示該藥事人員「吻合率低,請再次確認藥品是否無誤?」;當該深度學習辨識單元31出現誤判或當藥品外觀有部份改變時,該藥事人員可於此時同步回饋該深度學習辨識單元31,使其重新學習並修正特徵檔,以達到持續使用、持續維護。 In a preferred embodiment of the present invention, if the match rate after drug feature comparison is low, the HIS will prompt the pharmacist, "Low match rate, please reconfirm whether the drug is correct?" If the deep learning recognition unit 31 makes a misjudgment or if the drug's appearance has partially changed, the pharmacist can provide feedback to the deep learning recognition unit 31, causing it to relearn and modify the feature file, thereby achieving continuous use and continuous maintenance.
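The low-match-rate prompt described in this embodiment might be organized as the following sketch. The 0.5 threshold and the function name are assumptions, since the embodiment does not fix a cut-off value:

```python
LOW_MATCH_THRESHOLD = 0.5  # assumed cut-off; the patent does not specify one

def review_prompts(match_rates, threshold=LOW_MATCH_THRESHOLD):
    """Return a HIS-style reminder for every drug whose match rate is low."""
    return {
        drug: "Low match rate, please reconfirm whether the drug is correct"
        for drug, rate in match_rates.items()
        if rate < threshold
    }

prompts = review_prompts({"MedA": 0.90, "MedB": 0.20, "MedC": 0.70})
# only MedB falls below the assumed threshold and triggers a prompt
```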
於本發明之一較佳具體實施例中，本方法主要目的係在最低的流程改變下，提供影像判斷之吻合率予使用者，並於吻合率偏低之藥品品項予以提示。 In a preferred embodiment of the present invention, the primary purpose of this method is to provide the user with the match rate from image-based judgment while changing the workflow as little as possible, and to flag drug items whose match rate is low.
於本發明之一較佳具體實施例中,本方法係配合該HIS,使用時,將自該HIS取得一批藥品資料及影像特徵,並將其與一段影像資料中之所有藥品進行匹配,計算各藥品出現之吻合率予使用者。 In a preferred embodiment of the present invention, the method is used in conjunction with the HIS. When used, a batch of drug data and image features are obtained from the HIS and matched with all drugs in a segment of image data. The matching rate of each drug's appearance is calculated and provided to the user.
於本發明之一較佳具體實施例中，本方法係僅提供吻合率，最終之判斷仍應由符合法規之藥事人員或醫療人員決定。 In a preferred embodiment of the present invention, this method only provides a match rate; the final judgment should still be made by legally qualified pharmacy or medical personnel.
於本發明之一較佳具體實施例中,本方法將於每次藥事人員或醫療人員決定並回饋後,再次訓練該深度學習辨識單元31,進而提升深度學習之效能。 In a preferred embodiment of the present invention, the method retrains the deep learning recognition unit 31 each time the pharmacy or medical staff makes a decision and provides feedback, thereby improving the effectiveness of deep learning.
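The feedback-driven retraining described in this embodiment could be organized around a small buffer of labeled examples. This is a hypothetical sketch: the patent states only that pharmacist feedback retrains the deep learning recognition unit 31, not how confirmations and error reports are collected or batched, so the class name, batch size, and tuple layout below are all assumptions:

```python
class FeedbackBuffer:
    """Collect pharmacist verdicts ('ok' / 'error') as labeled samples and
    signal when enough have accumulated to retrain the recognizer."""

    def __init__(self, batch_size=2):
        self.batch_size = batch_size
        self.samples = []

    def add(self, image_id, drug_code, verdict):
        """Record one verdict; return True when a retraining batch is ready."""
        self.samples.append((image_id, drug_code, verdict == "ok"))
        return len(self.samples) >= self.batch_size

    def drain(self):
        """Hand the accumulated batch to the (external) trainer and reset."""
        batch, self.samples = self.samples, []
        return batch

buf = FeedbackBuffer(batch_size=2)
buf.add("img-001", "MedA", "ok")             # "Drug Confirmed" feedback
ready = buf.add("img-002", "MedB", "error")  # "Drug Error" feedback
```

Accumulating both confirmations and corrections gives the retraining step positive and negative examples, which is what lets the unit recover from misjudgments and from drugs whose appearance has changed.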
藉此,本發明係一種利用深度學習物件偵測技術輔助辨識藥品之方法及裝置,用以輔助目前藥事人員執行藥品優良調劑作業準則第3條所稱之調劑行為,降低其人為錯誤的發生率。其技術特點如下: This invention provides a method and device that utilizes deep learning object detection technology to assist in drug identification. This method is intended to assist pharmacists in performing the dispensing practices outlined in Article 3 of the Guidelines for Good Drug Dispensing Practices, thereby reducing the incidence of human error. Its technical features are as follows:
1.本裝置將針對藥品調劑過程進行輔助判斷，可降低人為錯誤的發生率。 1. This device assists judgment during the drug dispensing process, reducing the incidence of human error.
2.本發明所呈現之吻合率即為提供藥事人員更快速且更正確的輔助判斷，用以減輕藥事人員的工作壓力，提升整體效率，在醫療人員短缺且量能緊繃的老年化社會中，有其特殊性與必要性。 2. The match rate presented by this invention gives pharmacists faster and more accurate decision support, easing their workload and improving overall efficiency; this is particularly significant and necessary in an aging society with a shortage of medical personnel and strained capacity.
3.本裝置結合醫院資訊管理系統與深度學習物件偵測技術,由醫院資訊管理系統將判斷範圍縮小,再利用深度學習物件偵測技術完成辨識,使其精確度與效率提升,並能在不改變目前的藥事行為下,完成輔助辨識效果。 3. This device combines the hospital information management system with deep learning object detection technology. The hospital information management system narrows the judgment scope, and deep learning object detection technology is then used to complete the identification, improving its accuracy and efficiency. It can also achieve assisted identification without changing current pharmaceutical practices.
綜上所述，本發明係一種利用深度學習物件偵測技術輔助辨識藥品之方法及裝置，可有效改善習用之種種缺點，係利用既有醫療資訊系統(Hospital Information System,HIS)中的處方資訊，以該處方所有藥物品項做為判斷依據，提供物件吻合率予使用者，在不影響藥事人員工作型態與效率的情況下，增加一層輔助檢查層，藉此提升藥事人員調劑作業的正確率，降低病人安全通報系統(Taiwan Patient-safety Reporting system,TPR)中藥物事件的發生率，並以使用者反饋做為深度學習之驗證基準，持續優化裝置判斷能力，進而使本發明之產生能更進步、更實用、更符合使用者之所須，確已符合發明專利申請之要件，爰依法提出專利申請。 In summary, the present invention is a method and device for assisting drug identification with deep learning object detection technology, which can effectively remedy the various shortcomings of conventional practice. It uses prescription information from the existing Hospital Information System (HIS), takes all drug items on the prescription as the basis for judgment, and provides an object match rate to the user, adding an auxiliary inspection layer without affecting pharmacists' work patterns or efficiency. This improves the accuracy of pharmacists' dispensing work, reduces the incidence of medication events in the Taiwan Patient-safety Reporting system (TPR), and uses user feedback as the validation benchmark for deep learning, continuously optimizing the device's judgment capability. The invention is thus more advanced, more practical, and better suited to users' needs; it indeed meets the requirements for an invention patent application, and a patent application is therefore filed in accordance with the law.
惟以上所述者,僅為本發明之較佳實施例而已,當不能以此限定本發明實施之範圍;故,凡依本發明申請專利範圍及發明說明書內容所作之簡單的等效變化與修飾,皆應仍屬本發明專利涵蓋之範圍內。 However, the above description is merely a preferred embodiment of the present invention and should not be used to limit the scope of implementation of the present invention. Therefore, any simple equivalent changes and modifications made within the scope of the patent application and the contents of the invention specification should still fall within the scope of coverage of the present patent.
100:裝置 100: Device
1:電腦 1: Computer
2:攝影機 2: Camera
3:人工智慧伺服器 3: Artificial Intelligence Server
31:深度學習辨識單元 31: Deep Learning Recognition Unit
311:藥品取像與影像去模糊模組 311: Pharmaceutical Imaging and Image Deblurring Module
312:高反光影像還原模組 312: Highly reflective image restoration module
313:目標物偵測與實例分割模組 313: Target Detection and Instance Segmentation Module
314:藥品影像再辨識模組 314: Pharmaceutical Image Re-identification Module
4:藥品特徵資料庫 4: Drug Characteristics Database
Claims (17)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW113107479A TWI897267B (en) | 2024-03-01 | 2024-03-01 | Method and device for drug identification using deep learning object detection technology |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI897267B true TWI897267B (en) | 2025-09-11 |
| TW202536884A TW202536884A (en) | 2025-09-16 |
Family
ID=97831433
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW113107479A TWI897267B (en) | 2024-03-01 | 2024-03-01 | Method and device for drug identification using deep learning object detection technology |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI897267B (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107545150A (en) * | 2017-10-13 | 2018-01-05 | 张晨 | Medicine identifying system and its recognition methods based on deep learning |
| CN109190680A (en) * | 2018-08-11 | 2019-01-11 | 复旦大学 | The detection and classification method of Medicines image based on deep learning |
| CN111382622A (en) * | 2018-12-28 | 2020-07-07 | 泰芯科技(杭州)有限公司 | Medicine identification system based on deep learning and implementation method thereof |
| CN112668409A (en) * | 2020-12-14 | 2021-04-16 | 合肥富煌君达高科信息技术有限公司 | Visual measurement system and method for identifying medicine type by using same |
| TW202125529A (en) * | 2019-12-27 | 2021-07-01 | 廣達電腦股份有限公司 | Medical image recognition system and medical image recognition method |
| KR20220117012A (en) * | 2021-02-16 | 2022-08-23 | 양은영 | Method of providing drug information based on artificial intelligence |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202536884A (en) | 2025-09-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11963846B2 (en) | Systems and methods for integrity analysis of clinical data | |
| US12106848B2 (en) | Systems and methods for integrity analysis of clinical data | |
| Hasan et al. | Deep learning based detection and segmentation of COVID-19 & pneumonia on chest X-ray image | |
| Ahmedt-Aristizabal et al. | Deep facial analysis: A new phase I epilepsy evaluation using computer vision | |
| US9842257B2 (en) | Automated pharmaceutical pill identification | |
| US20070239482A1 (en) | Vision Based Data Acquisition System and Method For Acquiring Medical and Other Information | |
| US20200151422A1 (en) | Apparatus and Method for Determination of Medication Location | |
| WO2022011342A9 (en) | Systems and methods for integrity analysis of clinical data | |
| CN113012783A (en) | Medicine rechecking method and device, computer equipment and storage medium | |
| CN113298089A (en) | Venous transfusion liquid level detection method based on image processing | |
| Suksawatchon et al. | Shape recognition using unconstrained pill images based on deep convolution network | |
| Jayasingh et al. | Medical image diagnosis using deep learning classifiers for COVID-19 | |
| CN118116540A (en) | Intelligent medicine management system, medicine management method and storage medium | |
| TWI897267B (en) | Method and device for drug identification using deep learning object detection technology | |
| Li | DA-Net: A classification-guided network for dental anomaly detection from dental and maxillofacial images | |
| CN115578370B (en) | Brain image-based metabolic region abnormality detection method and device | |
| KR102598969B1 (en) | Pill identification system | |
| Roy | Identification in Drug Prescription Using Artificial Intelligence | |
| Banumathi et al. | Diagnosis of dental deformities in cephalometry images using support vector machine | |
| CN114445851A (en) | Video-based conversation scene abnormity detection method, terminal device and storage medium | |
| Chen et al. | An Augmented Reality-based Chemotherapeutic Drug Dispensing Assistant System Using Deep Learning Techniques | |
| TWI695347B (en) | Method and system for sorting and identifying medication via its label and/or package | |
| Abaza | High performance image processing techniques in automated identification systems | |
| Ganesh et al. | Triple Sensors Integrated Deep Learning based Network for Detection of Defects in Medicines | |
| CN118334639B (en) | Medicine rechecking method and system |