CN109637669B - Method, device and storage medium for generating treatment plan based on deep learning - Google Patents
Method, device and storage medium for generating treatment plan based on deep learning
- Publication number
- CN109637669B (application CN201811407145.8A)
- Authority
- CN
- China
- Prior art keywords
- treatment plan
- patient
- deep learning
- symptoms
- diagnostic information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
Description
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to a method for generating a treatment plan based on deep learning, a device for generating a treatment plan based on deep learning, and a computer-readable storage medium.
Background Art
At present, hospitals and medical institutions generate a large volume of electronic medical record text every day. An electronic medical record is a highly specialized medical text that constitutes the original record of a patient's entire process of diagnosis and treatment in the hospital; it contains information such as the patient's progress notes, examination and test results, doctor's orders, surgical records and nursing records.
In recent years, with the rapid development of big data and artificial intelligence, machine-learning techniques have begun to be applied to assisted diagnosis and treatment in order to intelligently generate and recommend treatments for patients and help doctors formulate treatment plans quickly.
However, existing techniques for intelligently generating treatment plans do not take into account the relationships among a patient's diagnostic information over consecutive periods of time; they can only predict the treatment plan for the patient's current visit and cannot comprehensively predict the multiple treatment plans a patient will receive over the entire treatment cycle.
Summary of the Invention
The main purpose of the present invention is to provide a method for generating a treatment plan based on deep learning, a device for generating a treatment plan based on deep learning, and a computer-readable storage medium, so as to solve the technical problem that the prior art cannot comprehensively predict the multiple treatment plans a patient will receive over the entire treatment cycle.
To achieve the above object, the present invention provides a method for generating a treatment plan based on deep learning, which includes the following steps:
obtaining diagnostic information of a patient to be processed;
inputting the diagnostic information into a deep neural network model for processing to obtain a prediction result of the current treatment plan of the patient to be processed;
inputting the prediction result of the current treatment plan into a sequence-to-sequence model for processing to obtain a prediction result of the future treatment plan of the patient to be processed.
Preferably, the step of inputting the diagnostic information into the deep neural network model for processing to obtain the prediction result of the current treatment plan of the patient to be processed includes:
inputting the diagnostic information into the deep neural network model for processing to obtain hidden vectors corresponding to the diagnostic information;
inputting the hidden vectors corresponding to the diagnostic information into a self-attention mechanism layer for processing to obtain weights corresponding to the hidden vectors;
obtaining weighted hidden vectors according to the hidden vectors and the weights;
obtaining the prediction result of the current treatment plan of the patient to be processed according to the weighted hidden vectors.
Preferably, the step of inputting the hidden vectors corresponding to the diagnostic information into the self-attention mechanism layer for processing to obtain the weights corresponding to the hidden vectors includes:
inputting the hidden vectors corresponding to the diagnostic information into the self-attention mechanism layer for processing;
the self-attention mechanism layer learning the weights corresponding to the hidden vectors according to level information of the diagnostic information.
Preferably, the level information of the diagnostic information includes: main diagnoses, other diagnoses, injury diagnoses and other common minor conditions.
Preferably, the deep neural network model includes a multi-layer long short-term memory (LSTM) network or a multi-layer gated recurrent unit (GRU) network, and the deep neural network model and the sequence-to-sequence model are jointly trained according to the diagnostic information of a preset number of patients and the treatment plans corresponding to that diagnostic information.
Preferably, the diagnostic information of the preset number of patients is the diagnostic information of patients who have been admitted no fewer than a preset number of times.
Preferably, the step of inputting the prediction result of the current treatment plan into the sequence-to-sequence model for processing to obtain the prediction result of the future treatment plan of the patient to be processed includes:
inputting the prediction result of the current treatment plan into the sequence-to-sequence model for processing to obtain a prediction result of the next treatment plan of the patient to be processed;
determining whether the number of prediction results of treatment plans for the patient to be processed has reached a preset number;
if the number of prediction results of treatment plans for the patient to be processed has not reached the preset number, taking the prediction result of the next treatment plan as the prediction result of the current treatment plan and returning to the step of inputting the prediction result of the current treatment plan into the sequence-to-sequence model for processing.
Preferably, after the step of obtaining the diagnostic information of the patient to be processed, the method further includes:
obtaining a coding vector corresponding to the diagnostic information of the patient according to the international disease coding standard;
inputting the coding vector corresponding to the diagnostic information of the patient into the deep neural network model for processing to obtain the prediction result of the current treatment plan of the patient to be processed.
To achieve the above object, the present invention further provides a device for generating a treatment plan based on deep learning, the device including:
a memory, a processor, and a program for generating a treatment plan based on deep learning that is stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for generating a treatment plan based on deep learning described above.
To achieve the above object, the present invention further provides a computer-readable storage medium on which a program for generating a treatment plan based on deep learning is stored, and when the program is executed by a processor, the steps of the method for generating a treatment plan based on deep learning described above are implemented.
The method for generating a treatment plan based on deep learning, the device for generating a treatment plan based on deep learning and the computer-readable storage medium provided by the present invention obtain the diagnostic information of a patient to be processed; input the diagnostic information into a deep neural network model for processing to obtain a prediction result of the current treatment plan of the patient; and input the prediction result of the current treatment plan into a sequence-to-sequence model for processing to obtain a prediction result of the patient's future treatment plans. In this way, the patient's treatment plan is generated by a feed-forward deep neural network with temporal connections, and the patient's future treatment plans are predicted by the sequence-to-sequence model, which improves the accuracy of the predicted treatment plans.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware operating environment of a terminal involved in embodiments of the present invention;
FIG. 2 is a schematic flowchart of a first embodiment of the method for generating a treatment plan based on deep learning according to the present invention;
FIG. 3 is a schematic flowchart of a second embodiment of the method for generating a treatment plan based on deep learning according to the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The present invention provides a method for generating a treatment plan based on deep learning, which generates a patient's treatment plan through a feed-forward deep neural network with temporal connections and predicts the patient's future treatment plans through a sequence-to-sequence model, thereby improving the accuracy of the predicted treatment plans.
As shown in FIG. 1, FIG. 1 is a schematic diagram of the hardware operating environment of a terminal involved in embodiments of the present invention.
The terminal in the embodiments of the present invention may be a device for generating a treatment plan based on deep learning, or a server.
As shown in FIG. 1, the terminal may include a processor 1001, such as a CPU, a memory 1002 and a communication bus 1003. The communication bus 1003 is used to realize connection and communication between the components of the terminal. The memory 1002 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory. Optionally, the memory 1002 may also be a storage device independent of the processor 1001.
Those skilled in the art will understand that the terminal structure shown in FIG. 1 does not limit the terminal of the embodiments of the present invention, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
As shown in FIG. 1, the memory 1002, as a computer storage medium, may include a program for generating a treatment plan based on deep learning.
In the terminal shown in FIG. 1, the processor 1001 may be used to call the program for generating a treatment plan based on deep learning stored in the memory 1002 and perform the following operations:
obtaining diagnostic information of a patient to be processed;
inputting the diagnostic information into a deep neural network model for processing to obtain a prediction result of the current treatment plan of the patient to be processed;
inputting the prediction result of the current treatment plan into a sequence-to-sequence model for processing to obtain a prediction result of the future treatment plan of the patient to be processed.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
inputting the diagnostic information into the deep neural network model for processing to obtain hidden vectors corresponding to the diagnostic information;
inputting the hidden vectors corresponding to the diagnostic information into a self-attention mechanism layer for processing to obtain weights corresponding to the hidden vectors;
obtaining weighted hidden vectors according to the hidden vectors and the weights;
obtaining the prediction result of the current treatment plan of the patient to be processed according to the weighted hidden vectors.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
inputting the hidden vectors corresponding to the diagnostic information into the self-attention mechanism layer for processing;
the self-attention mechanism layer learning the weights corresponding to the hidden vectors according to the level information of the diagnostic information.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
the level information of the diagnostic information includes: main diagnoses, other diagnoses, injury diagnoses and other common minor conditions.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
the deep neural network model includes a multi-layer long short-term memory (LSTM) network or a multi-layer gated recurrent unit (GRU) network, and the deep neural network model and the sequence-to-sequence model are jointly trained according to the diagnostic information of a preset number of patients and the treatment plans corresponding to that diagnostic information.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
the diagnostic information of the preset number of patients is the diagnostic information of patients who have been admitted no fewer than a preset number of times.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
inputting the prediction result of the current treatment plan into the sequence-to-sequence model for processing to obtain a prediction result of the next treatment plan of the patient to be processed;
determining whether the number of prediction results of treatment plans for the patient to be processed has reached a preset number;
if the number of prediction results of treatment plans for the patient to be processed has not reached the preset number, taking the prediction result of the next treatment plan as the prediction result of the current treatment plan and returning to the step of inputting the prediction result of the current treatment plan into the sequence-to-sequence model for processing.
Further, the processor 1001 may call the program for generating a treatment plan based on deep learning stored in the memory 1002 and also perform the following operations:
obtaining a coding vector corresponding to the diagnostic information of the patient according to the international disease coding standard;
inputting the coding vector corresponding to the diagnostic information of the patient into the deep neural network model for processing to obtain the prediction result of the current treatment plan of the patient to be processed.
Referring to FIG. 2, in one embodiment, the method for generating a treatment plan based on deep learning includes:
Step S10: obtaining diagnostic information of a patient to be processed.
The diagnostic information of the patient to be processed comes from the patient's electronic medical record, which may be an outpatient record or an admission record. It should be understood that one piece of diagnostic information contains only the diagnostic data corresponding to a single diagnosis record of the illness; the diagnostic data may include the name of the diagnosed disease and disease-related symptoms, such as fever, sore throat, tinnitus, runny nose, oral ulcers, arrhythmia, chest tightness, dizziness and so on.
Step S20: inputting the diagnostic information into a deep neural network model for processing to obtain a prediction result of the current treatment plan of the patient to be processed.
First, the word vectors corresponding to the diagnostic information of the patient to be processed can be obtained from pre-trained word vectors. Specifically, a pre-built continuous bag-of-words (CBOW) model can be used to extract the word vectors corresponding to the diagnostic information of the patient to be processed. The training input of the CBOW model is the word vectors of the context words surrounding a given feature word, and the output is the word vector of that feature word. For example, if a sub-item of the diagnostic data in the diagnostic information contains "sore throat", "runny nose" and "dizziness", the coding vector extracted by the CBOW model may correspond to "cold".
Specifically, when constructing the CBOW model, the important model parameters are initialized as follows: the learning rate is set to 0.015, the number of iterations to 5, the minimum word frequency to 5, the window size to 6, the feature vector dimension to 64, the batch size to 500, and the down-sampling threshold to 1e-3.
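As a concrete illustration of these settings, below is a minimal sketch of training such a CBOW model with gensim's Word2Vec (gensim 4.x API assumed; sg=0 selects the CBOW architecture); gensim itself and the tiny example corpus are assumptions made for illustration and are not named in the original description.

```python
# Minimal CBOW training sketch (gensim 4.x assumed; hyperparameters taken from the text).
from gensim.models import Word2Vec

# Hypothetical tokenized diagnosis records; a real corpus would come from the EMR data.
diagnosis_corpus = [["sore throat", "runny nose", "dizziness", "cold"]] * 10

cbow = Word2Vec(
    sentences=diagnosis_corpus,
    sg=0,             # 0 selects the CBOW architecture
    alpha=0.015,      # learning rate
    epochs=5,         # number of iterations
    min_count=5,      # minimum word frequency
    window=6,         # window size
    vector_size=64,   # feature vector dimension
    batch_words=500,  # batch size
    sample=1e-3,      # down-sampling threshold
)
vector = cbow.wv["cold"]   # 64-dimensional word vector for "cold"
```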
Further, since the same diagnosis may be expressed with different words in different diagnostic information, the diagnostic information can first be converted into diagnosis codes according to the International Classification of Diseases (ICD-9), and the diagnosis codes can then be converted into word vectors using a one-hot encoding algorithm or the CBOW model. ICD-9 is a system that classifies diseases by rules according to certain characteristics of the diseases and represents them with codes.
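For the one-hot alternative mentioned above, a sketch of mapping ICD-9 diagnosis codes to one-hot vectors might look as follows; the three-code vocabulary is purely hypothetical, and in practice the vocabulary would be built from the coded EMR corpus.

```python
# One-hot encoding of ICD-9 diagnosis codes (vocabulary shown here is hypothetical).
import numpy as np

icd9_vocab = ["460", "401.9", "250.00"]   # e.g. common cold, hypertension, diabetes
code_to_index = {code: i for i, code in enumerate(icd9_vocab)}

def one_hot(code: str) -> np.ndarray:
    """Return the one-hot coding vector for a single ICD-9 diagnosis code."""
    vec = np.zeros(len(icd9_vocab), dtype=np.float32)
    vec[code_to_index[code]] = 1.0
    return vec

print(one_hot("401.9"))   # [0. 1. 0.]
```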
In this way, obtaining the word vectors of the diagnostic information or diagnosis codes through the pre-built CBOW continuous bag-of-words model makes it possible to map the original diagnosis records in the diagnostic information to specific disease names and disease-related symptoms, and to convert the corresponding disease names and symptoms into a numerical form that the deep neural network model can accept, namely the word vectors.
In this embodiment, the deep neural network model includes a multi-layer LSTM (long short-term memory) network or a multi-layer GRU (gated recurrent unit) network; preferably, the deep neural network model includes a multi-layer GRU network. It should be noted that both the LSTM network and the GRU network are feed-forward neural networks with temporal connections, so the order in which the input vectors are fed into the LSTM or GRU network affects the training result of the neural network. For example, the output of the neural network model may differ depending on whether "cookies" is input before "milk" or "milk" before "cookies".
Therefore, since the order of the word vectors in the word vector sequence generated from a piece of diagnostic information is consistent with the order of the corresponding tokens in that diagnostic information, the word vector sequence can be regarded as time-series data. The word vectors in the sequence are input one by one into the LSTM or GRU network for processing, the LSTM or GRU network outputs hidden vectors in one-to-one correspondence with the input word vectors, and finally the fully connected layer of the deep neural network model obtains the prediction result of the treatment plan corresponding to that diagnostic information from these hidden vectors.
It should be noted that the prediction result of the treatment plan corresponding to the diagnostic information of the patient to be processed is a multi-label probability vector whose dimension equals the preset number of treatment options, where the treatment options include drug therapy, injections, infusions, surgery or other means of treatment. Each label corresponds to one treatment option, and each label's probability is the probability that the patient to be processed receives that treatment option at the current visit; this probability value reflects the reliability or adoptability of that treatment option.
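A sketch of such a diagnosis encoder is shown below: a multi-layer GRU over the word-vector sequence followed by a fully connected layer with a sigmoid output per treatment option. PyTorch is assumed, the mean-pooling of the hidden vectors is a stand-in for the attention weighting described in the second embodiment below, and the sizes follow the values given elsewhere in this description (64-dimensional vectors, 20 layers, 1,500 treatment codes).

```python
# Sketch of the GRU-based treatment plan predictor (PyTorch assumed).
import torch
import torch.nn as nn

class TreatmentPredictor(nn.Module):
    def __init__(self, embed_dim=64, hidden_dim=64, num_layers=20, num_treatments=1500):
        super().__init__()
        # Multi-layer GRU over the word-vector sequence of one diagnosis record.
        self.gru = nn.GRU(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        # Fully connected layer mapping the pooled hidden vector to label probabilities.
        self.fc = nn.Linear(hidden_dim, num_treatments)

    def forward(self, word_vectors):            # (batch, seq_len, embed_dim)
        hidden_seq, _ = self.gru(word_vectors)  # one hidden vector per word vector
        pooled = hidden_seq.mean(dim=1)         # stand-in for the attention weighting
        return torch.sigmoid(self.fc(pooled))   # multi-label probability vector

probs = TreatmentPredictor()(torch.randn(2, 10, 64))   # shape (2, 1500)
```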
Step S30: inputting the prediction result of the current treatment plan into a sequence-to-sequence model for processing to obtain a prediction result of the future treatment plan of the patient to be processed.
The prediction result of the current treatment plan obtained in step S20 can be input into the sequence-to-sequence model to obtain the prediction result of the future treatment plan of the patient to be processed. The sequence-to-sequence model is an end-to-end sequence-to-sequence model, which is generally implemented with an encoder-decoder framework; the encoder and decoder parts may handle arbitrary text, speech, image or video data, and the sequence-to-sequence model can be built with deep learning networks such as a CNN (convolutional neural network), an RNN (recurrent neural network), an LSTM network or a GRU network. Preferably, this embodiment uses an RNN to build the sequence-to-sequence model.
The prediction result of the current treatment plan of the patient to be processed, i.e. the multi-label probability vector, is input as a treatment sequence into one RNN, which extracts the sequence information of the multi-label probability vector and produces a fixed-length prediction encoding vector; this RNN is called the encoder. The prediction encoding vector is then input into another RNN and decoded into a decoding vector; this RNN is generally called the decoder. In this way a direct mapping from one treatment sequence to the next treatment sequence is obtained, and decoding by the decoder yields a treatment sequence that is the prediction result of the patient's next treatment plan, which is also a probability vector.
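A minimal encoder-decoder sketch along these lines is given below; PyTorch and GRU cells are assumptions (the text only says RNN), and feeding the current plan to the decoder as its input is a simplification made for illustration.

```python
# Encoder-decoder sketch mapping one treatment probability vector to the next one.
import torch
import torch.nn as nn

class TreatmentSeq2Seq(nn.Module):
    def __init__(self, num_treatments=1500, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(num_treatments, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(num_treatments, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_treatments)

    def forward(self, current_plan):              # (batch, 1, num_treatments)
        _, state = self.encoder(current_plan)     # fixed-length prediction encoding vector
        decoded, _ = self.decoder(current_plan, state)
        return torch.sigmoid(self.out(decoded))   # next-plan probability vector

next_plan = TreatmentSeq2Seq()(torch.rand(1, 1, 1500))   # shape (1, 1, 1500)
```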
Further, the prediction result of the patient's next treatment plan can be input into the sequence-to-sequence model for processing to obtain the prediction result of the treatment plan after the next one.
Understandably, a preset number of treatment plan prediction results for the patient to be processed can also be set in advance. Each time the sequence-to-sequence model produces a new treatment plan prediction result, the number of treatment plan prediction results for that patient is counted; if the number has not reached the preset number, the currently obtained treatment plan prediction result is input into the sequence-to-sequence model again to obtain a new treatment plan, as sketched below. For example, the treatment plan generated this time and the treatment plan generated next time may both use the same drug as the treatment means, but the dosage in the next plan may be lower than in the current one.
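The iterative rollout described in this and the preceding paragraph reduces to a simple loop; `predictor` and `seq2seq` refer to the hypothetical modules sketched earlier, and `preset_number` is the preset number of treatment plan predictions.

```python
# Keep feeding the latest predicted plan back into the sequence-to-sequence model
# until the preset number of treatment plan predictions has been reached.
def rollout_treatment_plans(predictor, seq2seq, word_vectors, preset_number=3):
    current_plan = predictor(word_vectors).unsqueeze(1)   # current-visit prediction
    plans = [current_plan]
    while len(plans) < preset_number:
        current_plan = seq2seq(current_plan)              # next treatment plan
        plans.append(current_plan)
    return plans    # [current plan, next plan, plan after next, ...]
```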
In this embodiment, obtaining a direct mapping from one treatment sequence (i.e. one treatment plan) to the next treatment sequence through the sequence-to-sequence model is an innovative technical solution. Its implementation does not depend on external features or an additional training process; it yields a patient's treatment path over the entire treatment cycle, avoids the cumbersome processing involved in measuring the similarity between patients' treatment paths, and provides the patient with progressive treatment plan suggestions for reference, thereby better assisting treatment.
In this embodiment, the deep neural network model and the sequence-to-sequence model are jointly trained according to a preset amount of diagnostic information and the treatment plans corresponding to that diagnostic information.
Specifically, the number of treatment plans that the sequence-to-sequence model needs to predict is determined first, and from this number it is determined how many diagnosis records each patient's diagnostic information in the training data samples must contain at minimum; for example, if three treatment plans need to be predicted, each patient's diagnostic information in the training data samples must contain at least three diagnosis records. During training, the first diagnostic information of the patient to be processed is first input into the deep neural network model for processing to obtain the prediction result of the corresponding first treatment plan; the error is then computed from the prediction result of the first treatment plan and the true first treatment plan, and the parameters of the deep neural network model are updated. The predicted first treatment plan is then input into the sequence-to-sequence model for processing to obtain the prediction result of the second treatment plan; the error is computed from the prediction result of the second treatment plan and the true second treatment plan, and the parameters of the sequence-to-sequence model are updated. It is then determined whether the number of predicted treatment plans has reached the preset number; if not, the prediction result of the second treatment plan is fed back into the sequence-to-sequence model, and the above steps of prediction, error calculation and sequence-to-sequence parameter updating are repeated.
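One joint training step under this procedure might be sketched as follows; PyTorch, binary cross-entropy as the multi-label loss, and a single accumulated backward pass over an optimizer that covers both models (rather than the stepwise per-model updates described above) are simplifying assumptions.

```python
# Sketch of one joint training step (loss choice and optimizer are illustrative).
import torch.nn as nn

def joint_training_step(predictor, seq2seq, word_vectors, true_plans, optimizer,
                        preset_number=3):
    """true_plans: list of multi-hot treatment vectors, one per diagnosis record,
    each shaped like the corresponding prediction (batch, 1, num_treatments)."""
    criterion = nn.BCELoss()                      # multi-label binary cross-entropy
    optimizer.zero_grad()                         # optimizer holds both models' parameters

    plan = predictor(word_vectors).unsqueeze(1)   # first treatment plan prediction
    loss = criterion(plan, true_plans[0])
    for step in range(1, preset_number):          # remaining plans through the seq2seq model
        plan = seq2seq(plan)
        loss = loss + criterion(plan, true_plans[step])

    loss.backward()                               # errors update both models jointly
    optimizer.step()
    return loss.item()
```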
The pre-built neural network model includes a multi-layer LSTM or GRU network; specifically, the number of network layers is initialized to 20 and the number of hidden units per layer is initialized to 64. The training of the neural network model and the sequence-to-sequence model is completed by running the obtained training data through multiple iterations of the two models.
It should be noted that, in order to improve the efficiency and accuracy of model training, when the treatment plans corresponding to the diagnostic information in the training data are preprocessed, only the 1,500 most common drugs and treatment means are selected, and each drug or treatment means is represented by a code, serving as the basic database from which the neural network model generates treatment plans.
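The preprocessing that keeps only the 1,500 most common drugs and treatment means and assigns each one a code might look like the sketch below; the record structure is hypothetical.

```python
# Build a code table for the 1,500 most common drugs and treatment means.
from collections import Counter

def build_treatment_vocabulary(treatment_records, top_k=1500):
    """treatment_records: iterable of lists of drug/treatment names from the EMR data."""
    counts = Counter(item for record in treatment_records for item in record)
    most_common = [name for name, _ in counts.most_common(top_k)]
    return {name: code for code, name in enumerate(most_common)}   # name -> integer code
```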
In one embodiment, the diagnostic information of a patient to be processed is obtained; the diagnostic information is input into the deep neural network model for processing to obtain the prediction result of the current treatment plan of the patient; and the prediction result of the current treatment plan is input into the sequence-to-sequence model for processing to obtain the prediction result of the future treatment plan of the patient. In this way, the patient's current treatment plan is generated by a feed-forward deep neural network with temporal connections, the patient's future treatment plans are predicted by the sequence-to-sequence model, and the patient is provided with progressive treatment plan suggestions for reference, thereby better assisting treatment.
In a second embodiment, as shown in FIG. 3 and on the basis of the embodiment shown in FIG. 2, the step of inputting the diagnostic information into the deep neural network model for processing to obtain the prediction result of the current treatment plan of the patient to be processed includes:
Step S40: inputting the diagnostic information into the deep neural network model for processing to obtain hidden vectors corresponding to the diagnostic information.
Step S41: inputting the hidden vectors corresponding to the diagnostic information into a self-attention mechanism layer for processing to obtain weights corresponding to the hidden vectors.
Step S42: obtaining weighted hidden vectors according to the hidden vectors and the weights.
Step S43: obtaining the prediction result of the current treatment plan of the patient to be processed according to the weighted hidden vectors.
In this embodiment, using the pre-built (or trained) deep neural network model, the word vectors corresponding to the diagnostic information are taken as the input vectors of the model; based on the input vectors, the model outputs the corresponding output vectors, which serve as the hidden vectors corresponding to the diagnostic information, and these hidden vectors are the feature representation of the diagnostic information.
Since the patient's symptoms recorded in the diagnostic information differ in severity, the diagnostic information can be divided according to its symptoms into four symptom levels (level one, level two, level three and level four), which constitute the level information, where level four corresponds to the symptoms of the main diagnosis, level three to the symptoms of other diagnoses, level two to injury-type symptoms, and level one to other common minor symptoms, in decreasing order of importance.
Specifically, before the treatment plan is generated from the hidden vectors, a self-attention mechanism is introduced as a weight layer: the hidden vectors are input into the weight layer, and the self-attention mechanism learns autonomously from the symptom level to which each hidden vector belongs, generates the weight corresponding to that symptom level, and assigns the weight to the corresponding hidden vector. Of course, when the deep neural network model is built, it may also be built so as to include this weight layer.
It should be noted that the self-attention mechanism differs from the traditional attention mechanism in that the traditional attention mechanism is essentially an alignment operation, i.e. the sentence into which attention is introduced is aligned with external information, whereas the self-attention mechanism does not need to introduce external information to update its parameters. The self-attention mechanism brings a large improvement on sequence learning tasks; by applying a weighted transformation to the source data sequence, it can effectively improve the performance of sequence-to-sequence models.
Specifically, the hidden vector sequence H of length n output by the deep neural network model can be expressed as:
H = (h1, h2, …, hn)
The weight formula of the self-attention mechanism is:
a = softmax(Ws2 · tanh(Ws1 · H^T))
where a is the weight sequence corresponding to the hidden vector sequence H, H^T is the transpose of the hidden vector sequence H, and Ws1 and Ws2 are model parameters of the self-attention mechanism that are continuously updated and optimized during the training iterations.
After the self-attention mechanism, the hidden vector sequence M with the weight values applied can be expressed as:
M = aH
The weighted hidden vectors are processed by the fully connected layer to obtain the prediction result of the treatment plan corresponding to the diagnostic information of the patient to be processed. When this prediction result is input into the sequence-to-sequence model to generate the patient's treatment plans, hidden vectors with larger weight values naturally receive more "attention" from the sequence-to-sequence model, which can improve the accuracy of the future treatment plans generated by the sequence-to-sequence model for the patient to be processed.
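A sketch of this self-attention weight layer, implementing a = softmax(Ws2 · tanh(Ws1 · H^T)) and M = aH, is given below; PyTorch is assumed and the attention dimension d_a is an illustrative choice not specified in the text.

```python
# Self-attention weight layer: a = softmax(Ws2 * tanh(Ws1 * H^T)), M = aH.
import torch
import torch.nn as nn

class SelfAttentionWeightLayer(nn.Module):
    def __init__(self, hidden_dim=64, d_a=32):
        super().__init__()
        self.Ws1 = nn.Linear(hidden_dim, d_a, bias=False)   # Ws1
        self.Ws2 = nn.Linear(d_a, 1, bias=False)            # Ws2

    def forward(self, H):                                # H: (batch, n, hidden_dim)
        scores = self.Ws2(torch.tanh(self.Ws1(H)))       # one score per hidden vector
        a = torch.softmax(scores, dim=1)                 # weights over the n hidden vectors
        M = torch.bmm(a.transpose(1, 2), H)              # M = aH, weighted hidden vector
        return M, a.squeeze(-1)

M, weights = SelfAttentionWeightLayer()(torch.randn(2, 10, 64))   # M: (2, 1, 64)
```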
In one embodiment, the diagnostic information is input into the deep neural network model for processing to obtain hidden vectors corresponding to the diagnostic information; the hidden vectors corresponding to the diagnostic information are input into the self-attention mechanism layer for processing to obtain the weights corresponding to the hidden vectors; the weighted hidden vectors are obtained from the hidden vectors and the weights; and the prediction result of the current treatment plan of the patient to be processed is obtained from the weighted hidden vectors. In this way, during generation of the treatment plan, each hidden vector is assigned, through the autonomous learning of the self-attention mechanism, a weight value corresponding to the symptom level of the diagnostic information to which it corresponds, so that the treatment plan is generated according to the severity of the patient's symptoms, which improves the accuracy of the generated treatment plan.
In addition, the present invention also proposes a device for generating a treatment plan based on deep learning, which includes a memory, a processor, and a program for generating a treatment plan based on deep learning that is stored on the memory and executable on the processor; when the processor executes the program, the steps of the method for generating a treatment plan based on deep learning described in the above embodiments are implemented.
In addition, the present invention also proposes a computer-readable storage medium, characterized in that the computer-readable storage medium includes a program for generating a treatment plan based on deep learning which, when executed by a processor, implements the steps of the method for generating a treatment plan based on deep learning described in the above embodiments.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions for causing a terminal device (which may be a television, a mobile phone, a computer, a device for generating a treatment plan based on deep learning, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811407145.8A CN109637669B (en) | 2018-11-22 | 2018-11-22 | Method, device and storage medium for generating treatment plan based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811407145.8A CN109637669B (en) | 2018-11-22 | 2018-11-22 | Method, device and storage medium for generating treatment plan based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109637669A CN109637669A (en) | 2019-04-16 |
CN109637669B true CN109637669B (en) | 2023-07-18 |
Family
ID=66068934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811407145.8A Active CN109637669B (en) | 2018-11-22 | 2018-11-22 | Method, device and storage medium for generating treatment plan based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109637669B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110176311A (en) * | 2019-05-17 | 2019-08-27 | 北京印刷学院 | A kind of automatic medical proposal recommending method and system based on confrontation neural network |
CN110110059B (en) * | 2019-05-20 | 2021-06-29 | 挂号网(杭州)科技有限公司 | Medical dialogue system intention identification and classification method based on deep learning |
CN110880362B (en) * | 2019-11-12 | 2022-10-11 | 南京航空航天大学 | Large-scale medical data knowledge mining and treatment scheme recommending system |
CN111192693B (en) * | 2019-12-19 | 2021-07-27 | 山东大学 | A method and system for diagnosis coding correction based on drug combination |
CN111341437B (en) * | 2020-02-21 | 2022-02-11 | 山东大学齐鲁医院 | Digestive tract disease judgment auxiliary system based on tongue image |
CN111815487B (en) * | 2020-06-28 | 2024-02-27 | 珠海中科先进技术研究院有限公司 | Deep learning-based health education assessment method, device and medium |
CN111701150B (en) * | 2020-07-02 | 2022-06-17 | 中国科学院苏州生物医学工程技术研究所 | Intelligent light diagnosis and treatment equipment |
CN114649071A (en) * | 2020-12-18 | 2022-06-21 | 中电药明数据科技(成都)有限公司 | Real world data-based peptic ulcer treatment scheme prediction system |
TWI825467B (en) * | 2021-08-23 | 2023-12-11 | 緯創資通股份有限公司 | Data analysis system and data analysis method |
CN115115620B (en) * | 2022-08-23 | 2022-12-13 | 安徽中医药大学 | Pneumonia lesion simulation method and system based on deep learning |
CN116013503B (en) * | 2022-12-27 | 2024-02-20 | 北京大学长沙计算与数字经济研究院 | Dental treatment plan determination method, electronic device and storage medium |
CN116407176A (en) * | 2023-04-14 | 2023-07-11 | 广州奈瑞儿医疗器械有限公司 | A treatment method and device for resetting facial fat pad |
CN116798630B (en) * | 2023-07-05 | 2024-03-08 | 广州视景医疗软件有限公司 | Myopia physiotherapy compliance prediction method, device and medium based on machine learning |
CN119964811A (en) * | 2025-04-08 | 2025-05-09 | 成都市双流区妇幼保健院 | An intelligent obstetric nursing screening auxiliary system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778014A (en) * | 2016-12-29 | 2017-05-31 | 浙江大学 | A kind of risk Forecasting Methodology based on Recognition with Recurrent Neural Network |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1613088A (en) * | 2001-11-02 | 2005-05-04 | 美国西门子医疗解决公司 | Patient Data Mining for Cardiac Screening |
AU2011207344B2 (en) * | 2010-01-21 | 2015-05-21 | Asthma Signals, Inc. | Early warning method and system for chronic disease management |
DE102014209627B3 (en) * | 2014-05-21 | 2015-10-29 | Cytocentrics Bioscience Gmbh | In-vitro diagnostics for model-based therapy planning |
US10046177B2 (en) * | 2014-06-18 | 2018-08-14 | Elekta Ab | System and method for automatic treatment planning |
US10299735B2 (en) * | 2014-08-22 | 2019-05-28 | Pulse Tectonics Llc | Automated diagnosis based at least in part on pulse waveforms |
CN107145746A (en) * | 2017-05-09 | 2017-09-08 | 北京大数医达科技有限公司 | The intelligent analysis method and system of a kind of state of an illness description |
CN108717866B (en) * | 2018-04-03 | 2022-10-11 | 中国医学科学院肿瘤医院 | Method, device, equipment and storage medium for predicting radiotherapy plan dose distribution |
CN108766563A (en) * | 2018-05-25 | 2018-11-06 | 戴建荣 | Radiotherapy prediction of result method and system based on dosage group |
- 2018-11-22: CN application CN201811407145.8A, patent CN109637669B (en), status: Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778014A (en) * | 2016-12-29 | 2017-05-31 | 浙江大学 | A kind of risk Forecasting Methodology based on Recognition with Recurrent Neural Network |
Also Published As
Publication number | Publication date |
---|---|
CN109637669A (en) | 2019-04-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
- PB01 | Publication | |
- SE01 | Entry into force of request for substantive examination | |
- GR01 | Patent grant | |