
CN112819811B - Image analysis method and related device, electronic device, and storage medium - Google Patents


Info

Publication number
CN112819811B
CN112819811B (granted publication of application CN202110209414.5A)
Authority
CN
China
Prior art keywords
image
detection
region
bone
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110209414.5A
Other languages
Chinese (zh)
Other versions
CN112819811A (en)
Inventor
谢帅宁
赵亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shangtang Shancui Medical Technology Co ltd
Original Assignee
Shanghai Shangtang Shancui Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shangtang Shancui Medical Technology Co ltd filed Critical Shanghai Shangtang Shancui Medical Technology Co ltd
Priority to CN202110209414.5A priority Critical patent/CN112819811B/en
Publication of CN112819811A publication Critical patent/CN112819811A/en
Application granted granted Critical
Publication of CN112819811B publication Critical patent/CN112819811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image analysis method, a related apparatus, an electronic device, and a storage medium. The image analysis method includes: acquiring a medical image of an object to be detected; performing feature extraction on the medical image to obtain a feature image; performing a first detection on the feature image to obtain a first detection result, where the first detection result indicates the likelihood that a target region exists in the medical image; and performing a second detection on the feature image based on the first detection result to obtain a second detection result, where the second detection result includes attribute information of the target region. The scheme can improve the speed and accuracy of image analysis. In particular, it can be applied to detecting a fracture region in the medical image, improving the speed and accuracy of fracture region detection.

Description

Image analysis method and related device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image analysis method and related apparatus, electronic device, and storage medium.
Background
Based on medical images such as CT (Computed Tomography) scans, medical staff can analyze a target region such as a fracture region according to clinical knowledge and experience. For example, a healthcare worker may determine whether a fracture exists in the medical image, the type of fracture, and so on. However, manual image analysis is often slow and easily affected by subjective factors, making it difficult to ensure analysis accuracy. In view of this, how to improve the speed and accuracy of image analysis is a problem to be solved.
Disclosure of Invention
The application provides an image analysis method, a related device, electronic equipment and a storage medium.
A first aspect of the application provides an image analysis method, including: acquiring a medical image of an object to be detected; performing feature extraction on the medical image to obtain a feature image; performing a first detection on the feature image to obtain a first detection result, where the first detection result indicates the likelihood that a target region exists in the medical image; and performing a second detection on the feature image based on the first detection result to obtain a second detection result, where the second detection result includes attribute information of the target region.
Therefore, by acquiring a medical image of the object to be detected and extracting features from it to obtain a feature image, then performing the first detection on the feature image to determine the likelihood that a target region exists, and performing the second detection based on the first detection result to obtain the attribute information of the target region, manual image analysis can be avoided, which helps improve analysis speed and accuracy. Furthermore, since both the first detection (of the likelihood that the target region exists) and the second detection (of the attribute information of the target region) are performed on the same feature image, that is, the two detection tasks share the feature image, the amount of computation is reduced compared with performing the two tasks completely independently. In this way, both the speed and the accuracy of image analysis can be improved.
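The shared-feature structure described above can be sketched as follows. This is a minimal illustration only: simple linear maps stand in for the sub-networks (the patent's actual sub-networks are convolutional), and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three sub-networks: a shared feature
# extractor and two detection heads operating on the same feature image.
W_feat = rng.normal(size=(64, 16))   # feature extraction sub-network
W_det1 = rng.normal(size=(16, 1))    # first detection head (target presence)
W_det2 = rng.normal(size=(16, 4))    # second detection head (attribute info)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def analyze(image_vec):
    """Run both detections; the feature image is computed once and shared."""
    feat = np.tanh(image_vec @ W_feat)   # shared feature image
    p_target = sigmoid(feat @ W_det1)    # first detection result (likelihood)
    attr_logits = feat @ W_det2          # second detection result (attributes)
    return p_target, attr_logits

image = rng.normal(size=(64,))
p, attrs = analyze(image)
print(p.shape, attrs.shape)
```

Because `feat` is computed once, the two detection tasks avoid duplicating the (typically dominant) feature-extraction cost.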
The target region includes a fracture region. Before feature extraction is performed on the medical image, the method includes performing a third detection on the medical image to obtain a bone region containing bone pixels. Performing feature extraction on the medical image to obtain the feature image then includes: performing region extraction on the medical image based on the bone pixels in the bone region to obtain at least one region image, where the center of each region image is a bone pixel; and performing feature extraction on the at least one region image respectively to obtain the feature image.
Therefore, where the target region includes a fracture region, a third detection is performed on the medical image before feature extraction to obtain a bone region containing bone pixels; region extraction is then performed based on those bone pixels to obtain at least one region image centered on a bone pixel, and feature extraction is performed on each region image. Feature extraction can thus be focused on the bone region, which reduces interference from other regions in fracture detection, reduces missed detection of slight fractures, and further improves fracture detection accuracy.
Before region extraction is performed on the medical image based on the bone pixels in the bone region, the method further includes: acquiring the bone pixels located on the skeleton line of the bone region as candidate pixels, and selecting at least one candidate pixel as a target pixel, where the center of each region image is a target pixel.
Therefore, by taking the bone pixels on the skeleton line of the bone region as candidate pixels and selecting at least one of them as a target pixel on which a region image is centered, feature extraction can be focused on the bone region, which improves fracture detection accuracy; at the same time, the number of region images is reduced, which lowers the amount of computation and improves fracture detection speed.
Before feature extraction is performed on the at least one region image, the method further includes performing a fourth detection on the at least one region image respectively to obtain a bone abnormality condition for each region image. Performing feature extraction on the at least one region image to obtain the feature image then includes performing feature extraction only on the region images whose bone abnormality condition meets a preset condition.
Therefore, by first obtaining a bone abnormality condition for each region image and extracting features only from the region images whose bone abnormality condition meets the preset condition, coarse-to-fine fracture detection is achieved: region images that do not meet the preset condition are screened out in a preliminary pass, and only the remaining region images undergo subsequent fine screening, which further improves fracture detection speed.
The bone abnormality condition includes a bone abnormality score, and the preset condition includes the bone abnormality score being greater than a preset score threshold.
Therefore, by setting the bone abnormality condition to include a bone abnormality score and the preset condition to require the score to exceed a preset threshold, detection on region images with a low degree of bone abnormality can be skipped, and subsequent fine screening is performed only on region images with a high degree of bone abnormality, which improves fracture detection speed.
Performing feature extraction on the medical image to obtain the feature image includes performing the feature extraction using a feature extraction sub-network of an image analysis model; and/or, performing the first detection on the feature image to obtain the first detection result includes performing the first detection using a first detection sub-network of the image analysis model; and/or, performing the second detection on the feature image to obtain the second detection result includes performing the second detection using a second detection sub-network of the image analysis model.
Therefore, using the feature extraction sub-network of the image analysis model can improve the efficiency of feature extraction, using the first detection sub-network can improve the efficiency of the first detection, and using the second detection sub-network can improve the efficiency of the second detection.
The training method of the image analysis model includes: acquiring a sample medical image, where the sample medical image is annotated with a first label and a second label, the first label indicating whether a sample target region exists in the sample medical image, and the second label indicating sample attribute information of the sample target region; performing feature extraction on the sample medical image using the feature extraction sub-network of the image analysis model to obtain a sample feature image; performing the first detection on the sample feature image using the first detection sub-network to obtain a first prediction result, and performing the second detection on the sample feature image using the second detection sub-network to obtain a second prediction result, where the first prediction result indicates the likelihood that the sample target region exists in the sample medical image and the second prediction result includes predicted attribute information of the sample target region; and adjusting the network parameters of the image analysis model based on the difference between the first label and the first prediction result and the difference between the second label and the second prediction result.
Therefore, by acquiring a sample medical image annotated with a first label (whether a sample target region exists) and a second label (sample attribute information of the sample target region), extracting a sample feature image using the feature extraction sub-network, obtaining a first prediction result with the first detection sub-network and a second prediction result with the second detection sub-network, and adjusting the network parameters of the image analysis model based on the difference between the first label and the first prediction result and the difference between the second label and the second prediction result, the first detection sub-network and the second detection sub-network are trained jointly during training of the image analysis model. The two detection tasks can thus supervise each other, which helps improve the performance of both.
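The joint adjustment based on the two label/prediction differences can be sketched as combining two losses into one training objective. This is a hedged illustration with assumed loss choices (binary cross-entropy for presence, cross-entropy for attribute classes); the patent does not specify the loss functions.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy: difference between first label and first prediction."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def ce(logits, label):
    """Cross-entropy over attribute classes: difference between second label and
    second prediction."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(p_pred, y_presence, attr_logits, attr_label):
    # The two differences are combined into one objective, so a single
    # gradient step updates the shared feature extractor and both
    # detection sub-networks together.
    return bce(p_pred, y_presence) + ce(attr_logits, attr_label)

loss = joint_loss(0.8, 1.0, np.array([2.0, 0.5, -1.0]), 0)
print(float(loss))
```

In practice the two terms are often weighted; equal weighting here is an assumption made for simplicity.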
The first detection result includes a probability value that the target region exists in the medical image; the greater the probability value, the higher the likelihood that the target region exists.
Therefore, by setting the first detection result to be a probability value that the target region exists in the medical image, where a greater probability value indicates a higher likelihood, the likelihood of the target region can be quantified, making it convenient for a user to judge whether the target region exists according to the probability value.
A second aspect of the application provides an image analysis apparatus, including an image acquisition module, a feature extraction module, a first detection module, and a second detection module. The image acquisition module is configured to acquire a medical image of an object to be detected; the feature extraction module is configured to perform feature extraction on the medical image to obtain a feature image; the first detection module is configured to perform a first detection on the feature image to obtain a first detection result, where the first detection result indicates the likelihood that a target region exists in the medical image; and the second detection module is configured to perform a second detection on the feature image based on the first detection result to obtain a second detection result, where the second detection result includes attribute information of the target region.
A third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image analysis method of the first aspect.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the image analysis method of the first aspect described above.
According to the above scheme, a medical image of the object to be detected is acquired and feature extraction is performed on it to obtain a feature image; a first detection is performed on the feature image to obtain a first detection result indicating the likelihood that a target region exists in the medical image; and a second detection is performed on the feature image based on the first detection result to obtain a second detection result including attribute information of the target region. Manual image analysis can thus be avoided, which helps improve the speed and accuracy of image analysis. In addition, since both the first detection (of the likelihood that the target region exists) and the second detection (of the attribute information of the target region) are performed on the feature image, that is, the two detection tasks share the feature image, the amount of computation is reduced compared with performing the two tasks completely independently. In this way, both the speed and the accuracy of image analysis can be improved.
Drawings
FIG. 1 is a flow chart of an embodiment of an image analysis method according to the present application;
FIG. 2 is a schematic diagram of a framework of one embodiment of an image analysis model;
FIG. 3 is a flow chart of another embodiment of the image analysis method of the present application;
FIG. 4 is a flow diagram of one embodiment of a training process for an image analysis model;
FIG. 5 is a schematic diagram of an embodiment of an image analysis apparatus according to the present application;
FIG. 6 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 7 is a schematic diagram of a frame of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details such as particular system architectures, interfaces, and techniques are set forth in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of an image analysis method according to the present application. Specifically, the method may include the steps of:
step S11, acquiring a medical image of the object to be tested.
In one implementation scenario, the object to be detected may differ according to the actual application, which is not limited herein. For example, in scenarios such as injury assessment, insurance reimbursement, or sentencing, the object to be detected may be a victim; in a physical examination scenario, the object to be detected may be the person being examined; further examples are not given here. Moreover, the object to be detected is not limited to humans and may include zoo animals, pets, and the like.
In one implementation scenario, the target area to be detected in the medical image may include, but is not limited to, a fracture area, and may be specifically set according to the actual application needs, which is not limited herein.
In one implementation scenario, taking the case that the target area includes a fracture area as an example, the medical image may also be an image of the object to be detected at a different location according to the location of the target area to be detected. For example, the medical image may be a chest medical image of the subject, in which case the fracture may be a rib fracture, i.e. rib fracture detection may be performed using the chest medical image, or the medical image may be a hand medical image of the subject, in which case the fracture may be a hand bone fracture, i.e. hand bone fracture detection may be performed using the hand medical image, and the like, and so on, which is not exemplified here.
In one implementation, the medical image may include, but is not limited to, a CT image or an MR (Magnetic Resonance) image.
In another implementation scenario, the medical image may be a three-dimensional image or may be a two-dimensional image, for example, the two-dimensional image may be an axial slice of a CT, without limitation.
In a specific implementation scenario, the medical image may be a three-dimensional image, which avoids the loss of spatial information that occurs when only axial slices are used; this helps capture changes across consecutive slices during detection and improves detection accuracy.
And step S12, extracting the characteristics of the medical image to obtain a characteristic image.
In one implementation scenario, in order to improve the detection efficiency, an image analysis model may be trained in advance, and the image analysis model may include a feature extraction sub-network, so that feature extraction may be performed on the medical image by using the feature extraction sub-network of the image analysis model to obtain a feature image.
In one particular implementation scenario, the feature extraction subnetwork may include a convolutional layer (Convolutional Layer), a pooling layer (Pooling Layer).
In another specific implementation scenario, the image analysis model may be obtained by training a sample medical image, and in particular, reference may be made to the following training procedure embodiment for the image analysis model, which is not described herein in detail.
In another implementation scenario, taking an example that the target area includes a fracture area, before feature extraction is performed, a medical image may be detected to obtain a bone area including bone pixels, then, based on the bone pixels in the bone area, area extraction is performed on the medical image to obtain at least one area image, the center of the area image is the bone pixels, and feature extraction is performed on the at least one area image to obtain a feature image. Therefore, the feature extraction can be focused on the bone region, so that the interference of other regions on fracture detection can be reduced, the detection omission of slight fracture can be reduced, and the accuracy of fracture detection is further improved.
In one specific implementation scenario, the medical image may be detected based on thresholding and shape analysis to obtain a bone region containing bone pixels. Taking rib fracture detection as an example, the medical image can be converted into a binary image based on a preset threshold, thereby extracting bone structures (such as the rib cage, sternum, clavicles, and scapulae). Since the sternum and scapulae are plate-like while the ribs are tube-like, a plateness measure can then be computed based on Hessian analysis (the higher the plateness, the more likely the structure is a plate-like bone) to enhance and detect the plate-like bones; the detected sternum and scapulae are removed, leaving the rib structure.
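The initial thresholding step can be sketched as below, assuming a CT-like 3D volume in Hounsfield units. The 300 HU threshold is an illustrative assumption, not a value from the patent, and the Hessian plateness analysis is omitted.

```python
import numpy as np

# Illustrative bone threshold (assumed); cortical bone is typically
# several hundred HU, air around -1000 HU.
BONE_HU_THRESHOLD = 300

def extract_bone_mask(volume):
    """Convert a medical image to a binary image of candidate bone voxels."""
    return volume >= BONE_HU_THRESHOLD

volume = np.full((4, 4, 4), -1000.0)   # air background
volume[1:3, 1:3, 1:3] = 700.0          # a small block of bone-density voxels
mask = extract_bone_mask(volume)
print(int(mask.sum()))
```

In a real pipeline, this binary image would then be refined by the shape analysis described above before being treated as the bone region.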
In another specific implementation scenario, in order to improve the detection efficiency of the bone region, the image analysis model may further include a bone region detection sub-network, so that the bone region detection sub-network may be directly used to detect the medical image, and a bone region including bone pixel points is obtained. Specifically, the bone region detection subnetwork may include, but is not limited to, U-Net, V-Net, and the like. Still taking rib fracture detection as an example, a sample medical image marked with an actual rib region can be obtained in advance, and the sample medical image is detected by utilizing a rib region detection sub-network to obtain a predicted rib region containing rib pixel points, so that the difference between the actual rib region and the predicted rib region can be utilized to adjust network parameters of the rib region detection sub-network. Other situations can be similar and are not exemplified here.
In still another specific implementation scenario, the size of the area image is a preset size, and the preset size may be set according to the actual situation, which is not limited herein. For example, in the case where the medical image is a three-dimensional image, the region image is an image of a preset length (e.g., 10), a preset width (e.g., 10), and a preset height (e.g., 10).
In still another specific implementation scenario, before feature extraction is performed on at least one region image, bone pixel points located on a skeleton line of a bone region may be acquired first as candidate pixel points, at least one candidate pixel point is selected as a target pixel point, and a center of the region image is the target pixel point. The skeleton line of the bone region is a line where the center of each cross section of the bone region is located. In addition, one candidate pixel point may be selected as a target pixel point every a preset distance (e.g., 10) on the skeleton line. Therefore, feature extraction can be focused on the bone region, fracture detection accuracy can be improved, and at the same time, the number of region images can be reduced, so that the calculated amount can be reduced, and the fracture detection speed can be improved.
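The skeleton-line sampling and centered cropping can be sketched as follows, using a 2D image for brevity. The skeleton points are supplied directly here (in practice they would come from skeletonizing the bone region), and the patch size is an assumed value; the stride of 10 matches the preset distance mentioned above.

```python
import numpy as np

PATCH = 5      # preset region-image size (illustrative)
STRIDE = 10    # preset distance between selected skeleton points

def select_targets(skeleton_points, stride=STRIDE):
    """Pick one candidate pixel as a target pixel every `stride` points."""
    return skeleton_points[::stride]

def crop_regions(image, targets, size=PATCH):
    """Extract one region image per target pixel, centered on that pixel."""
    half = size // 2
    return [image[r - half:r + half + 1, c - half:c + half + 1]
            for r, c in targets]

image = np.arange(40 * 40, dtype=float).reshape(40, 40)
skeleton = [(20, c) for c in range(5, 35)]   # a horizontal skeleton line
targets = select_targets(skeleton)
regions = crop_regions(image, targets)
print(len(regions), regions[0].shape)
```

Sampling every `STRIDE` points reduces 30 skeleton pixels to 3 region images, which is the computation saving the text describes.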
In another specific implementation scenario, before feature extraction is performed on the at least one region image, the at least one region image may first be detected to obtain a bone abnormality condition for each region image, so that feature extraction is performed only on the region images whose bone abnormality condition meets a preset condition. This achieves coarse-to-fine fracture detection: region images whose bone abnormality condition meets the preset condition are screened in a preliminary pass, detection on region images that do not meet the condition is avoided, and the screened region images then undergo subsequent fine screening, which further improves fracture detection speed. Specifically, conditions that deviate from normal bone, such as fracture, bone deformity, and hyperosteogeny, count as bone abnormalities. In practical application, the bone abnormality condition may include a bone abnormality score, where a greater score indicates a higher degree of abnormality, and the preset condition may be that the bone abnormality score is greater than a preset score threshold. In addition, to improve the efficiency of bone abnormality detection, the image analysis model may further include a bone abnormality detection sub-network, which can be used to detect each region image and obtain its bone abnormality condition.
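The coarse screening step can be sketched as a simple filter on abnormality scores. The threshold value and region names are illustrative assumptions.

```python
# Illustrative preset score threshold (assumed, not from the patent).
SCORE_THRESHOLD = 0.5

def screen_regions(regions_with_scores, threshold=SCORE_THRESHOLD):
    """Keep only region images whose bone abnormality score exceeds the
    threshold, so fine-grained fracture detection runs on fewer candidates."""
    return [region for region, score in regions_with_scores if score > threshold]

candidates = [("region_a", 0.9), ("region_b", 0.2), ("region_c", 0.7)]
kept = screen_regions(candidates)
print(kept)
```

Only the kept regions proceed to feature extraction and the first and second detections, which is where the speed gain comes from.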
And step S13, performing first detection on the characteristic image to obtain a first detection result.
In an embodiment of the present disclosure, the first detection result indicates a possibility that a target region exists in the medical image. As mentioned above, the target area may be specifically set according to the actual application requirement, and may include, but not limited to, a fracture area.
In one implementation scenario, the first detection result may include a probability value of the presence of the target region in the medical image, and the greater the probability value, the higher the likelihood of the presence of the target region. Still taking the example that the target region includes a fracture region, the greater the probability value, the higher the likelihood that the fracture region exists.
In one implementation scenario, in order to improve the efficiency of the first detection, the image analysis model may further include a first detection sub-network, so that the first detection sub-network may be used to perform the first detection on the feature image, to obtain a first detection result. In particular, the first detection subnetwork may include, but is not limited to, a convolutional layer, a pooling layer, a fully-connected layer, and is not limited herein.
And step S14, based on the first detection result, performing second detection on the characteristic image to obtain a second detection result.
In an embodiment of the disclosure, the second detection result includes attribute information of the target area. Still taking the example that the target region includes a fracture region, the attribute information of the fracture region may include a fracture type in particular.
In one implementation scenario, in order to improve the efficiency of the second detection, the image analysis model may further include a second detection sub-network, so that the second detection sub-network may be used to perform the second detection on the feature image to obtain the second detection result. In particular, the second detection sub-network may include, but is not limited to, a convolutional layer, a pooling layer, and a fully-connected layer, which is not limited herein.
In another implementation scenario, the fracture types may be set according to actual application needs. For example, the fracture types may include complete fracture and incomplete fracture; further, the fracture types may include old fracture and new fracture, without limitation.
In yet another implementation scenario, still taking the example that the target region includes a fracture region, the attribute information of the target region may further include a fracture position. Specifically, the fracture position may include, but is not limited to, the center of the fracture region and the size of the fracture region. For example, if the center of the fracture region is the pixel (48, 64, 100) and the size of the fracture region is 10 x 10, the region of size 10 x 10 centered on the pixel (48, 64, 100) can be regarded as the fracture region; other cases can be deduced by analogy and are not enumerated here. Thus, given the center and size described above, the fracture region can be located.
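Turning the center-plus-size description into concrete index ranges can be sketched as below; the per-axis size of 10 for all three axes of the 3D center is an assumption for illustration, and `region_bounds` is a hypothetical helper name.

```python
def region_bounds(center, size):
    """Return per-axis (start, end) index ranges (end exclusive) for a
    region of the given size centered on `center`, as in the example
    where the fracture region is centered on the pixel (48, 64, 100)."""
    return tuple((c - s // 2, c - s // 2 + s) for c, s in zip(center, size))

# Assumed cubic size of 10 per axis around the example center pixel.
bounds = region_bounds((48, 64, 100), (10, 10, 10))
print(bounds)  # ((43, 53), (59, 69), (95, 105))
```

The resulting ranges are exactly the voxel indices one would use to crop the fracture region out of the medical image volume.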
In one implementation scenario, as described above, the first detection result may include a probability value of the target region existing in the medical image, and if the probability value is greater than a preset probability threshold, the second detection may be performed on the feature image to obtain a second detection result. In addition, in the case that the probability value is not greater than the preset probability threshold, it may be directly prompted that the target region does not exist in the medical image.
In one implementation scenario, the first detection result and the second detection result may be combined to obtain a comprehensive detection result for reference by medical staff. For example, in the case that the target region includes a fracture region: when the first detection result shows a high possibility that a fracture region exists in the medical image, a comprehensive detection result may be given further based on the fracture type A included in the second detection result, i.e., the object to be detected is likely to have a fracture of type A; when the first detection result shows a low possibility that a fracture region exists, a comprehensive detection result may be given further based on the fracture type B included in the second detection result, i.e., the fracture possibility is low, and it is recommended to rule out a fracture of type B. The specific setting may be made according to actual application requirements, which is not limited herein.
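A minimal sketch of how the two detection results might be combined into a comprehensive result; the thresholds, function name, and report wording are illustrative assumptions, not specified by the patent.

```python
def analyze(prob, fracture_type, prob_threshold=0.5):
    """Combine the first detection result (probability that a fracture
    region exists) with the second detection result (predicted fracture
    type) into a summary string for reference by medical staff."""
    if prob <= prob_threshold:
        return "no target region detected in the medical image"
    if prob > 0.8:
        return f"fracture likely; predicted type: {fracture_type}"
    return f"fracture possible; recommend ruling out type: {fracture_type}"

print(analyze(0.9, "A"))  # high probability, type A likely
print(analyze(0.6, "B"))  # moderate probability, recommend ruling out B
print(analyze(0.3, "A"))  # below threshold, no target region reported
```

The gating on `prob_threshold` mirrors the earlier description: the second detection result is only reported when the first detection result exceeds the preset probability threshold.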
Referring to fig. 2 in combination, fig. 2 is a schematic diagram of an embodiment of an image analysis model. As shown in fig. 2, the image analysis model may include a feature extraction sub-network, a first detection sub-network and a second detection sub-network, where the medical image passes through the feature extraction sub-network of the image analysis model to obtain a feature image, and the feature image is sent to the first detection sub-network and the second detection sub-network, and the first detection sub-network outputs a first detection result, and the second detection sub-network outputs a second detection result.
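The computation sharing shown in fig. 2 can be sketched as follows; the three functions are toy placeholders for the actual sub-networks, so only the structure is meaningful: the feature image is computed once and reused by both detection heads.

```python
# Toy stand-ins for the three sub-networks of the image analysis model.
# The arithmetic is arbitrary; the point is the dataflow of fig. 2.

def extract_features(medical_image):
    return [v * 2 for v in medical_image]  # placeholder "feature image"

def first_detect(feature_image):
    # Placeholder probability that a target region exists.
    return sum(feature_image) / (len(feature_image) * 10)

def second_detect(feature_image):
    # Placeholder attribute information (fracture type).
    return "old fracture" if max(feature_image) > 5 else "new fracture"

feature = extract_features([1, 2, 3])  # extracted exactly once
p = first_detect(feature)              # first head reuses the feature
attr = second_detect(feature)          # second head reuses the same feature
print(p, attr)
```

Because the two heads share one feature image, the feature extraction cost is paid once rather than once per task, which is the computation saving described above.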
According to the above scheme, a medical image of the object to be detected is acquired and feature extraction is performed on it to obtain a feature image; first detection is performed on the feature image to obtain a first detection result indicating the possibility that a target region exists in the medical image; and, based on the first detection result, second detection is performed on the feature image to obtain a second detection result including attribute information of the target region. Manual image analysis can therefore be avoided, which is beneficial to improving the speed and accuracy of image analysis. In addition, because both the first detection (concerning the possibility that the target region exists) and the second detection (concerning the attribute information of the target region) are performed on the basis of the feature image, i.e., the two detection tasks share the feature image, the amount of computation can be reduced compared with executing the two tasks entirely independently. Therefore, the speed and accuracy of image analysis can be improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating an image analysis method according to another embodiment of the application. In embodiments of the present disclosure, the target region in the medical image may specifically include a fracture region. Specifically, the method includes the following steps:
Step S31, acquiring a medical image of the object to be tested.
Reference may be made specifically to the relevant descriptions in the foregoing disclosed embodiments, and details are not repeated here.
And step S32, detecting the medical image to obtain a bone region containing bone pixel points.
The specific detection method of the bone region can be referred to the related description in the foregoing disclosed embodiments, and will not be repeated here.
In addition, as described in the foregoing disclosure, in order to improve the efficiency of bone region detection, an image analysis model may be trained in advance, and the image analysis model includes a bone region detection sub-network, so that the bone region detection sub-network may be used to detect a medical image, and a bone region including bone pixels may be obtained. For the bone region detection subnetwork, reference may be made specifically to the relevant descriptions in the foregoing disclosed embodiments, and details are not repeated here.
Step S33, acquiring bone pixel points positioned on a skeleton line of a bone region as candidate pixel points, and selecting at least one candidate pixel point as a target pixel point.
Reference may be made specifically to the relevant descriptions in the foregoing disclosed embodiments, and details are not repeated here.
And step S34, carrying out region extraction on the medical image based on the bone pixel points in the bone region to obtain at least one region image.
In the embodiment of the disclosure, the center of the region image is the target pixel point. Reference may be made specifically to the relevant descriptions in the foregoing disclosed embodiments, and details are not repeated here.
And step S35, detecting at least one area image respectively to obtain the abnormal bone condition of the area image.
In order to improve the efficiency of bone abnormality detection, the image analysis model may further include a bone abnormality detection sub-network, so that at least one region image may be detected by using the bone abnormality detection sub-network, respectively, to obtain a bone abnormality of the region image.
In a specific implementation scenario, the bone abnormality detection sub-network may include, but is not limited to, a convolutional layer, a pooling layer, and a fully-connected layer, i.e., the bone abnormality detection sub-network may be implemented with a lightweight network structure, which is beneficial to reducing network parameters as well as training difficulty and cost.
In another specific implementation scenario, a sample medical image marked with an actual bone abnormality (e.g., may be marked with an actual bone abnormality score) may be obtained in advance, so that the sample medical image may be detected by using the bone abnormality detection sub-network to obtain a predicted bone abnormality (e.g., a predicted bone abnormality score), and further, a network parameter of the bone abnormality detection sub-network may be adjusted by using a difference between the actual bone abnormality and the predicted bone abnormality.
In yet another specific implementation scenario, the bone abnormality detection sub-network may also be independent of the image analysis model, and in addition, the bone region detection sub-network may also be independent of the image analysis model, i.e., as shown in fig. 2, the image analysis model may include only the feature extraction sub-network, the first detection sub-network, and the second detection sub-network.
And step S36, performing feature extraction on the region images whose bone abnormality condition meets the preset condition, to obtain a feature image.
As described in the foregoing disclosed embodiments, conditions that deviate from normal bone, such as fracture, bone deformity, and hyperosteogeny, are all bone abnormalities. In the embodiment of the disclosure, the bone abnormality condition includes a bone abnormality score, where a greater bone abnormality score indicates a higher degree of bone abnormality, and the preset condition includes the bone abnormality score being greater than a preset score threshold.
In one implementation scenario, the preset score threshold may be set according to actual application needs. For example, in the case of a relatively low requirement for fracture detection accuracy, the preset score threshold may be set relatively large so that only region images with a large bone abnormality score are screened in, i.e., region images with a small bone abnormality score are not subjected to subsequent detection; or, in the case of a relatively high requirement for fracture detection accuracy, the preset score threshold may be set relatively small so that even region images with a slightly large bone abnormality score are all screened in for subsequent detection, which is not limited herein.
And step S37, performing first detection on the characteristic image to obtain a first detection result, and performing second detection on the characteristic image based on the first detection result to obtain a second detection result.
In an embodiment of the present disclosure, the first detection result indicates a possibility that a target region exists in the medical image, and the second detection result includes attribute information of the target region. Reference may be made specifically to the relevant descriptions in the foregoing disclosed embodiments, and details are not repeated here.
Different from the foregoing embodiments, a medical image of the object to be detected is acquired and detected to obtain a bone region containing bone pixel points; bone pixel points located on the skeleton line of the bone region are acquired as candidate pixel points, and at least one candidate pixel point is selected as a target pixel point; based on the bone pixel points in the bone region, region extraction is performed on the medical image to obtain at least one region image, with the center of each region image being a target pixel point, so that subsequent feature extraction is focused on the bone region, which is beneficial to reducing interference from other regions on fracture detection. On this basis, the at least one region image is detected to obtain its bone abnormality condition, and feature extraction is performed only on the region images whose bone abnormality condition meets the preset condition to obtain a feature image, which improves the fracture detection speed. Finally, first detection is performed on the feature image to obtain a first detection result, and, based on the first detection result, second detection is performed on the feature image to obtain a second detection result. Therefore, the above manner can improve fracture detection speed and accuracy.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of a training process of an image analysis model. The method specifically comprises the following steps:
step S41, acquiring a sample medical image.
In an embodiment of the disclosure, the sample medical image is marked with a first mark and a second mark, where the first mark represents whether a sample target region exists in the sample medical image, and the second mark represents sample attribute information of the sample target region. For the meaning of the sample medical image, reference may be made to the related description of the medical image in the foregoing disclosed embodiments, which is not repeated here. In addition, the sample target region may be set according to actual application requirements and may include, for example, a sample fracture region. Taking the sample target region including a sample fracture region as an example, the sample attribute information may include the sample fracture type of the sample fracture region, etc., which is not limited herein.
And S42, carrying out feature extraction on the sample medical image by utilizing a feature extraction sub-network of the image analysis model to obtain a sample feature image.
In particular, reference may be made to the foregoing description of the feature extraction of the medical image by using the feature extraction sub-network in the foregoing disclosed embodiments, which is not repeated herein.
And step S43, performing first detection on the sample characteristic image by using a first detection sub-network of the image analysis model to obtain a first prediction result, and performing second detection on the characteristic image by using a second detection sub-network of the image analysis model to obtain a second prediction result.
In an embodiment of the present disclosure, the first prediction result indicates the likelihood that a sample target region exists in the sample medical image, and the second prediction result includes prediction attribute information of the sample target region. In one implementation scenario, the first prediction result may specifically include a first prediction probability value for the sample target region, where a higher first prediction probability value indicates a greater likelihood that the sample target region exists. In another implementation scenario, taking the sample target region including a sample fracture region as an example, the prediction attribute information of the sample fracture region included in the second prediction result may specifically include a second prediction probability value for at least one fracture type; for example, it may include second prediction probability values for a new fracture and an old fracture, or second prediction probability values for a complete fracture and an incomplete fracture, which is not limited herein.
In particular, in the foregoing disclosure embodiment, the first detection is performed by using the first detection sub-network, and the second detection is performed by using the second detection sub-network, which are not described herein.
Step S44, adjusting network parameters of the image analysis model based on the difference between the first mark and the first prediction result and the difference between the second mark and the second prediction result.
In one implementation scenario, taking the sample target region including a sample fracture region as an example, the first mark may indicate whether the sample fracture region exists in the sample medical image; specifically, a first actual probability value of 100% may indicate that the sample fracture region exists, and a first actual probability value of 0% may indicate that it does not. In this case, a first loss value between the first actual probability value and the first predicted probability value may be calculated based on a cross-entropy loss function.
In another implementation scenario, taking the sample target region including a sample fracture region as an example, the second mark may include the fracture type of the sample fracture region; specifically, a second actual probability value of 100% may indicate that the corresponding fracture type exists, and a second actual probability value of 0% may indicate that it does not. For example, a second actual probability value of 0% for a new fracture together with 100% for an old fracture indicates that the fracture type of the sample fracture region is an old fracture, and a second actual probability value of 0% for a complete fracture together with 100% for an incomplete fracture indicates that the fracture type is an incomplete fracture, which is not limited herein. In this case, a second loss value between the second actual probability value and the second predicted probability value may be calculated based on the cross-entropy loss function.
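For a single binary label, the cross-entropy losses mentioned above can be written as sketched below; `binary_cross_entropy` is a hypothetical helper, and the epsilon clamp is an assumption added for numerical safety.

```python
import math

def binary_cross_entropy(actual, predicted, eps=1e-7):
    """Cross-entropy loss between an actual probability (1.0 = present,
    0.0 = absent) and a predicted probability, as used for both the
    first and second loss values."""
    predicted = min(max(predicted, eps), 1 - eps)  # avoid log(0)
    return -(actual * math.log(predicted)
             + (1 - actual) * math.log(1 - predicted))

# An old fracture labeled with actual probability 100% vs. a predicted
# probability of 0.9:
loss = binary_cross_entropy(1.0, 0.9)
print(round(loss, 4))  # 0.1054
```

The loss shrinks toward zero as the predicted probability approaches the actual one, which is what drives the parameter adjustment in step S44.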
In yet another implementation scenario, the first loss value and the second loss value may be weighted by a first preset weight and a second preset weight, respectively, to obtain a weighted loss value, and the network parameters of the image analysis model may be adjusted based on the weighted loss value.
In a specific implementation scenario, the first preset weight and the second preset weight may be set according to actual application needs. For example, in the case where detecting whether a fracture region exists is of more interest, the first preset weight may be set greater than the second preset weight, e.g., the first preset weight set to 0.6 and the second preset weight set to 0.4; or, in the case where fracture type prediction is of more interest, the first preset weight may be set less than the second preset weight, e.g., the first preset weight set to 0.4 and the second preset weight set to 0.6, which is not limited herein.
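The weighting of the two loss values can be sketched as follows, using the example weights 0.6 and 0.4 from the text; `weighted_loss` is a hypothetical helper name.

```python
def weighted_loss(first_loss, second_loss, w1=0.6, w2=0.4):
    """Combine the first and second loss values with the preset weights
    (example values 0.6 and 0.4 from the text) into one training loss."""
    return w1 * first_loss + w2 * second_loss

print(weighted_loss(0.5, 0.25))  # 0.6*0.5 + 0.4*0.25 = 0.4
```

Raising `w1` makes training emphasize the first detection task (whether a fracture region exists); raising `w2` emphasizes the second task (fracture type prediction).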
In another specific implementation scenario, the parameters of the image analysis model may be adjusted using the weighted loss value by means of stochastic gradient descent (SGD), batch gradient descent (BGD), or mini-batch gradient descent (MBGD). Batch gradient descent updates the parameters using all samples at each iteration, stochastic gradient descent updates the parameters using one sample at each iteration, and mini-batch gradient descent updates the parameters using one batch of samples at each iteration; details are not repeated here.
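A toy illustration of mini-batch gradient descent on a one-parameter least-squares problem; the function name, learning rate, batch size, and data are all assumptions, shown only to make concrete the per-iteration batch choice that distinguishes the three variants above.

```python
import random

def minibatch_gd(samples, grad, param, lr=0.1, batch_size=2, steps=50):
    """Mini-batch gradient descent: each iteration updates the parameter
    using the average gradient over one batch of samples (batch gradient
    descent would use all samples per step; SGD would use one)."""
    random.seed(0)  # fixed seed for reproducibility
    for _ in range(steps):
        batch = random.sample(samples, batch_size)
        g = sum(grad(param, s) for s in batch) / batch_size
        param -= lr * g
    return param

# Toy problem: minimize the mean of (param - s)^2 over the samples,
# whose gradient per sample is 2 * (param - s).
samples = [1.0, 2.0, 3.0, 4.0]
fitted = minibatch_gd(samples, lambda p, s: 2 * (p - s), param=0.0)
print(round(fitted, 1))  # converges near the sample mean 2.5
```

Switching `batch_size` to `len(samples)` turns this into batch gradient descent, and to 1 into stochastic gradient descent, without changing anything else.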
In yet another specific implementation scenario, a training end condition may also be set, and when the training end condition is satisfied, training of the image analysis model may be ended. Specifically, the training end condition may include either the loss value being less than a preset loss threshold or the current number of training iterations reaching a preset count threshold (e.g., 500 or 1000), which is not limited herein.
Different from the foregoing embodiments, a sample medical image marked with a first mark and a second mark is acquired, where the first mark represents whether a sample target region exists in the sample medical image and the second mark represents sample attribute information of the sample target region. The feature extraction sub-network of the image analysis model is used to perform feature extraction on the sample medical image to obtain a sample feature image; the first detection sub-network of the image analysis model is used to perform first detection on the sample feature image to obtain a first prediction result indicating the possibility that a sample target region exists in the sample medical image; and the second detection sub-network of the image analysis model is used to perform second detection on the sample feature image to obtain a second prediction result including prediction attribute information of the sample target region. Further, the network parameters of the image analysis model are adjusted based on the difference between the first mark and the first prediction result and the difference between the second mark and the second prediction result, so that the first detection sub-network and the second detection sub-network are jointly trained during the training of the image analysis model, which is beneficial to enabling the first detection task and the second detection task to promote each other and to improving the accuracy of image analysis.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a frame of an image analysis device 50 according to an embodiment of the application. The image analysis device 50 includes an image acquisition module 51, a feature extraction module 52, a first detection module 53, and a second detection module 54. The image acquisition module 51 is configured to acquire a medical image of an object to be detected; the feature extraction module 52 is configured to perform feature extraction on the medical image to obtain a feature image; the first detection module 53 is configured to perform first detection on the feature image to obtain a first detection result, where the first detection result represents the possibility that a target region exists in the medical image; and the second detection module 54 is configured to perform second detection on the feature image based on the first detection result to obtain a second detection result, where the second detection result includes attribute information of the target region.
According to the above scheme, a medical image of the object to be detected is acquired and feature extraction is performed on it to obtain a feature image; first detection is performed on the feature image to obtain a first detection result indicating the possibility that a target region exists in the medical image; and, based on the first detection result, second detection is performed on the feature image to obtain a second detection result including attribute information of the target region. Manual image analysis can therefore be avoided, which is beneficial to improving the speed and accuracy of image analysis. In addition, because both the first detection (concerning the possibility that the target region exists) and the second detection (concerning the attribute information of the target region) are performed on the basis of the feature image, i.e., the two detection tasks share the feature image, the amount of computation can be reduced compared with executing the two tasks entirely independently. Therefore, the speed and accuracy of image analysis can be improved.
In some disclosed embodiments, the target region includes a fracture region, and the image analysis device 50 further includes a third detection module configured to perform third detection on the medical image to obtain a bone region containing bone pixel points. The feature extraction module 52 includes a region extraction sub-module configured to perform region extraction on the medical image based on the bone pixel points in the bone region to obtain at least one region image, where the center of the region image is a bone pixel point, and a feature extraction sub-module configured to perform feature extraction on the at least one region image respectively to obtain the feature image.
Different from the foregoing embodiments, the target region includes a fracture region, and third detection is performed on the medical image before feature extraction to obtain a bone region containing bone pixel points; based on the bone pixel points in the bone region, region extraction is performed on the medical image to obtain at least one region image whose center is a bone pixel point, and feature extraction is then performed on the at least one region image to obtain the feature image. Feature extraction is thus focused on the bone region, which can reduce interference from other regions on fracture detection, reduce missed detection of slight fractures, and further improve fracture detection accuracy.
In some disclosed embodiments, the feature extraction module 52 further includes a pixel extraction sub-module configured to acquire bone pixel points located on the skeleton line of the bone region as candidate pixel points and to select at least one candidate pixel point as a target pixel point, where the center of the region image is the target pixel point.
Different from the foregoing embodiment, by acquiring the bone pixel points located on the skeleton line of the bone region as candidate pixel points, and selecting at least one candidate pixel point as a target pixel point, and the center of the region image as the target pixel point, feature extraction can be focused on the bone region, and fracture detection accuracy can be improved, and at the same time, the number of region images can be reduced, thereby reducing the calculation amount and improving the fracture detection speed.
In some disclosed embodiments, the feature extraction module 52 further includes an anomaly detection sub-module, configured to perform fourth detection on at least one area image respectively, to obtain a bone anomaly condition of the area image, and the feature extraction sub-module is specifically configured to perform feature extraction on the area image in which the bone anomaly condition meets a preset condition, to obtain a feature image.
Different from the foregoing embodiments, before feature extraction is performed on the at least one region image, fourth detection is performed on the at least one region image to obtain the bone abnormality condition of each region image, and feature extraction is performed only on the region images whose bone abnormality condition meets the preset condition to obtain the feature image. This realizes "coarse-to-fine" fracture detection: region images whose bone abnormality condition meets the preset condition are screened out in a preliminary pass, so that region images that do not meet the condition need not be detected further, and the screened region images then undergo subsequent fine detection, which is beneficial to further improving the fracture detection speed.
In some disclosed embodiments, the bone abnormality comprises a bone abnormality score and the preset condition comprises the bone abnormality score being greater than a preset score threshold.
Different from the embodiment, the bone abnormality is set to include the bone abnormality score, and the preset condition is set to include that the bone abnormality score is larger than the preset score threshold, so that detection of an area image with low bone abnormality degree can be avoided, and subsequent fine screening of an area image with high bone abnormality degree can be performed, and the fracture detection speed can be improved.
In some disclosed embodiments, the feature extraction module 52 is specifically configured to perform feature extraction on the medical image by using a feature extraction sub-network of the image analysis model to obtain a feature image, the first detection module 53 is specifically configured to perform first detection on the feature image by using a first detection sub-network of the image analysis model to obtain a first detection result, and the second detection module 54 is specifically configured to perform second detection on the feature image by using a second detection sub-network of the image analysis model to obtain a second detection result.
Different from the foregoing embodiments, performing feature extraction on the medical image by the feature extraction sub-network of the image analysis model can improve feature extraction efficiency; performing the first detection on the feature image by the first detection sub-network of the image analysis model can improve first detection efficiency; and performing the second detection on the feature image by the second detection sub-network of the image analysis model can improve second detection efficiency.
In some disclosed embodiments, the image analysis device 50 further includes a sample acquisition module configured to acquire a sample medical image, where the sample medical image is marked with a first mark and a second mark, the first mark represents whether a sample target region exists in the sample medical image, and the second mark represents sample attribute information of the sample target region; a sample extraction module configured to perform feature extraction on the sample medical image by using the feature extraction sub-network of the image analysis model to obtain a sample feature image; a sample detection module configured to perform first detection on the sample feature image by using the first detection sub-network of the image analysis model to obtain a first prediction result, and to perform second detection on the sample feature image by using the second detection sub-network of the image analysis model to obtain a second prediction result, where the first prediction result represents the possibility that the sample target region exists in the sample medical image and the second prediction result includes prediction attribute information of the sample target region; and a parameter adjustment module configured to adjust the network parameters of the image analysis model based on the difference between the first mark and the first prediction result and the difference between the second mark and the second prediction result.
Different from the foregoing embodiments, a sample medical image marked with a first mark and a second mark is acquired, where the first mark represents whether a sample target region exists in the sample medical image and the second mark represents sample attribute information of the sample target region. The feature extraction sub-network of the image analysis model is used to perform feature extraction on the sample medical image to obtain a sample feature image; the first detection sub-network is used to perform first detection on the sample feature image to obtain a first prediction result indicating the possibility that a sample target region exists in the sample medical image; and the second detection sub-network is used to perform second detection on the sample feature image to obtain a second prediction result including prediction attribute information of the sample target region. Further, the network parameters of the image analysis model are adjusted based on the difference between the first mark and the first prediction result and the difference between the second mark and the second prediction result, so that the first detection sub-network and the second detection sub-network are jointly trained during the training of the image analysis model, which is beneficial to enabling the first detection task and the second detection task to promote each other and to improving the accuracy of image analysis.
In some disclosed embodiments, the medical image is a chest medical image of the object to be measured, the target area includes a fracture area, and the attribute information includes a fracture type of the fracture area; and/or the first detection result includes a probability value of the target area existing in the medical image, wherein the greater the probability value, the higher the likelihood that the target area exists.
Different from the foregoing embodiments, setting the medical image as a chest medical image of the object to be measured, setting the target area to include a fracture area, and setting the attribute information to include the fracture type of the fracture area can facilitate fracture detection; setting the first detection result to include a probability value of the target area existing in the medical image, wherein the greater the probability value, the higher the likelihood that the target area exists, can help quantify that likelihood, so that a user can conveniently judge whether the target area exists according to the probability value.
Referring to FIG. 6, FIG. 6 is a schematic block diagram of an electronic device 60 according to an embodiment of the application. The electronic device 60 comprises a memory 61 and a processor 62 coupled to each other, the processor 62 being adapted to execute program instructions stored in the memory 61 to implement the steps of any of the image analysis method embodiments described above. In one specific implementation scenario, the electronic device 60 may include, but is not limited to, a microcomputer and a server; furthermore, the electronic device 60 may also include a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 62 is adapted to control itself and the memory 61 to implement the steps of any of the image analysis method embodiments described above. The processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 62 may be jointly implemented by a plurality of integrated circuit chips.
By means of the above scheme, the speed and accuracy of image analysis can be improved.
Referring to FIG. 7, FIG. 7 is a block diagram of a computer readable storage medium 70 according to an embodiment of the application. The computer readable storage medium 70 stores program instructions 701 capable of being executed by a processor, the program instructions 701 for implementing the steps of any of the image analysis method embodiments described above.
By means of the above scheme, the speed and accuracy of image analysis can be improved.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the foregoing method embodiments; for their specific implementation, reference may be made to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the embodiments tend to emphasize the differences between the embodiments; for their identical or similar parts, the embodiments may refer to one another, and the details are not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into modules or units is merely a logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (10)

1. An image analysis method, comprising:
Acquiring a medical image of an object to be measured;
extracting the characteristics of the medical image by utilizing a characteristic extraction sub-network of the image analysis model to obtain a characteristic image;
performing a first detection on the feature image by using a first detection sub-network of the image analysis model to obtain a first detection result, wherein the first detection result comprises a probability value of a target area existing in the medical image, and the greater the probability value, the higher the likelihood that the target area exists; and
based on the first detection result, performing a second detection on the feature image by using a second detection sub-network of the image analysis model to obtain a second detection result, wherein the second detection result comprises attribute information of the target area.
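The pipeline of claim 1, in which the probability value from the first detection informs whether the second detection is worth running, can be sketched as follows. The presence threshold and the three stand-in callables are illustrative assumptions; the claim only requires that the second detection be performed "based on the first detection result".

```python
import numpy as np

# Gate for the second detection; 0.5 is an illustrative assumption.
PRESENCE_THRESHOLD = 0.5

def analyze_image(medical_image, extract_features, detect_presence, detect_attributes):
    """Sketch of the claimed pipeline; the three callables stand in for the
    feature extraction, first detection and second detection sub-networks."""
    feature_image = extract_features(medical_image)   # feature extraction sub-network
    probability = detect_presence(feature_image)      # first detection: P(target area)
    if probability <= PRESENCE_THRESHOLD:
        return probability, None                      # likely no target area: skip second detection
    attributes = detect_attributes(feature_image)     # second detection: attribute information
    return probability, attributes
```

With dummy callables, `analyze_image(img, lambda x: x, lambda f: 0.9, lambda f: "fracture")` returns the probability together with the attribute information, while a low probability returns `None` attributes.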
2. The method of claim 1, wherein the target region comprises a fracture region, and wherein prior to feature extraction of the medical image using the feature extraction sub-network of the image analysis model, the method comprises:
performing a third detection on the medical image to obtain a bone region containing bone pixel points; and
The step of extracting the features of the medical image to obtain a feature image comprises the following steps:
extracting at least one region image from the medical image based on the bone pixel points in the bone region, wherein the center of the region image is the bone pixel point;
and respectively carrying out feature extraction on the at least one region image to obtain the feature image.
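The region-image extraction of claim 2, cropping patches from the medical image with each patch centered on a bone pixel point, might look like the following sketch. The patch side length and the zero padding at the image border are illustrative assumptions; the claim only requires that the center of each region image be a bone pixel point.

```python
import numpy as np

def extract_region_images(medical_image, bone_pixel_points, size=64):
    """Crop one region image per bone pixel point, centered on that point.
    `size` (patch side length) is an illustrative assumption."""
    half = size // 2
    # Pad so that patches centered near the image border keep a uniform shape.
    padded = np.pad(medical_image, half, mode="constant")
    regions = []
    for r, c in bone_pixel_points:
        # (r, c) in the original image maps to (r + half, c + half) in `padded`,
        # so this size x size window is centered on the bone pixel point.
        regions.append(padded[r:r + size, c:c + size])
    return regions
```

Each returned patch has uniform shape `(size, size)` and its center element equals the value at the corresponding bone pixel point in the original image.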
3. The method of claim 2, wherein prior to the extracting at least one region image from the medical image based on bone pixels in the bone region, the method further comprises:
acquiring bone pixel points located on a skeleton line of the bone region as candidate pixel points, and selecting at least one of the candidate pixel points as a target pixel point;
wherein the center of the region image is the target pixel point.
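The candidate selection of claim 3, taking bone pixel points on the skeleton line and choosing target pixel points among them, can be illustrated as follows. The skeleton mask is assumed precomputed (it could come from a thinning routine such as `skimage.morphology.skeletonize`), and sampling every `stride`-th candidate is an illustrative rule; the claim only requires selecting at least one candidate pixel point.

```python
import numpy as np

def select_target_pixels(skeleton_mask, stride=4):
    """Pick target pixel points from the candidate pixel points lying on the
    skeleton line of the bone region. `skeleton_mask` is a boolean image
    marking the skeleton line; the sampling stride is an assumption."""
    candidate_points = np.argwhere(skeleton_mask)   # candidate pixel points, row-major order
    return [tuple(int(v) for v in p) for p in candidate_points[::stride]]
```

For a mask whose skeleton line is a single row, the function returns evenly spaced points along that row.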
4. A method according to claim 2 or 3, wherein prior to said feature extraction of said at least one region image, respectively, the method further comprises:
respectively performing a fourth detection on the at least one region image to obtain a bone abnormality condition of each region image; and
The step of extracting features of the at least one region image to obtain the feature image includes:
performing feature extraction on the region image whose bone abnormality condition satisfies a preset condition to obtain the feature image.
5. The method of claim 4, wherein the bone abnormality condition includes a bone abnormality score, and the preset condition includes the bone abnormality score being greater than a preset score threshold.
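The screening of claims 4 and 5, keeping only region images whose bone abnormality score exceeds a preset score threshold before spending feature extraction on them, can be sketched as below. `score_fn` stands in for the fourth detection, and the threshold value is an illustrative assumption.

```python
def filter_regions_by_abnormality(region_images, score_fn, score_threshold=0.5):
    """Keep only region images whose bone abnormality score is greater than
    the preset score threshold. `score_fn` stands in for the fourth
    detection; the default threshold is an assumption."""
    return [region for region in region_images
            if score_fn(region) > score_threshold]
```

Regions that fail the preset condition are simply dropped, so the subsequent feature extraction runs only on suspicious regions.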
6. The method of claim 1, wherein the training of the image analysis model comprises:
Acquiring a sample medical image, wherein the sample medical image is marked with a first mark and a second mark, the first mark represents whether a sample target area exists in the sample medical image, and the second mark represents sample attribute information of the sample target area;
extracting features of the sample medical image by using a feature extraction sub-network of the image analysis model to obtain a sample feature image;
performing a first detection on the sample feature image by using the first detection sub-network of the image analysis model to obtain a first prediction result, and performing a second detection on the sample feature image by using the second detection sub-network of the image analysis model to obtain a second prediction result, wherein the first prediction result represents the likelihood that a sample target area exists in the sample medical image, and the second prediction result comprises predicted attribute information of the sample target area;
adjusting network parameters of the image analysis model based on a difference between the first mark and the first prediction result and a difference between the second mark and the second prediction result.
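The parameter adjustment of claim 6 is driven by two differences, one per detection sub-network. A minimal sketch of such a joint training loss is given below; the binary and categorical cross-entropy forms and the weights are assumptions, since the claim does not specify how the differences are measured.

```python
import numpy as np

def joint_training_loss(first_pred, first_mark, second_pred, second_mark,
                        w_first=1.0, w_second=1.0):
    """Combined loss for jointly training the two detection sub-networks.
    first_pred:  predicted probability that a sample target area exists
    first_mark:  ground-truth first mark (0 or 1)
    second_pred: predicted class probabilities over attribute types
    second_mark: index of the ground-truth attribute class
    Loss forms and weights are illustrative assumptions."""
    eps = 1e-7
    # Difference between the first mark and the first prediction result.
    first_loss = -(first_mark * np.log(first_pred + eps)
                   + (1 - first_mark) * np.log(1 - first_pred + eps))
    # Difference between the second mark and the second prediction result.
    second_loss = -np.log(second_pred[second_mark] + eps)
    return w_first * first_loss + w_second * second_loss
```

Both sub-networks and the shared feature extraction sub-network then receive gradients from this single scalar, which is what makes the training joint: better predictions on either task lower the total loss.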
7. The method according to claim 1, wherein the medical image is a chest medical image of the subject, the target region comprises a fracture region, and the attribute information comprises a fracture type of the fracture region.
8. An image analysis apparatus, comprising:
the image acquisition module is used for acquiring a medical image of the object to be detected;
the feature extraction module is used for carrying out feature extraction on the medical image by utilizing a feature extraction sub-network of the image analysis model to obtain a feature image;
the first detection module is used for performing a first detection on the feature image by using a first detection sub-network of the image analysis model to obtain a first detection result, wherein the first detection result comprises a probability value of a target area existing in the medical image, and the greater the probability value, the higher the likelihood that the target area exists;
and the second detection module is used for carrying out second detection on the characteristic image by utilizing a second detection sub-network of the image analysis model based on the first detection result to obtain a second detection result, wherein the second detection result comprises attribute information of the target area.
9. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image analysis method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the image analysis method of any of claims 1 to 7.
CN202110209414.5A 2021-02-24 2021-02-24 Image analysis method and related device, electronic device, and storage medium Active CN112819811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110209414.5A CN112819811B (en) 2021-02-24 2021-02-24 Image analysis method and related device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110209414.5A CN112819811B (en) 2021-02-24 2021-02-24 Image analysis method and related device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN112819811A CN112819811A (en) 2021-05-18
CN112819811B true CN112819811B (en) 2025-02-21

Family

ID=75865613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110209414.5A Active CN112819811B (en) 2021-02-24 2021-02-24 Image analysis method and related device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112819811B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506313A (en) * 2021-07-07 2021-10-15 上海商汤智能科技有限公司 Image processing method and related device, electronic device, storage medium
CN113610809B (en) * 2021-08-09 2024-04-05 北京百度网讯科技有限公司 Fracture detection method, device, electronic device and storage medium
CN114049936A (en) * 2021-10-22 2022-02-15 上海商汤智能科技有限公司 Image detection method and related model training method, device and apparatus
CN116091469B (en) * 2023-01-31 2023-11-21 浙江医准智能科技有限公司 Fracture detection method, device, electronic equipment and medium
CN118053567B (en) * 2024-04-16 2024-07-19 浙江创享仪器研究院有限公司 Remote monitoring image analysis method and system for medical test instrument

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520519A (en) * 2018-04-11 2018-09-11 上海联影医疗科技有限公司 A kind of image processing method, device and computer readable storage medium
CN110796656A (en) * 2019-11-01 2020-02-14 上海联影智能医疗科技有限公司 Image detection method, image detection device, computer equipment and storage medium
CN111507381A (en) * 2020-03-31 2020-08-07 上海商汤智能科技有限公司 Image recognition method and related device and equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8792698B2 (en) * 2008-02-25 2014-07-29 Hitachi Medical Corporation Medical imaging processing device, medical image processing method, and program
CN107330883A (en) * 2017-07-04 2017-11-07 南京信息工程大学 A kind of medical image lesion region positioning and sorting technique
CN108010021B (en) * 2017-11-30 2021-12-10 上海联影医疗科技股份有限公司 Medical image processing system and method
CN110738639B (en) * 2019-09-25 2024-03-01 上海联影智能医疗科技有限公司 Medical image detection result display method, device, equipment and storage medium
CN110807788B (en) * 2019-10-21 2023-07-21 腾讯科技(深圳)有限公司 Medical image processing method, medical image processing device, electronic equipment and computer storage medium
CN111402228B (en) * 2020-03-13 2021-05-07 腾讯科技(深圳)有限公司 Image detection method, device and computer readable storage medium
CN112116004B (en) * 2020-09-18 2021-11-02 推想医疗科技股份有限公司 Focus classification method and device and focus classification model training method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520519A (en) * 2018-04-11 2018-09-11 上海联影医疗科技有限公司 A kind of image processing method, device and computer readable storage medium
CN110796656A (en) * 2019-11-01 2020-02-14 上海联影智能医疗科技有限公司 Image detection method, image detection device, computer equipment and storage medium
CN111507381A (en) * 2020-03-31 2020-08-07 上海商汤智能科技有限公司 Image recognition method and related device and equipment

Also Published As

Publication number Publication date
CN112819811A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN112819811B (en) Image analysis method and related device, electronic device, and storage medium
US11468564B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN106709917B (en) Neural network model training method, device and system
KR20200095504A (en) 3D medical image analysis method and system for identifying vertebral fractures
CN110197474B (en) Image processing method and device and training method of neural network model
US9811904B2 (en) Method and system for determining a phenotype of a neoplasm in a human or animal body
CN101208042A (en) Abnormal shadow candidate detection method and abnormal shadow candidate detection device
US11741694B2 (en) Spinal fracture detection in x-ray images
CN110555860B (en) Method for labeling rib areas in medical image, electronic equipment and storage medium
CN114730451A (en) Magnetic Resonance (MR) image artifact determination for Image Quality (IQ) normalization and system health prediction using texture analysis
EP4241239A1 (en) Methods and systems for analyzing ultrasound images
CN114240874A (en) Bone age assessment method, device and computer-readable storage medium based on deep convolutional neural network and feature fusion
KR20200029218A (en) A system for measuring bone age
JP4849449B2 (en) Medical image diagnosis support device
US10307124B2 (en) Image display device, method, and program for determining common regions in images
Bhatia et al. Proposed algorithm to blotch grey matter from tumored and non tumored brain MRI images
CN111462203B (en) DR focus evolution analysis device and method
US20230005148A1 (en) Image analysis method, image analysis device, image analysis system, control program, and recording medium
Kalyan et al. Automatic Classification of human gender using X-ray images with Fuzzy C means and Convolution Neural Network
CN116823752B (en) Brain network construction method, system, medium and equipment based on mechanical parameters
CN111192679A (en) Method and device for processing image data exception and storage medium
US12243224B2 (en) Medical image analysis method, medical image analysis apparatus, and medical image analysis system for quantifying joint condition
CN113610825A (en) Method and system for identifying ribs of intraoperative image
RU2813480C1 (en) Method of processing magnetic resonance imaging images for generating training data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240611

Address after: 200233, Units 6-01, 6-49, 6-80, 6th Floor, No. 1900 Hongmei Road, Xuhui District, Shanghai

Applicant after: Shanghai Shangtang Shancui Medical Technology Co.,Ltd.

Country or region after: China

Address before: Room 1605a, building 3, 391 Guiping Road, Xuhui District, Shanghai

Applicant before: SHANGHAI SENSETIME INTELLIGENT TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant