
AU2021292112A1 - Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition - Google Patents

Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition

Info

Publication number
AU2021292112A1
Authority
AU
Australia
Prior art keywords
robot
recognition
collection
medical
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2021292112A
Inventor
Sicong TAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010556720.1A external-priority patent/CN111973228A/en
Priority claimed from CN202010780479.0A external-priority patent/CN111916195A/en
Application filed by Individual filed Critical Individual
Publication of AU2021292112A1 publication Critical patent/AU2021292112A1/en
Abandoned legal-status Critical Current

Links

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 - Clinical applications

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

An integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, the platform combining robot technology, an image acquisition apparatus (50), and medical big-data image recognition technology. A robot arm (60) communicates with an ultrasound device to achieve remote control of the arm's motion for remote-controlled and autonomous collection of image data, analysis of the collected data, intelligent identification of diseases, robot voice interaction, and similar functions. By combining a camera (20) mounted on the robot platform with the image acquisition apparatus (50), intelligent recognition of the positions of the external organs is combined with medical image recognition of the internal organs, so as to position the internal organs with dual accuracy, collect and classify images with high precision, intelligently recognize image abnormalities, and intelligently diagnose common diseases of the organs. A neural network method and a machine learning method are used to classify images and intelligently identify diseases of the organs. Voice guidance is used for autonomous remote data acquisition. Remote inquiry, acquisition, analysis, intelligent diagnosis of common organ diseases, and effective, thorough examination of abnormal symptoms are conducted.

Description

INTEGRATED ROBOT AND PLATFORM FOR ULTRASOUND IMAGE DATA ACQUISITION, ANALYSIS, AND RECOGNITION

TECHNICAL FIELD
The present invention belongs to the technical fields of artificial intelligence and medical robotics, and relates to a medical data analysis and medical image intelligent recognition system.
BACKGROUND TECHNOLOGY
In recent years, robot apparatuses have been widely applied in the medical and industrial fields to perform tasks faster and with a high degree of accuracy, and to address low physical examination efficiency, inaccurate data collection, and mistakes in medical image recognition. Recognizing ultrasound images and CT images is an effective way to diagnose disease. A robot is configured with a machine vision device, an ultrasonic examination device, a CT examination device, and other examination devices, which are used to collect physical examination data and medical images. The medical examination robot performs medical image classification and self-controlled collection, and remote-controlled collection and recognition of medical images make the process more intelligent. By classifying medical data, screening abnormal data, and intelligently feeding back abnormality and disease results, diseases are effectively detected and warnings are issued according to the recognition results, so that physical examination problems are solved more intelligently and more effectively.
SUMMARY OF INVENTION

The present invention provides a robot apparatus and a physical examination system for an artificial intelligence robot platform. The robot apparatus and the main physical examination robot system include a control function module configured to control the other joint function modules. The control function module communicates with machine vision devices, ultrasonic examination devices, CT examination devices, other examination devices, a medical data analysis model, and a medical image recognition model. This makes it more effective to collect medical images of the heart, breast, and abdominal organs; to perform self-controlled collection, remote-controlled collection, and speech recognition; and to analyze, classify, and recognize medical images of the heart, breast, and abdominal organs so as to diagnose intelligently. The robot main system, through its control function modules, is configured to control the drive devices of the other joint function modules, which communicate with the machine vision function modules, the medical image collection devices, the examination devices, and the medical data analysis model. The robot main system is also used to control the robot arms, the motion planning model, and the speech function modules used for interaction between the robot and users.
A machine vision device in the joint function modules and the sensor collection devices are used to collect physical examination data and medical images. The speech function module and the speech recognition model are used for interaction between the robot and users through speech commands, speech recognition, speech-to-text transfer, and speech induction. A laser and a movement model are used to navigate, to map, and to move in self-controlled mode. The medical data analysis function module is used to analyze the physical examination data and to detect all warning data and disease data. The medical image classification and recognition function modules are configured to classify ultrasound medical images and intra-organ ultrasound images. The medical image collection function module is used to collect ultrasound examination images, CT examination images, and other medical data and images from the physical examination apparatus. The robot arm function modules are configured to plan motion and to perform motion interaction between the robot and users during collection. The robot main system communicates with the machine vision function modules, the sensor function modules and sensor data collection models, the medical image collection function modules, the medical data analysis function module, and the medical image recognition function module. It also communicates with the motion planning function module and the speech function module to remote-control the robot arm, to perform motion interaction, and to collect medical data more intelligently.
Further, the robot main system is configured to implement main control and to perform data collection, medical image classification, speech interaction, and motion interaction between the robot main system and users, so as to implement self-controlled collection of medical data, self-controlled analysis, self-controlled disease warning, self-controlled recognition, and remote intelligent diagnosis.
Further, the machine vision devices of the joint function modules are used for face recognition and for recognition of the color marks of the collection zone, and are used to collect medical images.
Further, the speech recognition function modules perform the speech recognition function, the speech command control function, and the speech induction function.
Further, the motion planning function modules perform the motion planning function, the motion interaction function between the robot main system and users, the medical data collection function, and the image collection function, all carried out by the robot arm and the motion planning model. Outline features are extracted from the in vitro feature location model, which includes a set of features and feature positions of the color mark, shoulder, waist, lower limb joint, and face. Medical images of the visceral organs are classified, and medical images of the joints and feature organs, including the breast, abdominal organs, and other feature organs, are recognized. The collection positions are located by position recognition of the joints and feature organs, and the medical images are collected within the located collection range by an improved neural network method.
Robot arm function modules and a motion planning method are proposed to perform self-controlled collection and other motions. The robot main system communicates with the robot arms in remote-controlled mode to perform physical examination data collection.
A collection method for collecting data and images of the visceral organs and for tracking the head target is proposed. The description proceeds in the following steps.
STEP 1: Set the target.
STEP 2: Set the parameters of the target, including the target, the left arm link and joint, and the right arm link and joint.
STEP 3: Set the parameters of the communication target.
STEP 4: Publish the target and its parameters, including the positions of the target and the marks of the positions.
STEP 5: Mark the positions of the target.
STEP 6: Set the parameters of the head target, including the head ID, the positions of the target, and the values of the X, Y, and Z locations.
STEP 7: Set the time stamp.
STEP 8: Set the marks of the positions with the origin location (0, 0, 0) and the values of the X, Y, and Z locations.
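As a minimal sketch of STEPs 1-8, assuming a ROS-based implementation (the node, topic, and frame names here are hypothetical and not taken from the patent; the head ID of STEP 6 is omitted):

```python
#!/usr/bin/env python
# Minimal sketch of the head-target publishing steps (STEP 1-8),
# assuming a ROS implementation; names are hypothetical.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('head_target_publisher')          # STEP 1-3: set up the target node
pub = rospy.Publisher('/head_target', PoseStamped, queue_size=1)

target = PoseStamped()
target.header.frame_id = 'base_link'              # STEP 5: mark positions in a known frame
target.pose.position.x = 0.0                      # STEP 8: origin location (0, 0, 0)
target.pose.position.y = 0.0
target.pose.position.z = 0.0
target.pose.orientation.w = 1.0                   # identity orientation

rate = rospy.Rate(10)
while not rospy.is_shutdown():                    # STEP 4: publish target and parameters
    target.header.stamp = rospy.Time.now()        # STEP 7: set the time stamp
    pub.publish(target)
    rate.sleep()
```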
A novel locating method is proposed that combines face recognition, color mark recognition of the collection zones, visceral organ recognition, and recognition of the external collection position. The locating and collecting method extracts a set of facial features, including color features, shape pattern features, and outline features. The method proceeds in the following steps.
Step 1: Initialize the point cloud nodes.
Step 2: Set the parameters of the published robot arm nodes, including the target and the marks of the target position.
Step 3: Set the parameters of the subscribed machine vision device nodes, including the point cloud nodes and point cloud lists.
Step 4: Define the point cloud lists and return the lists.
Step 5: Define the nearest points, transfer the points into point array cloud lists, and return the lists.
Step 6: Compute the center of gravity (COG).
Step 7: Confirm the parameters of the point cloud nodes and return the information of the point cloud nodes.
Step 8: Set the position parameters as points.
Step 9: Set the positions of the target.
Step 10: Set the parameters of the targets, including the marks of the target position, the time stamp, the head target ID, the COG position of the target, and the values of the X, Y, and Z locations.
Step 11: Publish the target to the robot arm nodes.
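The point cloud and COG steps could be sketched as follows, again assuming ROS; the topic names and the 100-point cutoff for the "nearest points" are assumptions:

```python
#!/usr/bin/env python
# Sketch of Steps 1-11: subscribe to a point cloud, keep the nearest points,
# compute their center of gravity (COG), and publish it as the arm target.
import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2
from geometry_msgs.msg import PoseStamped

pub = None

def cloud_callback(cloud_msg):
    # Steps 4-5: read the points and keep the nearest ones (smallest depth z)
    points = np.array(list(point_cloud2.read_points(
        cloud_msg, field_names=('x', 'y', 'z'), skip_nans=True)))
    if points.size == 0:
        return
    nearest = points[points[:, 2].argsort()][:100]   # 100-point cutoff is assumed
    cog = nearest.mean(axis=0)                       # Step 6: compute the COG

    # Steps 8-10: pack the COG into a stamped target pose
    target = PoseStamped()
    target.header.frame_id = cloud_msg.header.frame_id
    target.header.stamp = rospy.Time.now()
    target.pose.position.x, target.pose.position.y, target.pose.position.z = cog
    target.pose.orientation.w = 1.0
    pub.publish(target)                              # Step 11: publish to the arm nodes

if __name__ == '__main__':
    rospy.init_node('cog_locator')                   # Step 1: initialize the nodes
    pub = rospy.Publisher('/arm_target', PoseStamped, queue_size=1)      # Step 2
    rospy.Subscriber('/camera/depth/points', PointCloud2, cloud_callback)  # Step 3
    rospy.spin()
```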
The collection method of the target proceeds in the following steps.
Step 1: Set the collection zone location and the allowable error range of position and attitude.
Step 2: Permit re-planning of the collection motion when motion planning has failed.
Step 3: Set the parameters of the target location.
Step 4: Set the time limit of each motion-planning attempt.
Step 5: Set the parameters of the target zone locations, including the locations of the medical collection zone, the seats, and the medical accessory machine; the positions of the medical bed and of the arm and leg placement areas; and set the height of the medical bed, the position of the arm placement area, and the position of the leg placement area.
Step 6: Set the parameters of the collection zone location, the chair location, and the medical accessory machine location, and mark them in the demo. These include the medical bed ID, collection zone color, medical bed pose, left arm ID and pose, right arm ID and pose, left leg ID and pose, right leg ID and pose, and the position of the standard point. In the physical examination demo, the medical bed, arm, and leg positions comprise: the medical bed ID; area.pose.position.x, area.pose.position.y, area.pose.position.z, area.pose.orientation.w; seat.pose.position.x, seat.pose.position.y, seat.pose.position.z, seat.pose.orientation.w; and accessory_machine.pose.position.x, accessory_machine.pose.position.y, accessory_machine.pose.position.z, accessory_machine.pose.orientation.w. Set the above parameters in the medical examination demo.
Step 7: Set colors, AR tags, and other special markers for the medical bed and the arm and leg positions.
Step 8: Set the position target, that is, the moving position (the human body position marked by a lying color label, a left-side-lying color label, and a right-side-lying color label).
Step 9: Set the color labels in the demo.
Step 10: Set the color labels in the demos, including the left-side-lying label color, the right-side-lying label color, and other special marks. Initialize the demos, monitor the demos, and monitor the difference between the former demo and the later demo. Publish the color labels of the lying demo, the left-side-lying demo, and the right-side-lying demo.
Step 11: Setting the colors in the demo includes: initialize the planning scene object, monitor the scene difference, set the colors, and publish the color labels for the lying scene color, the left-side-lying scene color, the right-side-lying scene color, and other special marks.
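Steps 1-4 map naturally onto a MoveIt-style planning interface; the following sketch assumes MoveIt, with a hypothetical planning group name, tolerance values, and target location:

```python
#!/usr/bin/env python
# Sketch of Steps 1-4: tolerances, replanning, and per-attempt time limits,
# assuming a MoveIt-driven arm; the group name and values are hypothetical.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('collection_planner')

arm = moveit_commander.MoveGroupCommander('arm')   # hypothetical planning group
arm.set_goal_position_tolerance(0.01)              # Step 1: allowable position error (m)
arm.set_goal_orientation_tolerance(0.05)           # Step 1: allowable attitude error (rad)
arm.allow_replanning(True)                         # Step 2: re-plan on failure
arm.set_planning_time(5.0)                         # Step 4: time limit per attempt (s)

arm.set_position_target([0.4, 0.0, 0.3])           # Step 3: target location (hypothetical)
arm.go(wait=True)                                  # plan and execute the collection motion
arm.stop()
```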
Further, a novel multi-recognition method is proposed that combines face recognition, color mark recognition, and visceral organ recognition. The method proceeds in the following steps.
S1: Design a novel face model.
S2: Extract the features of the facial model, the color labels, and the external locations of the human organs, including facial features, color features, and joint features.
S3: Extract outline features from the in vitro feature location model, including a set of features and feature positions of the color mark, shoulder, waist, lower limb joint, and face.
S4: Input the feature values of the examination items.
S5: Improve the weight optimizer, train on the images, and output the values.
S6: According to the output values, output the external position information of the visceral organ collection area and return the location information of the visceral organs, the joint image, and the color mark image.
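An illustrative sketch of S1-S6 follows; the network architecture, layer sizes, output heads, and optimizer choice are assumptions, since the patent names only the feature groups and the train/output steps:

```python
# Illustrative sketch of the multi-recognition training loop (S1-S6).
# Architecture, layer sizes, and optimizer settings are assumptions.
import torch
import torch.nn as nn

class MultiRecognitionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(             # S1-S3: shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.face_head = nn.Linear(32, 2)          # face present / absent
        self.mark_head = nn.Linear(32, 4)          # color-mark bounding box (x, y, w, h)
        self.joint_head = nn.Linear(32, 2 * 4)     # 4 joints, (x, y) each (assumed)

    def forward(self, x):
        f = self.backbone(x)
        return self.face_head(f), self.mark_head(f), self.joint_head(f)

model = MultiRecognitionNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)   # S5: weight optimizer (assumed)

def train_step(image, face_label, mark_box, joint_xy):
    face_out, mark_out, joint_out = model(image)
    loss = (nn.functional.cross_entropy(face_out, face_label)
            + nn.functional.mse_loss(mark_out, mark_box)
            + nn.functional.mse_loss(joint_out, joint_xy))
    opt.zero_grad(); loss.backward(); opt.step()       # S5: train and update weights
    return loss.item()                                 # S6: output the values
```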
Further, an improved machine learning method is proposed for classifying medical images of the visceral organs. The description proceeds in the following steps.
S1: Design a novel visceral organ model.
S2: Extract the outline features of the visceral organ model, including a set of features such as color features, shape pattern features, and outline features.
S3: Input the feature values of the examination items.
S4: Improve the weight optimization, train on the images, and output the values.
S5: According to the output values, classify the images of the visceral organs more accurately and return the locations of the visceral organs and the classified images of the visceral organs, including the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate.
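A minimal sketch of the organ classifier in S1-S5, assuming a small convolutional network over the nine listed organ classes (all layer sizes and the single-channel ultrasound input are assumptions):

```python
# Sketch of the visceral-organ classifier (S1-S5); layer sizes are assumptions.
import torch
import torch.nn as nn

ORGANS = ['heart', 'breast', 'lung', 'liver', 'gall bladder',
          'spleen', 'kidney', 'womb', 'prostate']

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # S2: outline/shape features
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, len(ORGANS)),                                   # one logit per organ class
)

def classify(ultrasound_image):
    """S5: return the predicted organ name for a 1xHxW ultrasound tensor."""
    with torch.no_grad():
        logits = classifier(ultrasound_image.unsqueeze(0))
    return ORGANS[logits.argmax(dim=1).item()]
```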
An improved deep neural network method is proposed for recognizing the visceral organs and recognizing the diseases of each visceral organ. The method proceeds in the following steps.
S1: Input each visceral organ model.
S2: Extract the disease features of each visceral organ disease model, including a set of disease features of each visceral organ: the color features, shape pattern features, texture features, and outline features of disease in the visceral organ. The in vitro feature positions include the shoulder joint, breast and nipple, navel, genital feature organs, lumbar joint, and blood vessels.
S3: Extract the outline features of each visceral organ, extract the features of the in vitro feature positions, and calculate the feature values.
S4: Input the feature values of an internal organ image of the human body corresponding to the external feature values of each organ into the improved deep neural network. Improve the weight optimizer, train on the images, and output the values.
S5: According to the output values, classify the visceral organs and return the organ recognition results. Input the disease feature models and feature values.
S6: Improve the weight optimizer, train on the images, and output the values.
S7: According to the output values, classify and recognize the diseases of each visceral organ more accurately.
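The two-stage flow of S1-S7 (organ recognition first, then routing to that organ's disease model) could be sketched as follows; the disease class counts and model shapes are assumptions:

```python
# Sketch of the two-stage recognition in S1-S7: classify the organ first,
# then route the image to that organ's disease model. Disease class counts
# and model shapes are assumptions; the patent names only the stages.
import torch
import torch.nn as nn

def make_net(num_classes):
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, num_classes))

organ_net = make_net(9)                                  # stage 1: nine organ classes
disease_nets = {organ: make_net(3)                       # stage 2: per-organ disease model
                for organ in range(9)}                   # (3 disease classes assumed)

def recognize(image):
    with torch.no_grad():
        organ = organ_net(image).argmax(dim=1).item()    # S5: organ recognition result
        disease = disease_nets[organ](image).argmax(dim=1).item()  # S6-S7: disease result
    return organ, disease
```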
Medical image collection devices and the image collection model are used to collect medical data and medical images of the visceral organs. An outline model and a feature model of the visceral organs are proposed for classifying the medical images of the visceral organs and for recognizing them by the improved machine learning method, which also makes it more effective to classify the medical images and to warn of disease data. According to the recognition results, the robot main system communicates with the robot arms to locate, to move, and to collect the medical images. This makes the classification and recognition of the visceral organs, and the recognition of their diseases, more effective.
A novel method for self-locating the positions of the visceral organs and for classifying and recognizing the visceral organs in the medical image collection model proceeds in the following steps.
S1: The machine vision function module publishes the positions of the in vitro feature locations.
S2: According to the positions of the in vitro feature locations, the robot main system, the robot arm, and the ultrasonic examination device subscribe to the positions published by the machine vision function module.
S3: After subscribing to the positions, the robot arm moves to them and scans the collection zone according to the motion planning model. The ultrasonic scanner and the ultrasonic examination device publish the image information, and the robot main system and the machine vision function module subscribe to it.
S4: The robot main system and the robot arm input the outline features of the inner organs and the feature values of the organs, improve the weight optimizer, train on the images, and output the values by the improved deep neural network method.
S5: According to the output values, classify the visceral organs, return the recognition results, and output the values and the disease result. Return the result information to the robot main system and the administrator.
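A sketch of the publish/subscribe wiring in S1-S3, again assuming ROS; all topic names are hypothetical:

```python
#!/usr/bin/env python
# Sketch of the S1-S3 wiring: the vision module publishes feature positions,
# the arm node subscribes and forwards them as motion goals, and the
# ultrasound image stream is relayed to the recognition pipeline.
import rospy
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import Image

def on_feature_position(pose):
    # S2-S3: receive the in vitro feature position and forward it as the
    # arm's motion goal (motion planning itself is handled elsewhere).
    arm_goal_pub.publish(pose)

def on_ultrasound_image(img):
    # S3: relay the scanned ultrasound frame to the recognition pipeline.
    recognition_pub.publish(img)

rospy.init_node('self_locating_bridge')
arm_goal_pub = rospy.Publisher('/arm/goal', PoseStamped, queue_size=1)
recognition_pub = rospy.Publisher('/recognition/input', Image, queue_size=1)
rospy.Subscriber('/vision/feature_position', PoseStamped, on_feature_position)  # S1
rospy.Subscriber('/ultrasound/image', Image, on_ultrasound_image)
rospy.spin()
```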
TECHNICAL IMPLEMENTATION SOLUTION

The solution makes it more effective to perform tasks with a high degree of accuracy, to perform medical image recognition tasks, and to carry out high-accuracy medical data analysis and disease detection, with higher accuracy in recognizing diseases of the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate. The robot apparatus is a multi-link structure connected with collection devices, which are used to analyze medical data, to recognize medical images, and to collect physical examination data and images of the visceral organs. The robot device includes the robot main system function module, the machine vision device function module, the image collection function module, the robot arm motion function module, the speech function module, the ultrasonic examination function module, and the speech interaction function module. The robot device is used to control the other nodes.
Medical data collection and medical image collection are used for medical image classification. Motion interaction implements self-controlled collection, analysis, disease warning, and diagnosis of medical data in the robot main system, or remote-controlled examination by motion planning, with the physical examination performed by controlling each joint function module of the robot apparatus. This invention performs physical examinations with a medical robot apparatus that carries out medical data analysis and disease detection more accurately, and that warns of diseases and recognizes images as accurately as possible. The technical solutions and implementation methods of the present application solve the above technical problems.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a structural diagram of the physical examination robot for medical data collection and analysis.
FIG. 2 is a collection model diagram showing the machine vision device and the medical image collection function module.
FIG. 3 is a position location diagram for medical image collection of the human body.
FIG. 1 uses the following reference marks: 100 - robot main system; 101 - speech function module; 102 - medical image collection function module; 103 - robot arm motion planning function module; 104 - machine vision device function modules.
FIG. 2 uses the following reference marks: 10 - robot main system; 20 - machine vision device; 30 - speech function module; 40 - laser; 50 - image collection function module; 60 - robot arm; 100 - face; 300 - in vitro feature positions.
FIG. 3 uses the following reference marks: 200 - color mark; 400 - joint; 601 - heart; 602 - breast; 603 - lung; 604 - liver, gall bladder, spleen; 605 - kidney; 606 - womb; 607 - prostate.
APPLICATION CASE 1: FIG. 1 illustrates a medical data collection and analysis robot apparatus and a physical examination system. With reference to FIG. 1, the robot device includes the following modules. The robot main system 10 is used to communicate with the machine vision device function module, the image collection function module, and the robot arm function module to perform collection tasks and motion planning tasks. The robot main system 10 communicates with the speech function module 30 and the speech recognition model 30 to perform speech interaction between the robot main system 10 and users. A machine vision device 20, a speech function module 30, an image collection function module 50, and medical ultrasound and CT examination devices are used to collect medical images. The robot motion planning collection device is used to plan motion. The motion planning function module of the robot arm 103 connects to the speech function module 30.
The robot motion planning function module and the data collection devices are used to plan motion and to collect physical examination data, including medical data and medical images, so as to classify images and locate the positions of the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate. A robot arm of the joint function modules connects to the robot motion planning function module and the collection devices, which are used to plan motion. In one application case of the robot apparatus, the robot main system 10 connects to the machine vision device function module 20 and the robot arm function module 60, communicates with the speech function module 30, and is connected to the robot arm and the image collection function module 50. In another application case, the robot main system 10 is connected to the machine vision device function module 20, which is used to collect face images and for speech interaction. The machine vision device function module 20 collects the face images, and the robot main system 10 publishes the images and data. The robot main system 10 communicates with the image recognition node to recognize the face, color labels, and joints, and the location information of the face, color labels, and joints is returned to the robot main system. According to the location information, the robot arm moves to the location of the body collection zone. A robot arm of the joint function modules connects a plurality of links, and the robot motion planning collection device is used for motion planning. A drive control function module is configured to control the driving of the arm joint function modules based on the states of the joint function modules, arm motion planning, and motion interaction, so as to collect data effectively using the arm package in the robot main system 10.
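The face and color-label recognition step could be sketched with OpenCV as follows; the cascade file is a standard OpenCV asset, and the HSV thresholds for the color label are assumed values, not taken from the patent:

```python
# Sketch of the face and color-label recognition used to find the body
# collection zone, assuming OpenCV; HSV thresholds are assumed values.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def locate_face_and_label(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)      # face boxes (x, y, w, h)

    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([40, 80, 80]),          # assumed green-label range
                            np.array([80, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    labels = [cv2.boundingRect(c) for c in contours]         # color-label boxes
    return faces, labels
```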
APPLICATION CASE 2: A novel method for location and recognition is proposed, as shown in FIG. 3. The method combines face recognition, location of the body organ collection zones, in vitro feature positions, and color label recognition. Design a face image model 100 and an image recognition model, and extract the features of the face, the color label 200, and the location of the body collection zone, including color label features, face features, features of the joints 400, and location features of the body organs. The location features of the body organs include the RGB values and the joint location values of the shoulder, upper limbs, legs, and waist. Input the feature values of the examination items, train the medical images with the improved optimizer, and output the results. According to the location message of the collection zone of the body organs, recognize the face image 100 and the color label image 200 by the proposed neural network algorithm, and locate the organs 300 of the body collection zone and the joints 400 more accurately and intelligently.
The classification method of the visceral organs, as application case 3, proceeds in the following steps. Design the location model of the external body collection zone 300 and the inner body collection zone 500. Design the image model of the visceral organs 600 and extract the outline features of the visceral organs, such as color, shape, and outlines. Locate the positions of the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate. A robot arm connects a plurality of links. Input the features of the items, compute, and output the results. According to the classified images of the visceral organs, classify the medical images of 601 - heart, 602 - breast, 603 - lung, 604 - liver, gall bladder, spleen, 605 - kidney, 606 - womb, and 607 - prostate. An improved deep neural network method for recognizing the visceral organs and their diseases proceeds in the following steps. Input the visceral organ models 601-607 and extract the features of the visceral organs, such as color, shape, outlines, and veins, together with the feature positions of the external body collection zone 300 and the inner body collection zone 500. Input the features of the items, compute, and output the results. Locate the positions of the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate, with a robot arm of the joint function modules connecting a plurality of links. Input the feature values of the examination items and the blood color, improve the weight optimizer, train on the images, and output the values. According to the output values, classify the images of the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate, recognize the medical images more accurately, and return the output results.

Claims (8)

Claims
1. An integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, wherein a robot apparatus and a physical examination robot system have a control function module configured to control the other joint function modules. The control function module communicates with machine vision devices, ultrasonic examination devices, CT examination devices, other examination devices, a medical data analysis model, and a medical image recognition model, making it more effective to collect medical images of the heart, breast, and abdominal organs; to perform self-controlled collection, remote-controlled collection, and speech recognition; and to analyze, classify, and recognize medical images. The present invention includes: a robot main system whose control function modules are configured to control the drive devices of the other joint function modules, which communicate with the machine vision devices, the medical image collection devices, the other examination devices, and the medical data analysis model; the robot main system is also used to control the robot arms, the motion planning model, and the speech models used for interaction between the robot and users. A machine vision device of the joint function modules and the sensor collection devices are used to collect physical examination data and medical images. The speech function module and the speech recognition model are used for interaction between the robot and users through speech commands, speech recognition, speech-to-text transfer, and speech induction. A laser and a movement model are used to navigate, to map, and to move in self-controlled mode. The medical data analysis function module is used to analyze the physical examination data and to detect all warning data and disease data. The medical image classification and recognition function modules are configured to classify ultrasound medical images and intra-organ ultrasound images. The medical image collection function module is used to collect ultrasound examination images, CT examination images, and other medical data and images from the physical examination apparatus. The robot arm function modules are configured to plan motion and to perform motion interaction between the robot and users during collection. The robot system communicates with the camera function modules, the sensor function modules and sensor data collection models, the medical image collection function modules, the medical data analysis function module, and the medical image recognition function module, and it communicates with the motion planning function module and the speech function module to remote-control the robot arm, to perform motion interaction, and to collect medical data more intelligently.
2. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, wherein a novel neural network method for multi-recognition is proposed, combining face image recognition, color mark recognition, joint image recognition, feature position image recognition, visceral organ image recognition, and external collection position recognition. Outline features are extracted from the in vitro feature location model, including a set of features and feature positions of the color mark, shoulder, waist, lower limb joint, and face. The method outputs the external position information of the visceral organ collection area and returns the location information of the visceral organs, the joint image, and the color mark image. The novel multi-recognition method is used to classify visceral organ images, external position images of the visceral organs, and feature position images, which are recognized and marked to locate the joints, the visceral organs, the external positions of the visceral organs, and the feature positions.
3. The robot of claim 1, wherein the integrated robot for ultrasound image data acquisition, analysis, and recognition comprises feature marks, including color marks, joint positions, AR labels, and in vitro feature positions of the visceral organs, which are used and recognized in this invention. These feature marks are used to locate the organs and collect medical images. The robot main system connects to the machine vision devices, the robot arm joint function modules, and the ultrasound scanner, which moves to the positions of these feature marks to collect medical images by locating the external collection positions of the visceral organs.
4. The robot arm joint function modules connect to sensors to collect medical data, and connect to machine vision devices for face image collection, in vitro feature image collection, and joint image collection. A robot apparatus is used to control the robot arms and the motion planning model, which are used for motion interaction between the robot and users.
5. The robot of claim 1, wherein, in the integrated robot for ultrasound image data acquisition, analysis, and recognition, the robot arm joint function modules connect to and communicate with the machine vision devices, sensors, ultrasound scanner, and ultrasound examination devices, and the robot arm joint function modules connect to the ultrasound scanner to collect physical examination data and medical images.
6. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, wherein an improved machine learning method for medical image classification of the visceral organs is proposed. A novel visceral organ model is designed, and the outline features of the visceral organ model, including a set of features such as color features, shape pattern features, and outline features, are extracted to classify the images of the visceral organs more accurately, return their locations, and perform image classification of the visceral organs, including the heart, breast, lung, liver, gall bladder, spleen, kidney, womb, and prostate.
7. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, wherein an improved deep neural network method, disease models of each visceral organ, and a model of each visceral organ are proposed. Novel features are extracted, including a set of disease features of each visceral organ, combined with the color features, shape pattern features, texture features, and outline features of disease in the visceral organ. The feature values of the internal organ image corresponding to the external feature values of each organ are input into the disease models. The in vitro feature positions include the shoulder joint, breast and nipple, navel, genital feature organs, lumbar joint, and blood vessels.
8. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, comprising a robot arm and a motion planning method proposed for self-controlled collection. The robot arm moves and scans the external positions of the human body to collect medical images. The robot main system communicates with the robot arm through a communication module to perform physical examination data collection in self-controlled mode and remote-controlled mode.
AU2021292112A 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition Abandoned AU2021292112A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN202010556720.1 2020-06-17
CN202010556720.1A CN111973228A (en) 2020-06-17 2020-06-17 B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform
CN202010780479.0 2020-08-05
CN202010780479.0A CN111916195A (en) 2020-08-05 2020-08-05 Medical robot device, system and method
PCT/CN2021/100562 WO2021254427A1 (en) 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition

Publications (1)

Publication Number Publication Date
AU2021292112A1 true AU2021292112A1 (en) 2023-03-02

Family

ID=79268472

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021292112A Abandoned AU2021292112A1 (en) 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition

Country Status (2)

Country Link
AU (1) AU2021292112A1 (en)
WO (1) WO2021254427A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114536323A (en) * 2021-12-31 2022-05-27 中国人民解放军国防科技大学 Classification robot based on image processing
CN115153634A (en) * 2022-07-22 2022-10-11 中山大学孙逸仙纪念医院 Intelligent ultrasonic examination diagnosis method and system
CN116570224B (en) * 2023-05-31 2025-10-31 复旦大学 Ultrasonic capsule robot imaging method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081706A1 (en) * 2005-09-28 2007-04-12 Xiang Zhou Systems and methods for computer aided diagnosis and decision support in whole-body imaging
US10517681B2 (en) * 2018-02-27 2019-12-31 NavLab, Inc. Artificial intelligence guidance system for robotic surgery
US11717203B2 (en) * 2018-05-23 2023-08-08 Aeolus Robotics, Inc. Robotic interactions for observable signs of core health
CN110288574A (en) * 2019-06-13 2019-09-27 南通市传染病防治院(南通市第三人民医院) A system and method for ultrasound-assisted diagnosis of liver masses
CN110477956A (en) * 2019-09-27 2019-11-22 哈尔滨工业大学 An Intelligent Scanning Method Based on Ultrasound Image-Guided Robotic Diagnosis System
CN111973228A (en) * 2020-06-17 2020-11-24 谈斯聪 B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform
CN111916195A (en) * 2020-08-05 2020-11-10 谈斯聪 Medical robot device, system and method
CN111973152A (en) * 2020-06-17 2020-11-24 谈斯聪 Five sense organs and surgical medical data acquisition analysis diagnosis robot and platform

Also Published As

Publication number Publication date
WO2021254427A1 (en) 2021-12-23

Similar Documents

Publication Publication Date Title
AU2021292112A1 (en) Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
CN102914967B (en) Autonomous navigation and man-machine coordination picking operating system of picking robot
CN108748153B (en) Medical robot and control method thereof
CN116507286A (en) Ultrasonic image data collection, analysis and recognition integrated robot, platform
CN110666791B (en) An RGBD robot nursing system and method based on deep learning
CN112155729A (en) Intelligent automatic planning method and system for surgical puncture path and medical system
EP3549725A1 (en) Upper limb motion assisting device and upper limb motion assisting system
US12465441B2 (en) Multi-arm robotic systems and methods for identifying a target
AU2021290732A1 (en) Five-sense-organ and surgical medical data acquisition, analysis and diagnosis robot and platform
WO2020144175A1 (en) Method and system for capturing the sequence of movement of a person
EP1541295A1 (en) Environment identification device, environment identification method, and robot device
CN109820695A (en) A ICU ward horizontal bilateral cerebral palsy lower extremity rehabilitation robot with communication and autonomous navigation and movement functions
US11446002B2 (en) Methods and systems for a medical imaging device
CN219629777U (en) Automatic surgical robot system
Jamone et al. Autonomous online generation of a motor representation of the workspace for intelligent whole-body reaching
CN110189811A (en) A myocardial infarction emergency medicine feeding robot and its working method
CN112132805A (en) A method and system for state normalization of ultrasonic robot based on human characteristics
AU2022335276A1 (en) Recognition, autonomous positioning and scanning method for visual image and medical image fusion
KR101398880B1 (en) Wearable robot with humanoid function and control method of the same
CN118338997A (en) Medical robot device, system and method
CN120419999A (en) Ultrasonic robot real-time track planning method and system based on ultrasonic image
CN107749060A (en) Machine vision equipment and based on flying time technology three-dimensional information gathering algorithm
AU2022333990A1 (en) Method for intelligently identifying thoracic organ, autonomously locating and scanning thoracic organ
US12433683B2 (en) Tracking soft tissue changes intraoperatively
CN118304165B (en) Intelligent robot for moxibustion and thermal therapy based on infrared-visible light fusion positioning and its control method

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period