WO2024098149A1 - Advanced medical pod with diagnosis system or method using gesture recognition and biometric data - Google Patents
- Publication number: WO2024098149A1
- Application number: PCT/CA2023/051493
- Authority: WIPO (PCT)
- Prior art keywords
- subject
- implementations
- pod
- data
- medical
- Prior art date
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/002—Monitoring the patient using a local or closed circuit, e.g. in a room or building
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/0024—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system for multiple sensor units attached to the patient, e.g. using a body or personal area network
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1107—Measuring contraction of parts of the body, e.g. organ or muscle
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/112—Gait analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B50/00—Containers, covers, furniture or holders specially adapted for surgical or diagnostic appliances or instruments, e.g. sterile covers
- A61B50/10—Furniture specially adapted for surgical or diagnostic appliances or instruments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the disclosed implementations relate generally to portable, modular medical facilities, and more specifically to a self-sustained, portable, modular medical facility with renewable life-sustaining resources, fitted with Artificial Intelligence (AI) and robotics for comprehensive local and remote medical support for patients and healthcare professionals.
- the disclosed implementations also relate generally to gesture recognition using motion sensors and more specifically to a method, system, and device for medical diagnosis using motion sensors (e.g., motion sensors with drift correction), biometric sensors, and/or infrared cameras.
- Motion tracking detects the precise position and location of an object by recognizing rotation (pitch, yaw, and roll) and translational movements of the object.
- Inertial tracking is a type of motion tracking that uses data from sensors (e.g., accelerometers, gyroscopes, magnetometers, altimeters, and pressure sensors) mounted on an object to measure positional changes of the object. Some of the sensors are inertial sensors that rely on dead reckoning to operate. Dead reckoning is the process of calculating an object’s current location by using a previously determined position, and advancing that position based upon known or estimated accelerations, speeds, or displacements over elapsed time and course.
- While dead reckoning techniques are somewhat effective, they are subject to a cumulative error called "drift." Because some IMUs estimate relative position/location by integrating acceleration data twice from an accelerometer, even a small error in acceleration results in a compounded, increasing error in relative position/location that accumulates over time. Similarly, errors in gyroscopic angular velocity data lead to cumulative error in relative angular orientation. Thus, acceleration and gyroscopic data are unreliable, when used in isolation, to estimate orientation and positional changes of an object being tracked using IMUs.
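To make the drift problem concrete, here is a minimal sketch (not from the specification; the bias value and sampling rate are assumed) of how a small constant accelerometer bias, integrated twice by dead reckoning, produces a position error that grows quadratically with time:

```python
# Sketch: a stationary object whose accelerometer reports a small constant
# bias. Dead reckoning integrates acceleration twice, so the bias compounds.
dt = 0.001            # sampling period (1 kHz), an assumed value
bias = 0.01           # accelerometer bias in m/s^2, an assumed value
steps = 10_000        # 10 seconds of samples

velocity = 0.0
position = 0.0
for _ in range(steps):
    measured_accel = 0.0 + bias      # true acceleration is zero
    velocity += measured_accel * dt  # first integration: error grows linearly
    position += velocity * dt        # second integration: error grows quadratically

print(f"position drift after 10 s: {position:.2f} m")  # ~0.5 * bias * t^2 = 0.5 m
```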
- the portable, modular AI-enabled medical facility addresses the demand for remote and improved healthcare support in far-flung and underserved regions (or zones of war/conflict/disaster) by integrating cutting-edge AI technologies to elevate the provision of healthcare services, even in areas without cellular data networks and even without GPS (such as in space, on the Moon, on Mars, etc.).
- the portable, modular Al-enabled medical facility encompasses a mobile housing unit designed with renewable energy generation, energy storage, domestic water supply generation, and sanitary services, effectively serving as a self-sustained emergency room or field hospital room.
- some implementations described herein offer on-the-go medical support, catering to a wide spectrum of medical requirements, including diagnostics, monitoring, and real-time decision support for both patients and healthcare professionals.
- some implementations include a tracking device for tracking location and orientation of an object.
- the device comprises one or more sides that form a predetermined shape.
- the device also comprises a plurality of inertial measurement units (IMU) mounted to the one or more sides of the predetermined shape.
- Each IMU is configured to detect movement of the object and generate inertial output data representing non-linear acceleration and/or angular velocity of the object.
- Each IMU includes a first sub-sensor and a second sub-sensor.
- Each IMU is positioned at a predetermined distance and orientation relative to the other IMUs and to a center of mass of the tracking device.
- the device also comprises a controller communicatively coupled to the plurality of IMUs, the controller configured to perform a sequence of steps.
- the sequence of steps comprises receiving first sub-sensor inertial output data and second sub-sensor inertial output data from each of the plurality of IMUs.
- the sequence of steps also comprises, for each IMU: generating calibrated inertial output data based on the first sub-sensor inertial output data and the second sub-sensor inertial output data; and cross-correlating the first sub-sensor inertial output data with the second sub-sensor inertial output data to identify and remove anomalies from the first sub-sensor inertial output data and the second sub-sensor inertial output data to generate decomposed inertial output data.
- the sequence of steps also comprises determining the translational and rotational state of the tracking device based on the decomposed inertial output data from each of the IMUs.
- the sequence of steps also comprises synthesizing first sub-sensor inertial output data and second sub-sensor inertial output data to create IMU synthesized or computed data using a synthesizing methodology based on the translational and rotational state of the tracking device.
- the sequence of steps also comprises calculating a current tracking device rectified data output (also referred to herein as "drift-free" or "drift-corrected") based on the synthesized movement of each of the IMUs, a predetermined position of each of the IMUs, and a predetermined orientation of each of the IMUs.
- the sequence of steps also comprises calculating a current location and orientation of an object based on a difference between the current object rectified data output, and a previous object drift-free or rectified data output.
- generating calibrated inertial output data includes applying neural network weights to the first sub-sensor inertial output data and the second sub-sensor inertial output data, wherein the neural network weights are adjusted at a learning rate based on the positional state of the tracking device, calculating a discrepancy value representative of a difference between an actual movement of the object and estimated movement of the object, and removing the discrepancy value from the calibrated inertial output data.
- the neural network weights applied to the first sub-sensor inertial output data and the second inertial output data are based on historical inertial output data from each of the first and second sub-sensors.
- the decomposed inertial output data corresponding to the first sub-sensor is calibrated based on the second sub-sensor inertial output data by providing feedback to the dynamic-calibration neural network of the first sub-sensor.
- cross-correlating the first sub-sensor inertial output data with the second sub-sensor inertial output data includes applying pattern recognition to the second sub-sensor inertial output data to generate a decomposed inertial output data representative of the first sub-sensor inertial output data.
- the first sub-sensor inertial output data and second sub-sensor inertial output data are filtered to minimize signal noise through signal conditioning.
- the first sub-sensor inertial output data and second sub-sensor inertial output data from each of the plurality of IMUs are received periodically, at intervals of less than approximately 1 millisecond (ms), for a continuous high sampling rate.
- the first sub-sensor and the second sub-sensor are each one of: accelerometer, magnetometer, gyroscope, altimeter, and pressure sensor; wherein the first sub-sensor is a different sensor type than the second sub-sensor.
- the predetermined shape is one of a plane, a tetrahedron, a cube, or any Platonic solid, or any other irregular configuration with known distances and angles between IMUs.
- IMUs used to calculate the rectified IMU data output are oriented at different angles along two different axes relative to each other.
- calculating the current position and orientation of the object based on the difference between the current rectified IMU output and the previous object rectified IMU output includes: identifying an edge condition; and blending the current object rectified IMU output and the previous object rectified IMU output to remove the edge condition using neural networks.
- some implementations include a method of tracking the location and orientation of an object using a tracking device.
- the tracking device includes one or more sides that define a predetermined shape.
- the tracking device also includes a plurality of inertial measurement units (IMU) mounted to the one or more sides of the predetermined shape.
- Each IMU includes a first sub-sensor and a second sub-sensor.
- Each IMU is positioned at a predetermined distance and orientation relative to the other IMUs and to a center of mass of the tracking device.
- the tracking device also includes a controller communicatively coupled to the plurality of IMUs. The method comprises performing a sequence of steps.
- the sequence of steps includes, at each IMU, detecting movement of the object and generating inertial output data representing acceleration and/or angular velocity of the object.
- the sequence of steps also includes, at the controller, receiving first sub-sensor inertial output data and second sub-sensor inertial output data from each of the plurality of IMUs.
- the sequence of steps also includes, at the controller, for each IMU: generating calibrated inertial output data based on the first sub-sensor inertial output data and the second sub-sensor inertial output data; and cross-correlating the first sub-sensor inertial output data with the second sub-sensor inertial output data to identify and remove anomalies from the first sub-sensor inertial output data and the second sub-sensor inertial output data to generate decomposed inertial output data.
- the sequence of steps also includes, at the controller, determining a translational and rotational state of the tracking device based on the decomposed inertial output data from each of the IMUs.
- the sequence of steps also includes, at the controller, synthesizing first sub-sensor inertial output data and second sub-sensor inertial output data to create IMU synthesized or computed data using a synthesizing methodology based on the positional and rotational state of the tracking device.
- the sequence of steps also includes, at the controller, calculating a current tracking device overall drift-free or rectified data output based on the synthesized movement of each of the IMUs, a predetermined location of each of the IMUs and a predetermined orientation of each of the IMUs.
- the sequence of steps also includes, at the controller, calculating a current location and orientation of an object based on a difference between the current object overall rectified data and a previous object overall rectified data.
- generating calibrated inertial output data includes applying neural network weights to the first sub-sensor inertial output data and the second sub-sensor inertial output data, wherein the neural network weights are adjusted at a learning rate based on the positional state of the tracking device, calculating a discrepancy value representative of a difference between an actual movement of the object and estimated movement of the object, and removing the discrepancy value from the calibrated inertial output data.
- the neural network weights applied to the first sub-sensor inertial output data and the second inertial output data are based on historical inertial output data from each of the first and second sub-sensors.
- the decomposed inertial output data corresponding to the first sub-sensor is calibrated based on the second sub-sensor inertial output data by providing feedback to the dynamic-calibration neural network of the first sub-sensor.
- cross-correlating the first sub-sensor inertial output data with the second sub-sensor inertial output data includes applying pattern recognition to the second sub-sensor inertial output data to generate a decomposed inertial output data representative of the first sub-sensor inertial output data.
- In some implementations of the method, the first sub-sensor inertial output data and the second sub-sensor inertial output data are filtered to minimize signal noise through signal conditioning.
- the first sub-sensor inertial output data and second sub-sensor inertial output data from each of the plurality of IMUs are received periodically, at intervals of less than approximately 1 ms, for a continuous high sampling rate.
- the first sub-sensor and the second sub-sensor are each one of: accelerometer, magnetometer, gyroscope, altimeter, and pressure sensor and the first sub-sensor is a different sensor type than the second sub-sensor.
- the predetermined shape is one of: a plane, a tetrahedron, a cube, or any Platonic solid, or any other irregular configuration with known distances and angles between IMUs.
- IMUs used to calculate the overall drift-free or rectified system output are oriented at different angles along two different axes relative to each other.
- calculating the current location and orientation of the object based on the difference between the current object rectified data and the previous object rectified data output includes: identifying an edge condition; and blending the current object rectified data output and the previous object rectified data output to remove the edge condition using neural networks.
- Figures 1A-1F illustrate various configurations of motion sensors mounted on two-dimensional ("2-D") or three-dimensional ("3-D") objects, in accordance with some implementations.
- Figure 2 is a block diagram illustrating a representative system with sensor(s) with drift correction, according to some implementations.
- Figure 3 is a flow diagram illustrating the flow of sensor data through a representative system with drift correction, according to some implementations.
- Figures 4A-4D illustrate a flowchart representation of a method of tracking position and orientation of an object using a tracking device, according to some implementations.
- Figure 5A shows examples of application of point estimation for steps for a patient without injuries, according to some implementations.
- Figure 5B shows examples of application of point estimation for steps for a patient with injuries, according to some implementations.
- Figure 6 shows an example of a wrist or glove mounted singular device/system that provides biometric measurements, according to some implementations.
- Figure 7 shows an example system 700 for diagnosis and/or treatment, according to some implementations.
- Figure 8 shows a perspective view of a medical pod according to some implementations.
- Figure 9 shows a perspective view of a medical pod according to some implementations, with the ceiling and an exterior wall removed to show the inside components.
- Figure 10 shows the “Power Wall” (which is the heat and electricity storage/generation module), and the “Hydro Wall” (which is the domestic water and hot water storage/filtration/generation module) of a medical pod according to some implementations.
- Figure 11 shows a schematic of the Location-Aware mesh network fitted to a medical pod according to some implementations.
- Figure 11b depicts a simulation of an astronaut on the Moon or on Mars, wearing a space suit fitted with embedded/wearable location, musculoskeletal, and biometric sensors, and additionally fitted with a ROMOS Mesh Network node, according to a preferred embodiment of this invention.
- Figure 12 shows the schematic 2D floor layout inside a medical pod according to some implementations.
- Figure 13 shows the schematic of a general AI-driven diagnosis model according to some implementations.
- Figure 14 shows the first stage of the process of symptomatic assessment in patient care according to some implementations.
- Figure 15 shows the second stage of the process of symptomatic assessment in patient care according to some implementations.
- Figure 16 shows a schematic of the model of gait training and analysis according to some implementations.
- Figure 17 shows a schematic of a Natural Language Processing (NLP) and Natural Language Understanding (NLU) technique for Normalization & Standardization of Biometric Measurements/Observation, according to some implementations.
- Figure 18 shows an exemplary schematic of live data conversion model from Interoperable Al in FHIR standard after training according to some implementations.
- Figure 19 shows an exemplary schematic of an AI-Driven Diagnosis Training model which assists the AI system in learning and classifying outputs, according to some implementations.
- Figure 20 shows an exemplary schematic of a model for Cross-Referencing Memorized Standards Databases to Reject Hallucinations.
- Figure 21 shows an exemplary schematic of a more complex AI-Driven Diagnosis model, whereby the Memory AI can be integrated into the diagnosis learning model to produce hallucination-free trained outputs directly.
- Although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
- a first electronic device could be termed a second electronic device, and, similarly, a second electronic device could be termed a first electronic device, without departing from the scope of the various described implementations.
- the first electronic device and the second electronic device are both electronic devices, but they are not necessarily the same electronic device.
- the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting" or "in accordance with a determination that," depending on the context.
- the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
- The techniques described herein relate to motion sensors that correct for drift. Applications for such motion sensors include, but are not limited to, gaming systems, smartphones, helmet-mounted displays, military applications, and gesture tracking devices.
- U.S. Patent No. 9,417,693 (the "'693 Patent") describes a wearable wireless human-machine interface (HMI).
- a user can control a controllable device based on gestures performed by the user using the wearable HMI.
- a controller to track motion and correct for drift may be connected to the IMUs of the wearable HMI.
- the controller is attached to or integrated in the wearable HMI.
- the controller is remote from the wearable HMI but communicatively coupled to the wearable HMI.
- Figures 1A-1F illustrate various configurations of motion sensors mounted on 3-D objects, in accordance with some implementations.
- Motion sensors may be mounted in linear arrays, on planar surfaces, or at the vertices of a myriad of geometric configurations formed by any planar surface, Platonic solid, or irregular 3-D object.
- drift can be eliminated by, among certain methods or portions thereof described herein, resetting the motion sensors' instantaneous measured acceleration, angular velocity, magnetic orientation, and altitude to match the known geometry formed by the physical distances and angles of the motion sensors relative to each other, as further described in reference to the flowcharts of Figures 4A-4D below.
- in Figure 1A, two sensors 102, 104 are positioned adjacent to each other at a fixed distance 107, and the angles between the two sensors can be considered to be approximately 0 degrees or approximately 180 degrees.
- this drift can be removed and positions of the two motion sensors can be reset to a fairly accurate degree.
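As a hedged sketch of this reset for the two-sensor case of Figure 1A, drifted position estimates can be projected back onto the constraint that the sensors sit a fixed distance apart (the symmetric correction about the midpoint is an assumption, not the patent's stated method):

```python
import numpy as np

def reset_to_known_geometry(p1, p2, fixed_distance):
    """Re-conform two drifted sensor position estimates to the known mounting
    geometry: keep their midpoint and direction, restore the true spacing."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    midpoint = (p1 + p2) / 2.0
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)
    return (midpoint - direction * fixed_distance / 2.0,
            midpoint + direction * fixed_distance / 2.0)

# Drifted estimates 0.12 m apart when the sensors are actually 0.10 m apart:
a, b = reset_to_known_geometry([0.0, 0.0, 0.0], [0.12, 0.0, 0.0], 0.10)
print(a, b, np.linalg.norm(b - a))  # spacing restored to 0.10 m
```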
- a planar configuration of three (3) or four (4) or more sensors can provide a spatial calculation based on a higher number of IMU readings of instantaneous measurements of all sensors in the array with known physical angles and distances between them.
- Figure 1B shows a four-sensor configuration with sensors 106, 108, 110, and 112 mounted adjacent to each other in a planar configuration.
- Planar configurations, such as the configurations shown in Figures 1A and 1B, provide a simpler mathematical model with fairly low demand for computation.
- variations in axial motion detection methods of the physical sensors may affect the accuracy of measurement in different axes of motion and orientation.
- motion in the Z-axis of a MEMS-based sensor is heavily biased by the gravity vector, which may introduce higher variance in the physical motion of the sensor in this axis.
- the Coriolis force, used to calculate yaw about the Z-axis, is also susceptible to larger variance than in the X- or Y-axis.
- a tetrahedron configuration with four (4) sensors, one mounted on each face of the tetrahedron, can provide a blend of multi-axial data, resulting in better complementary and compensatory measurement for the gravity vector bias than the single Z-axis of all sensors, according to some implementations.
- Figures 1C and ID show one such configuration.
- Figure 1C shows the top-oblique view of a tetrahedron with motion sensors 114, 116, 118 mounted on each of the three visible faces.
- Figure 1D shows the bottom-oblique view of the tetrahedron shown in Figure 1C, showing the additional sensor 120 on the fourth face of the tetrahedron.
- a component of the X and Y axes is also exposed to the gravity vector from at least three sensors at any given time, permitting a higher degree of accuracy through the removal of the gravity vector from a number of sensors and a number of axes at any instantaneous measurement.
- Sensors are mounted at angles on each surface, providing a blend of X, Y, and Z axis data for better spatial calculations and drift correction, in accordance with some implementations.
- Figure 1E shows an oblique view of a cubic configuration, according to some implementations. Only three of the six faces are visible in Figure 1E. Each of the six faces may have a sensor mounted, including the sensors 122, 124, and 126. In some implementations, only some (fewer than all) faces of an object described herein have at least one sensor. In this configuration, each sensor on each face enables a complementary reading between the sensors on the other faces of the cube. However, as the number of sensors increases, the latency to read all measurements also increases in cubic or higher-dimensional solid geometries.
- Motion sensors can also be rotated on opposite faces of the geometric solids to provide an axial blend in any configuration, according to some implementations.
- Figure 1F shows an oblique view of another configuration of the cube in Figure 1E, wherein motion sensors are mounted on each face of the cube as before, but sensors may be rotated at an angle between zero (0) and ninety (90) degrees, non-inclusive.
- sensor 122 may be rotated at an angle of approximately forty-five (45) degrees with respect to the other sensors.
- while this method may provide a better analysis of instantaneous motion data, the computation time per measurement-to-calculation output may be longer.
- Figure 2 is a block diagram illustrating a representative system 200 with drift-free sensor(s), according to some implementations.
- the system 200 includes one or more processing units (e.g., CPUs, ASICs, FPGAs, microprocessors, and the like) 202, one or more communication interfaces 214, memory 220, and one or more communication buses 216 for interconnecting these components (sometimes called a chipset).
- the type of processing units 202 is chosen to match the requirements of the application, including power requirements, according to some implementations. For example, the speed of the CPU should be sufficient to match the application throughput.
- the system 200 includes a user interface 208.
- the user interface 208 includes one or more output devices 210 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
- user interface 208 also includes one or more input devices 212, including user interface components that facilitate user input such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing device, or other input buttons or controls.
- some systems use a microphone and voice recognition or a camera and gesture recognition or a motion device and gesture recognition to supplement or replace the keyboard.
- the system 200 includes one or more Inertial Measurement Unit(s) 204.
- the IMUs include one or more accelerometers, magnetometers, gyroscopes, altimeters, and/or pressure sensors.
- the one or more IMUs are mounted on an object that incorporates the system 200 according to a predetermined shape.
- Figures 1A-1F described above illustrate various exemplary configurations of motion sensors.
- the initial configuration of the IMUs (e.g., the number of IMUs, the predetermined shape) is also determined based on characteristics of the individual IMUs.
- the orientation or the axis of the IMUs, and therefore the predetermined shape are chosen so as to compensate for manufacturing defects.
- the one or more IMUs are fabricated as a CMOS and MEMS system on a chip (SoC) that incorporates the system 200.
- Communication interfaces 214 include, for example, hardware capable of data communications using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, WirelessHART, MiWi, etc.) and/or any of a variety of custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
- Memory 220 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, one or more EPROMs, one or more EEPROMs, or one or more other non-volatile solid state storage devices.
- Memory 220, or alternatively the non-volatile memory within memory 220 includes a non-transitory computer readable storage medium.
- memory 220, or the non-transitory computer readable storage medium of memory 220, stores the following programs, modules, and data structures, or a subset or superset thereof:
- operating logic 222 including procedures for handling various basic system services and for performing hardware dependent tasks
- device communication module 224 for connecting to and communicating with other network devices (e.g., a network interface, such as a router that provides Internet connectivity, networked storage devices, network routing devices, a server system, etc.) connected to one or more networks via one or more communication interfaces 214 (wired or wireless);
- input processing module 226 for detecting one or more user inputs or interactions from the one or more input devices 212 and interpreting the detected inputs or interactions;
- user interface module 228 for providing and displaying a user interface in which settings, captured data, and/or other data for one or more devices (not shown) can be configured and/or viewed;
- one or more application modules 230 for execution by the system 200 for controlling devices, and for reviewing data captured by devices (e.g., device status and settings, captured data, or other information regarding the system 200 and/or other client/electronic devices);
- one or more controller modules 240, which provide functionalities for processing data from the one or more IMUs 204, including but not limited to:
- data receiving module 242 for receiving data from the one or more IMUs 204
- the raw data received by the data receiving module 242 from the IMUs include acceleration information from accelerometers, angular velocities from gyroscopes, degrees of rotation of magnetic field from magnetometers, atmospheric pressure from Altimeters, and differential Pressure Sensors.
- the raw data is received from each of the IMUs sequentially, according to some implementations.
- the IMU data is received in parallel.
- the filtering module 244 filters the raw data to remove noise from the raw data signals received by the data receiving module 242.
- the filtering module 244 uses standard signal processing techniques (e.g., low-pass filtering, clipping, etc.) to filter the raw data thereby minimizing noise in sensor data, according to some implementations.
- the filtering module 244 also computes moving averages and moving variances using historical data from the sensors, according to some implementations.
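The text does not name a specific filter; as a hedged illustration, the signal conditioning plus moving statistics might look like the following (the smoothing factor and window size are assumed values):

```python
from collections import deque
import statistics

class SignalConditioner:
    """Sketch of the filtering described for module 244: an exponential
    low-pass stage plus a moving average and moving variance computed over
    a sliding window of historical samples. Parameters are illustrative."""

    def __init__(self, alpha=0.2, window=50):
        self.alpha = alpha                   # low-pass smoothing factor
        self.history = deque(maxlen=window)  # recent smoothed samples
        self.smoothed = None

    def update(self, raw):
        # Low-pass filter: blend the new sample with the running estimate.
        self.smoothed = raw if self.smoothed is None else (
            self.alpha * raw + (1.0 - self.alpha) * self.smoothed)
        self.history.append(self.smoothed)
        mean = statistics.fmean(self.history)
        var = statistics.pvariance(self.history) if len(self.history) > 1 else 0.0
        return self.smoothed, mean, var
```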
- the dynamic calibration module 246 uses an Artificial Intelligence (AI) framework (e.g., a neural network framework) to calibrate data from the one or more IMUs 204.
- one or more “neurons” are configured in a neural network configuration to calibrate the filtered data for the one or more IMUs 204.
- the shape of the object (sometimes herein called a predetermined shape) is a cuboid for the sake of explanation.
- a cuboid-shaped object could be placed on a planar surface six different ways (i.e., on six different faces of the cuboid). So there are six orientations to calibrate on.
- the system 200 collects a large number of samples (e.g., approximately 1,000 or more samples) for each of the six orientations. This sampled data is collected and stored in memory 220. Later, when raw data is received, the stored sampled data is used as a baseline to correct any offset error in the raw data during sedentary states (i.e., when the object is not moving).
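A minimal sketch of this baseline-offset idea follows, assuming a cuboid with one stored calibration set per resting face; the gravity convention and data handling are assumptions:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # assumed axis convention

# One stored offset per resting orientation (the six faces of the cuboid).
baselines = {face: np.zeros(3) for face in range(6)}

def record_baseline(face, samples):
    """Average ~1,000 stationary accelerometer samples for one orientation;
    the residual after removing gravity is that orientation's offset error."""
    baselines[face] = samples.mean(axis=0) - GRAVITY

def correct_sedentary_reading(face, raw):
    """Subtract the stored offset for the detected resting orientation."""
    return raw - baselines[face]
```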
- the weights of the network are constantly tuned or adjusted based on the received raw data from the IMUs after offsetting the stored sampled data, according to some implementations.
- a neural network-based solution provides better estimates than a least squares regression analysis or statistical measures.
- the neural network weights are adjusted dynamically. For example, consider when the object is stationary but the neural network output indicates that the object is moving. The weights are readjusted, through back propagation, such that the output indicates that the object is stationary. Thus the weights settle during times when the object is stationary.
- the learning rate of the neural network is maximized during sedentary states (sometimes herein called stationary states), and minimized when the object is in motion.
- Pattern recognition is used to detect whether the object is moving or is stationary so that the learning rate can be adjusted, according to some implementations.
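As a sketch, the learning-rate schedule might gate on a simple stationarity test; the variance threshold and rates below are assumptions standing in for the pattern recognition the text describes. While stationary, the target output ("no motion") is known exactly, so the discrepancy can be back-propagated at the higher rate and the weights settle:

```python
import numpy as np

def calibration_learning_rate(accel_window,
                              stationary_lr=1e-2,
                              moving_lr=1e-5,
                              var_threshold=1e-4):
    """Maximize the dynamic-calibration network's learning rate while the
    recent accelerometer window looks stationary, and minimize it in motion.
    All constants are assumed values for illustration."""
    stationary = float(np.var(accel_window, axis=0).max()) < var_threshold
    return stationary_lr if stationary else moving_lr
```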
- the different stationary and mobile states are used to adjust the weights affecting the accelerometer.
- a known reference to magnetic north is used to constantly adjust the weights that correspond to the magnetometers.
- the magnetometer data is also used to correct or settle the weights for the accelerometers when the object is moving because the reference point for the magnetic north and gravity vector are always known.
- Gyroscope data is more reliable than data from accelerometers because it requires only a single level of integration. So the gyroscope data is also used to correct accelerometer weights, according to some implementations.
- the dynamic calibration module 246 is optional, and a pass-through channel passes the output of the filtering module 244 to the motion synthesis module 250 without dynamic calibration.
- the motion decomposition module 248 uses pattern recognition techniques to eliminate anomalies due to cross-interaction or interference between the sub-sensors, in each IMU.
- Experimental data is collected for controlled translational and rotational movements of an object. For example, the behavior of the gyroscope is tracked under constant velocity and the pattern is stored in memory. When the gyroscopic data follows the known pattern, the fact that the object is under constant velocity is deduced based on this pattern.
- Similarly, accelerometer data (e.g., the constant gravity vector) and/or magnetometer data can be used to identify patterns to correct errors in accelerometer data and/or gyroscope data, according to some implementations.
- the motion decomposition module 248 removes anomalies by observing changes in patterns detected from sensor data, such as when the object stops moving or rotating abruptly, as another means of correcting for anomalies. In some implementations, the motion decomposition module 248 analyzes several distinct stored patterns for correcting anomalies in each of the sensors. In some implementations, the motion decomposition module 248 categorizes the type of translational and/or rotational movements of each IMU of the tracked object and outputs the pattern or the category to the motion synthesis module 250. For example, the motion decomposition module 248 deduces that each IMU is in one of many states, including simple linear motion, simple linear motion with rotation, and nonlinear motion with simple rotation. In some implementations, output from the motion decomposition module 248 additionally controls the learning rate in the dynamic calibration module 246.
- the motion synthesis module 250 uses the state information (e.g., constant velocity, constant acceleration, changing acceleration, in combination with rotation) from the motion decomposition module 248 to select one or more algorithms/methodologies. The motion synthesis module 250 subsequently applies the one or more algorithms on the data output from dynamic calibration module 246 to synthesize the motion of the object (sometimes herein referred to as the computation of overall rectified data for the one or more IMUs).
- the motion synthesis module 250 uses an equation to compute the axis of rotation based on the difference in angular momentum of the IMUs (as indicated by the output of the dynamic calibration module) and the known shape outlined by the predetermined position of the different IMUs.
- the object is mounted with IMUs in a planar configuration, such as in Figure IB, with four sensors, each sensor in a corner.
- consider the planar configuration positioned vertically in a diamond shape, with the longitudinal axis passing through the top IMU and the bottom IMU.
- the side IMUs on either side of the longitudinal axis will share the same angular momentum but will have a different angular momentum compared to the top IMU and the bottom IMU, and the top IMU will have an angular velocity greater than that of the bottom IMU, which is closer to the axis of rotation.
- the motion synthesis module 250 computes or synthesizes the rotational axis data from the differences in the angular momentums and the known distances between the sensors, based on the shape formed by the IMUs.
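The specification does not spell out the equation; one hedged reconstruction uses the rigid-body relation v_i − v_j = ω × (r_i − r_j), so per-IMU velocity differences plus the known mounting positions yield the rotation axis by least squares:

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_angular_velocity(positions, velocities):
    """Solve v_i - v_j = omega x (r_i - r_j) over all IMU pairs in the least
    squares sense; differencing cancels any shared translational velocity.
    A sketch of the geometry-based synthesis, not the patent's exact equation."""
    rows, rhs = [], []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            rows.append(-skew(positions[i] - positions[j]))  # omega x dr == -skew(dr) @ omega
            rhs.append(velocities[i] - velocities[j])
    omega, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return omega  # direction gives the rotation axis, norm the angular speed

# Diamond configuration from the example above (top, bottom, two sides):
r = [np.array(p) for p in ([0.0, 0.1, 0.0], [0.0, -0.1, 0.0],
                           [0.1, 0.0, 0.0], [-0.1, 0.0, 0.0])]
true_omega = np.array([0.0, 0.0, 2.0])
v = [np.cross(true_omega, ri) for ri in r]
print(estimate_angular_velocity(r, v))  # recovers [0, 0, 2]
```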
- the drift correction module 252 uses Bayes' theorem to remove drift by re-conforming sensor positions and orientation to the known (sometimes herein called predetermined) shape.
- the Bayesian filter predicts how much the IMU data is drifting by.
- the drift correction module 252 computes the skew in the data from the motion sensors based on the variation in the norms, distances, and angles between the sensors. If the variation in the norms exceeds a threshold, the drift correction module 252 generates a correction matrix (sometimes called a drift matrix) to eliminate drift in successive sensor readings.
- a shape correcting module (not shown) corrects the data output from the dynamic calibration module (sometimes herein called the clean or filtered data) using the correction matrix, by subtracting the predicted drift from the clean data, in a continuous or iterative fashion, according to some implementations. For example, after every reading of sensor data, previously generated and stored data from the drift correction module 252 is used to correct the clean data output from the noise-filtered, and dynamic calibration module, according to some implementations.
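A hedged sketch of the shape re-conformance: compare the estimated IMU positions against the known mounting shape and emit a correction only past a threshold. The Kabsch/Procrustes alignment below is an assumed stand-in for the Bayesian prediction step, which the text does not detail:

```python
import numpy as np

def drift_correction_matrix(estimated, reference, threshold=0.005):
    """Return per-IMU corrections that re-conform drifted position estimates
    to the known (predetermined) shape, or zeros if within tolerance.
    `estimated` and `reference` are (N, 3) arrays of IMU positions."""
    est_c = estimated - estimated.mean(axis=0)   # remove common translation
    ref_c = reference - reference.mean(axis=0)
    # Best rotation mapping the known shape onto the estimates (Kabsch;
    # reflection handling omitted for brevity in this sketch).
    u, _, vt = np.linalg.svd(ref_c.T @ est_c)
    rotation = vt.T @ u.T
    conformed = ref_c @ rotation.T + estimated.mean(axis=0)
    deviation = np.linalg.norm(estimated - conformed, axis=1).max()
    if deviation < threshold:
        return np.zeros_like(estimated)          # shape intact: no correction
    return conformed - estimated                 # applied on the next iteration
```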
- the edge condition handling module 254 handles complex movements (e.g., while spinning along two axes and moving across a straight line, the object also lifts up) and/or transitional movements (e.g., transitioning from spinning to moving laterally along a straight line) to reduce drift based on the output of the drift correction module 252.
- the edge condition handling module 254 uses AI to apply probability weightings to compensate for the edge conditions.
- the edge condition handling module 254 blends a current object common data point (e.g., output by the drift correction module 252) and the previous object common data point (e.g., previous output for a prior sensor reading by the drift correction module 252 that is stored in memory) to remove the edge condition.
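As a minimal illustration, the blend might be a confidence-weighted convex combination of the two data points; the scalar confidence here is an assumed stand-in for the AI probability weighting described above:

```python
import numpy as np

def blend_edge_condition(current, previous, confidence):
    """Convex blend of the current and previous rectified outputs. Low
    confidence in the current reading (e.g., during an abrupt transition
    from spinning to straight-line motion) pulls the result toward the
    previous, smoother estimate. Illustrative only."""
    current, previous = np.asarray(current), np.asarray(previous)
    return confidence * current + (1.0 - confidence) * previous

# An edge condition detected with 30% confidence in the current reading:
print(blend_edge_condition([1.00, 0.20, 0.05], [0.95, 0.18, 0.04], 0.3))
```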
- each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- memory 220 optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 220, optionally, stores additional modules and data structures not described above.
- one or more processing modules and associated data stored in the memory 220 are stored in and executed on a second processing device, other than the system with drift-free motion sensors 200, that is configured to receive and process signals produced by the IMUs 204.
- the second processing device might be a computer system, smart home device or gaming console that executes applications (e.g., computer games) at least some of whose operations are responsive to motion signals provided by the IMUs.
- Figure 3 is a flow diagram illustrating the flow of sensor data through a representative system with drift-free sensor(s), according to some implementations.
- Raw data (302) from the one or more IMU sensors (IMU 0, IMU 1, IMU 2, . . ., IMU N) is received (324) by the controller 300 (e.g., controller module 240).
- the controller receives the data from the one or more IMUs in parallel (as shown in Figure 3).
- the received data is output as raw data (304) to the motion decomposition module 326, according to some implementations.
- the raw data is also input as data 306 to a filtering module 328 which filters the raw data to produce filtered data 310 which is in turn input to a dynamic calibration module 330.
- the motion decomposition module 326 also controls (314) the learning rate of the dynamic calibration module 330.
- the motion decomposition module 326 and/or the dynamic calibration module 330 are optional modules. In such cases, the filtered data 310 is input (not shown) to the motion synthesis module.
- the motion synthesis module 332 in these cases, does not know the pattern or category of motion but iteratively applies one or more algorithms or equations to synthesize motion.
- the motion decomposition 326 and the dynamic calibration 330 steps execute asynchronously and/or in parallel.
- the Bayes calculation step 336 uses the output 316 of the motion synthesis module to generate drift correction matrices 320 (as described previously with reference to Figure 2), which are consumed by a shape correction module 334 to correct input in the next iteration (i.e., when and after such data becomes available) of motion synthesis.
- initially, the shape correction data is not available, and the dynamic calibration output 312 is input directly to the motion synthesis step 332.
- the output of the Bayes calculation step 336 (318) is input to a step 338 to handle edge conditions (described above in reference to Figure 2) for complex movements and dynamic learning.
- the output 322 indicates drift-free real motion output of the controller, according to some implementations.
- filtering module 328 includes similar functionality to filtering module 244 in Figure 2; motion decomposition module 326 includes similar functionality to motion decomposition module 248 in Figure 2; dynamic calibration module 330 includes similar functionality to dynamic calibration module 246 in Figure 2; shape correction module 334 includes similar functionality to shape correction module described above in the description for Figure 2; motion synthesis module 332 includes similar functionality to motion synthesis module 250 in Figure 2; Bayes calculations module 336 includes similar functionality to drift correction module 252 in Figure 2; and handle edge conditions module 338 includes similar functionality to edge condition handling module 254 in Figure 2.
- Figures 4A-4D illustrate a flowchart representation of a method 400 of tracking position and orientation of an object using a tracking device, according to some implementations.
- the tracking device includes (402) one or more sides that define a predetermined shape, and a plurality of inertial measurement units (IMU) mounted to the one or more sides of the predetermined shape.
- each IMU includes a first sub-sensor and a second sub-sensor, and each IMU is positioned at a predetermined distance and orientation relative to a center of mass of the tracking system, according to some implementations.
- Figures 1A-1F described above illustrate various configurations of sensors mounted on 3-D objects, according to some implementations.
- the first sub-sensor and the second sub-sensor of the tracking device are each one of: accelerometer, magnetometer, gyroscope, altimeter, and pressure sensor and the first sub-sensor is a different sensor type than the second sub-sensor.
- the predetermined shape of the tracking device is one of: a plane, a tetrahedron, and a cube.
- the tracking device also includes a controller communicatively coupled to the plurality of IMUs. An example system 200 with IMUs 204 was described above in reference to Figure 2, according to some implementations.
- each IMU of the tracking device detects movement of the object and generates inertial output data representing location and/or orientation of the object.
- IMUs 204 in Figure 2, or the sensors in Figures 1A-1F, use a combination of accelerometers, magnetometers, gyroscopes, altimeters, and/or pressure sensors to detect movement of the object and generate data that represents location and/or orientation of the object.
- the tracking device receives (412) first sub-sensor inertial output data and second sub-sensor inertial output data from each of the plurality of IMUs.
- the data receiving module 242 of the system 200 receives the output from the one or more IMUs 204 via the communication bus 216.
- the controller receives (414) the first sub-sensor inertial output data and the second sub-sensor inertial output data from each of the plurality of IMUs periodically, at intervals of less than approximately 1 ms, for a continuous high sampling rate.
- the controller uses a filtering module (e.g., module 244) to filter (416) the first sub-sensor inertial output data and second sub-sensor inertial output data to minimize signal noise.
- the controller performs a sequence of steps 418 for each IMU, according to some implementations.
- the controller generates (420) calibrated inertial output data based on the first sub-sensor inertial output data and the second sub-sensor inertial output data.
- the controller uses the dynamic calibration module 246 to generate calibrated inertial output data.
- the controller calculates the error value by using (422) neural network weights to evaluate the first sub-sensor inertial output data and the second sub-sensor inertial output data, wherein the weights are adjusted at a learning rate based on the positional state (e.g., stationary position state) of the tracking device, calculating a discrepancy value representative of a difference between an actual movement of the object and an estimated movement of the object, and removing the discrepancy value from the calibrated inertial output data (e.g., using the output of a motion decomposition module, such as module 248).
- the controller applies (424) neural network weights to the first sub-sensor inertial output data and the second inertial output data based on historical (e.g., prior or previous) inertial output data from each of the first and second sub-sensors.
- the controller stores and/or accumulates inertial output data received from the IMUs over time that is later retrieved as historical data.
- the controller uses the dynamic calibration module (e.g., module 246) to cross-correlate (426) the first sub-sensor inertial output data with the second sub-sensor inertial output data to identify and remove anomalies from the first sub-sensor inertial output data and the second sub-sensor inertial output data to generate decomposed inertial output data for each IMU, according to some implementations.
- the controller calibrates (428) the decomposed inertial output data corresponding to the first sub-sensor based on the second sub-sensor inertial output data.
- the controller cross-correlates the first sub-sensor inertial output data with the second sub-sensor inertial output data by applying (430) pattern recognition (e.g., by using a motion decomposition module, such as module 248) to the second sub-sensor inertial output data to generate the decomposed inertial output data representative of the first sub-sensor inertial output data.
- the controller determines (432), using a motion decomposition module (e.g., module 248 described above), a positional and rotational state of the tracking device based on the decomposed inertial output data from each of the IMUs, according to some implementations.
- the controller synthesizes (434), using a motion synthesis module (e.g., module 250 described above), first sub-sensor inertial output data and second sub-sensor inertial output data to create IMU synthesized data using a synthesizing methodology based on the positional and rotational state of the tracking device, according to some implementations.
- the controller calculates (436), using an ACFBT calculation module (not shown), a current tracking device rectified data output based on the data synthesized for each of the IMUs, a predetermined position of each of the IMUs, and a predetermined orientation of each of the IMUs, to conform to a predetermined shape.
- at least some of the IMUs used to calculate the common data point are oriented at different angles along two different axes relative to each other.
- the controller subsequently calculates (440), using a current position and orientation determination module (e.g., module 252 in Figure 2, or steps 336 and 334 in Figure 3), a current position and orientation of an object based on a difference between the current object rectified data output and a previous object rectified data output, according to some implementations.
- the controller identifies (442) an edge condition (e.g., complex movements described above) and blends (444), using an edge condition handling module (e.g., module 254 described above), the current object rectified data output and the previous object rectified data output to remove the edge condition.
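- By way of illustration only, the sketch below shows one way the edge-condition blending described above could be expressed; the jump threshold and blending factor are illustrative assumptions:

```python
import numpy as np

def blend_edge(current: np.ndarray, previous: np.ndarray,
               jump_thresh: float = 0.5, alpha: float = 0.3) -> np.ndarray:
    """If the step between consecutive rectified outputs is too large
    (an edge condition, e.g., a complex movement), blend toward the
    previous output instead of jumping to the new one."""
    step = float(np.linalg.norm(current - previous))
    if step > jump_thresh:                    # edge condition identified
        return previous + alpha * (current - previous)
    return current
```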
- Examples of diagnosis system or method using gesture recognition and biometric data are described herein, according to some implementations.
- Some implementations perform point estimation of joints and/or skeleton of a subject, to obtain movement information for the subject. Some implementations subsequently determine whether the movement information is within a set of predetermined normal parameters. In accordance with a determination that the movement information is not within the set of predetermined normal parameters, some implementations identify that the subject is injured. Some implementations calculate a probability of an injury and/or a probability of further injury, for the subject, based on analysis of the movement information. For example, a particular gait is recognized and subsequently used to diagnose the extent of injury of the subject. Some implementations assign different percentages or probabilities if the subject has a preexisting condition, such as a disability.
- Movement estimation can be used to predict injuries.
- in some implementations, three or more steps of a subject or a patient are used to detect a gesture or a gait.
- in some implementations, a gesture may include a gait, which may be used to predict injury or further injury.
- Some implementations output a diagnosis, such as extent of injury, probability of further injury, and/or exercise regimen for a subject or patient to follow (e.g., “10% chance of further injury or fall, continue scheduled exercise and stretching”). With certain gestures or gaits, some implementations predict and/or display a higher chance of further injury or fall (e.g., 37% chance).
- Figure 5A shows examples of application of point estimation for steps 502, 504 and 506 for a patient without injuries, according to some implementations.
- Figure 5B shows examples of application of point estimation for steps 508, 510, and 512 for a patient with injuries, according to some implementations.
- Different gaits or gestures can be used to train and/or predict injuries, further injuries, and/or extent/severity of injuries.
- A biometric signature, infrared imaging of zones, and similar data may additionally be used to assess the degree of duress a person is under, so that different treatment options may be suggested.
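- By way of illustration only, the following sketch shows how movement information could be checked against a set of predetermined normal parameters to produce the probability-style outputs described above; the parameter names, ranges, and probability mapping are hypothetical:

```python
# Hypothetical normal ranges; real parameters would come from clinical data.
NORMAL = {"stride_s": (0.9, 1.4), "knee_flex_deg": (55.0, 75.0), "sway_cm": (0.0, 4.0)}

def assess(movement: dict) -> str:
    """Flag out-of-range movement parameters and map them to a
    further-injury probability message."""
    abnormal = [k for k, (lo, hi) in NORMAL.items()
                if not lo <= movement[k] <= hi]
    if not abnormal:
        return "10% chance of further injury or fall, continue scheduled exercise"
    p = min(10 + 27 * len(abnormal), 95)      # illustrative mapping only
    return f"{p}% chance of further injury or fall (abnormal: {', '.join(abnormal)})"

print(assess({"stride_s": 1.1, "knee_flex_deg": 48.0, "sway_cm": 6.2}))
```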
- FIG. 7 shows an example system 700 for diagnosis and/or treatment, according to some implementations.
- the system 700 includes one or more tracking devices (e.g., tracking devices 702-2, 702-4, 702-6).
- Each tracking device includes a respective one or more inertial sensors, and each tracking device is configured to track a respective location and orientation of the subject for a respective position on the subject, based on inertial output data from the respective one or more inertial sensors.
- different tracking devices may be attached or affixed to different points shown in Figure 5A.
- the system 700 also includes a measurement device 706 that is configured to measure biometric or vital signs of the subject.
- the system 700 also includes a controller 704 that is coupled to the one or more tracking devices and the measurement device.
- the controller 704 is configured to receive the respective pose, location and orientation of the subject for each measurement, from the one or more tracking devices, for a plurality of time periods, as time series data.
- the controller 704 is also configured to receive the biometric or vital signs of the subject from the measurement device.
- the controller 704 is also configured to identify a gesture for the subject based on at least one of (i) the time series data and (ii) features extracted from the biometric or vital signs of the subject, using a gesture recognition technique.
- the controller 704 is also configured to compare the gesture with a set of predetermined parameters indicative of a medical or physiological condition or an extent of the medical or physiological condition.
- the controller 704 is also configured to output a diagnosis for the subject based on at least one of (i) the comparison, and (ii) the features extracted from the biometric or vital signs of the subject.
- the measurement device 706 is a wearable body, limb, wrist or glove mounted device configured to be worn by the subject.
- a sensor may be strapped to a knee.
- the measurement device 706 is configured to measure one or more biometric or vital signs selected from the group consisting of blood oxygen, heart rate, respiratory rate, blood pressure, temperature, EKG/ECG, blood glucose, ultrasound, CT-scan, magnetic resonance imaging, photolithography and skin condition.
- the measurement device 706 is configured to measure heart rate, blood pressure, blood oxygen concentration, respiratory rate, and temperature.
- Figure 6 shows an example of a wrist or glove mounted singular device/system 600 that provides biometric measurements including, but not limited to, vital signs such as blood oxygen, heart rate, respiratory rate, blood pressure, temperature, EKG/ECG, blood glucose, skin moisture.
- the combination of biometric measurements along with the inertial sensors provides a form of gesture that can be interpreted by artificial intelligence, gesture recognition, or pattern recognition. This in turn is used to provide a diagnosis for a patient/individual. Additionally, or alternatively, the data can be provided as an assisted diagnosis to a physician or practitioner via notifications or a dashboard.
- H refers to heart rate measured in beats per minute
- B refers to blood pressure measured in mmHg
- O refers to blood oxygen concentration measured in SpO2 percentage
- R refers to respiratory rate measured in breaths per minute
- T refers to temperature measured in degrees Celsius. Any number of these vital signs may be used, features may be extracted, and/or used to identify gestures and/or output diagnosis (e.g., used as a subsequent step after gesture recognition and/or infrared imaging).
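- By way of illustration only, the sketch below assembles the H, B, O, R, T vital signs defined above into a feature vector of the kind that could feed gesture identification or diagnosis; the data layout is an assumption for the example:

```python
from dataclasses import dataclass, astuple

@dataclass
class Vitals:
    H: float  # heart rate, beats per minute
    B: float  # blood pressure, mmHg
    O: float  # blood oxygen concentration, SpO2 percentage
    R: float  # respiratory rate, breaths per minute
    T: float  # temperature, degrees Celsius

def features(v: Vitals) -> list[float]:
    """Flatten the vital signs into a feature vector for downstream use."""
    return list(astuple(v))

print(features(Vitals(H=72, B=120, O=98, R=16, T=36.8)))
```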
- the gesture recognition technique used by the controller 704 includes a trained machine learning model that is trained to detect medical or physiological conditions of patients based on gestures, using features extracted from information on subjects.
- the training dataset for the trained machine learning model and/or any other gesture recognition method is obtained by having one or more users wear the tracking devices and/or the measurement device and perform actions (e.g., walking) with or without predetermined gaits or gestures.
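- By way of illustration only, a minimal training sketch under stated assumptions (scikit-learn as the learning library, synthetic stand-in features and labels; this disclosure does not specify a particular model family):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))       # stand-in gait + vitals features per recording
y = rng.integers(0, 2, size=200)    # stand-in labels: 0 = normal gait, 1 = injured gait

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba(X[:1]))   # class probabilities usable in a diagnosis output
```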
- the system 700 further includes an infrared camera device 708 configured to detect injured zones or zones with medical conditions of the subject. The controller is coupled to the infrared camera device and further configured to receive information on the injured zones or zones with medical conditions from the infrared camera device.
- the controller is also configured to identify the gesture for the subject and/or output the diagnosis for the subject further based on information on the injured zones or zones with medical conditions.
- in some implementations, a specific zone is read, which can be detected through machine learning.
- Some implementations identify the zone(s) by combining the biometric data.
- in some implementations, this data is a biometric signature and/or infrared data (e.g., data collected for a swelling in a specific zone). In some instances, blood rushing to that zone may generate specific data, and may increase a user's heart rate and/or respiratory rate.
- these signatures may confirm that there is a severe injury and that the person is in duress. The degree of duress can be used to assess whether the patient needs urgent care, or whether they need only a simple recovery process or a procedure that they can take home. These steps may thus be performed as a way of triaging patients.
- the system 700 further includes a camera device 710 configured to detect injured zones or zones with medical conditions of the subject.
- the controller is coupled to the camera device and further configured to receive information on the injured zones or zones with medical conditions from the camera device.
- the controller is further configured to identify the gesture for the subject and/or output the diagnosis for the subject further based on information on the injured zones or zones with medical conditions.
- body skeletal and/or joint position and/or orientation is received by a point-estimation system in the controller 704 based on a camera and/or infra-red camera.
- the biometric measurements extend to one or more forms of interconnected medical measurement devices, although not explicitly mentioned here.
- other tracking devices include a camera and/or an infra-red camera that employ pose and point estimations for the skeletal system and joints.
- the wearable device 600 is configured to be connected to other non-wearable medical devices and/or to obtain data from other non-wearable medical devices and/or to combine that data with its own data, for gesture or pattern recognition using artificial intelligence and/or other pattern recognition techniques.
- the self-sustained portable, modular, medical facility is referred to as the “pod”, or the “Portable AI-Enabled Medical Facility”, or the “portable medical facility”, or the “medical pod”, or the “advanced medical pod” or by the acronym “AMP”, or other similar variations on the same theme.
- the Portable AI-Enabled Medical Facility encompasses a mobile housing unit designed with renewable energy generation, energy storage, domestic water supply generation, and sanitary services, effectively serving as a self-sustained emergency room. Operated by advanced AI capabilities locally or remotely, the pod offers on-the-go medical support, catering to a wide spectrum of medical requirements, including diagnostics, monitoring, and real-time decision support for both patients and healthcare professionals.
- modules inside the pod perform a Verbal Symptomatic Assessment: Upon arrival at the portable medical facility, a patient may engage with the medical facility using verbal communications to commence symptomatic assessment similar to an exchange with a triage nurse at a hospital.
- a robot or a static HMI (human-machine interface) device, fitted inside the pod, can act as the autonomous nurse in the medical facility.
- modules inside the pod perform a Sentiment Analysis: sentiment is derived from the spoken exchange with the patient, in combination with machine vision, to provide insights into the instantaneous state of the patient.
- modules inside the pod perform a Gait Analysis:
- the Portable AI-Enabled Medical Facility incorporates a sophisticated gait analysis system, utilizing video and motion capture/analysis modules to assess patients' mobility, balance, and detect abnormalities or signs of musculoskeletal disorders.
- the gait analysis AI generates a preliminary list of potential patient abnormal conditions, which are further subjected to a cascading AI-driven diagnostics process.
- the pod performs a Biometric Sensor Integration: The pod is equipped with an array of biometric sensors, including ECG, blood pressure monitors, pulse oximeters, and temperature sensors, etc., which are preferably fitted to the body of a patient (as wearable sensors).
- the pod integrates an AI module capable of adaptively parsing and reformatting data generated by various medical equipment into an interoperable format, facilitating seamless data exchange with other medical systems, thereby eliminating the need for manual reformatting and interpretation.
- the pod performs an AI-Driven Diagnostics function, by deploying a set of cascading AI systems with sparse learning capabilities to analyze interoperable biometric data.
- This AI system assists in diagnosing medical conditions, assessing risk, and formulating treatment plans.
- This AI system identifies anomalies, trends, and early warning signs in a patient's health, thus enabling early preventative interventions.
- the pod features Output Interoperability:
- the pod ensures seamless communication and data exchange with remote hospitals and clinics. It adheres to standardized medical/patient data formats, guaranteeing compatibility and secure transmission of medical information. This interoperability enhances telemedicine capabilities and facilitates remote consultations with specialists.
- the pod is fitted with Assistive Robotics:
- the Portable AI-Enabled Medical Facility incorporates mobile robotics and/or robots as mobile units within the facility, such as the assistive robots currently offered commercially by the TEMI company of Israel, the USA, and Canada, which sells service robots for healthcare facilities. These robots provide nursing and medical assistance to patients, engaging in activities such as patient interaction, offering physical support in mobility, and efficiently transporting instruments and medications throughout the medical facility.
- an advanced medical pod according to some implementations has the following overall construction: the AMP is preferably built using a shipping container as a base, which is an ideal structure from which to develop its medical capabilities, since it provides a robust structure and enables easy transportation, characteristics which are key for the deployment of such pods in harsh and distant environments.
- the Advanced Medical Pod has the following salient features according to some implementations:
- 4000 L rain, stream, or well water refillable tank with a UV & RO water filtration system
- the solar panels and the roof items may be recessed.
- Figure 9 shows a perspective view of a medical pod according to some implementations, with the ceiling and an exterior wall removed to show the inside components and layout.
- the AMP’s interior is fabricated to medical clean-room standards with an added benefit of conformal coating between the metallic structure and inner-wall layer to provide hermetic sealing and insulation.
- the AMP is preferentially equipped with over six medical devices, including a Temi Human Assistant robot with computer vision and fourteen languages to simplify care for patients.
- Medical equipment storage, including wearables, defibrillator, O2 mask, and first response kit
- Air intake port for heat recovery in power-wall 19.
- a preferred embodiment of the pod is 12.5 m long and 2.5 m wide.
- Figure 10 shows the “Power Wall” (which is the heat and electricity storage/generation module), and the “Hydro Wall” (which is the domestic water and hot water storage/filtration/generation module) of a medical pod according to some implementations.
- the “Power Wall”: AMP is preferably fitted with two interior power-walls (electricity storage units) capable of delivering 4 - 12 kVA or more of power each to supply each medical bay and the practitioners’ laboratory. In case of a single power-wall failure, the second power-wall can be used as a redundant fall-back to power the entire AMP pod.
- Fuel tank for genset (engine-based electrical generator);
- the “Hydro Wall”: Figure 10 shows a cross-section of the Hydro-wall used to supply water for domestic and hydronic heating in the AMP.
- a Filtration system preferably placed above the container supplies make-up water to the water tank.
- Each AMP is preferably fitted with two (2) Hydro-walls. In case of a failure in one Hydro Wall unit, the second Hydro Wall unit can be used in bypass mode.
- FIG. 11 shows a schematic of the Location-Aware mesh network fitted to a medical pod according to some implementations.
- This Location- Aware mesh network is named Romos by the inventor herein.
- the ROMOS Mesh Network is a location-aware network designed for smart-city communications with an implementation independent of the cellular networks, although it can also work with the cellular data networks. Being location-aware, the ROMOS Mesh Network is capable of transporting communication packets via multiple radio signal types and over multiple optimized signal pathways and around obstacles, with an added advantage of forming a high-accuracy global positioning system independent of GPS. Each node in the network is a contributor and a consumer of signals, providing a distributed intelligent processing of data using Micron Digital’s 4CR artificial intelligence to actively qualify the signal transmission path and mechanism.
- each node of the ROMOS mesh network is additionally fitted with at least one Inertial Navigation System (INS) chip (such as the INS chips described in U.S. Patent Application Serial No. 17/084,477 entitled “Motion Sensor with Drift Correction,” filed on October 29, 2020, which is a continuation of U.S. Serial No. 16/453,961, now U.S. Patent No. 10,852,143, entitled “Motion Sensor with Drift Correction,” filed on June 26, 2019, which claims the benefit of U.S. Provisional Patent Application No. 62/690,865 filed on June 27, 2018 entitled “Motion Sensor with Drift Correction”; and U.S. Serial No.
- field assets can be located with millimeter precision up to the transmission range of 80 km of the mesh network. Adding additional nodes to the edge of transmission range extends the communications range by another 40 km per node.
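- By way of illustration only, the range arithmetic stated above can be expressed as:

```python
def mesh_range_km(extra_edge_nodes: int) -> int:
    """Base transmission range is 80 km; each additional node placed at
    the edge of range extends communications by another 40 km."""
    return 80 + 40 * extra_edge_nodes

print(mesh_range_km(2))  # -> 160
```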
- Figure 11b depicts a simulation of an astronaut on the Moon or on Mars, wearing a space suit fitted with embedded/wearable location, musculoskeletal and biometric sensors, and additionally fitted with a ROMOS Mesh Network node, according to a preferred embodiment of this invention; the ROMOS location-aware Mesh Network node worn by the astronaut enables the transmission (relay) of physiological and other data in real time, to a nearby (up to 80 km away) medical pod (fitted with a ROMOS location-aware Mesh Network gateway), or to a nearby rover (also fitted with a ROMOS location-aware Mesh Network node or gateway) for constant assessment by an AI-assisted dashboard trained to do data analytics.
- the nearby pod’s ROMOS location-aware Mesh Network gateway also provides a satellite uplink for global or interplanetary communications.
- the ROMOS mesh network uses strong data encryption to secure data transmissions end-to-end. Medical and Systems data from field assets can be relayed to AMP and over satellite safely following the interoperability and data security protocols defined elsewhere in this document.
- the stationary ROMOS Gateways preferably provide redundant satellite uplinks.
- the mesh network topology of ROMOS location-aware Mesh Network also enables local communications redundancy, according to a preferred implementation. If some nodes within the network experience interruptions, the network self-heals and finds alternative node paths through which to continue data transmission. Nodes provide data backup and queuing in case of connection interruption for re-transmission once uplink is established.
- Figure 12 shows the schematic 2D floor layout inside a medical pod according to some implementations.
- the exemplary Portable Medical Facility is designed with two distinct medical treatment areas and a single central triage zone. To enhance its functionality, the facility is equipped with a wall- or ceiling-mounted microphone array that effectively captures audible input while canceling out background noise through advanced white-noise elimination across multiple microphones. Additionally, a sound announcement system is in place to facilitate clear audible communication between patients and remote healthcare practitioners.
- the exemplary Portable Medical Facility is fitted with two entrance and exit doorways to ensure seamless access. The doorways are preferably fitted with universal/modular docking, enabling wheelchair ramps to be added, or enabling the addition of transitional vestibules and access tunnels, to interconnect two or more pods, to form a multi-pod unit.
- the power wall, which houses the battery and voltage generation system, serves a dual purpose by also acting as a heat source, providing warmth to the triage area and treatment zones through a combination of forced air circulation and thermal radiation. Furthermore, the electronics and battery systems generate heat during charge and dissipation cycles, which can be efficiently recycled to provide supplementary space heating.
- various mechanical systems such as boilers, plumbing, pumps, water storage, sanitary services, and water filtration and purification units are thoughtfully embedded at the opposite end-walls of the facility.
- the patient's journey within the Portable Medical Facility commences in the Triage area of the pod. Upon entering the pod facility, patients can readily seek assistance by speaking or calling for help. A mobile robotics unit is promptly dispatched to the patient's location provided by the system, or patients may choose to interact with interactive screens within the facility to initiate their assessment. Alternatively, a healthcare practitioner present in the facility can personally engage with the patient to conduct an initial assessment.
- Patient Sentiment can be extracted during symptomatic assessment using AI.
- patients may also be requested to undergo gait analysis based on their reported symptoms and injuries.
- patients can wear a portable, compact vitals monitoring device (with sensors described elsewhere in this document) that continuously or intermittently reports their vital signs to the facility's AI system.
- the Gait analysis can also advise the user, through visual, audible, or printed form, on how to correct one's movements and positions, with measurements of body positions and angles.
- This holistic approach combines symptomatic assessment, sentiment analysis, gait analysis, and vital sign monitoring to create a comprehensive profile of the patient's current medical condition. This profile can then be further analyzed by the medical diagnosis AI system, which is described below.
- data collected from various medical instruments, encompassing measurements of (for example) vitals, glucose levels, ultrasound results, and more, is meticulously standardized and translated into a universal format. This crucial step ensures that all data is uniformly prepared for input into the diagnosis AI system, promoting seamless interoperability.
- the data preparation process encompasses the operation of a dedicated medical data translator, which plays a pivotal role in translating and normalizing data. Prior to feeding this data into the diagnosis AI system, it is subjected to a universal translation process. The training and functioning of the universal translator AI, as well as the diagnosis AI, are outlined in detail below.
- the output generated by the diagnosis AI is subjected to a rigorous reference check to validate its accuracy.
- This validation process involves cross-referencing the AI-generated diagnosis with established medical databases, such as LOINC and SNOMED.
- the primary objective is to identify and filter out any potential erroneous or hallucinatory diagnoses. For example, if a diagnosis instance lacks a corresponding reference in the LOINC or SNOMED databases, it is flagged as a potential hallucination and can be discarded.
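- By way of illustration only, the following sketch expresses such a reference check as a code-set membership test; the two codes shown are a tiny stand-in for the full LOINC and SNOMED CT releases:

```python
KNOWN_CODES = {"LOINC:8867-4", "SNOMED:22298006"}  # stand-in reference databases

def filter_diagnoses(candidates: list[tuple[str, str]]) -> list[str]:
    """Keep diagnoses whose reference code exists; flag and discard
    unreferenced ones as potential hallucinations."""
    kept = []
    for label, code in candidates:
        if code in KNOWN_CODES:
            kept.append(label)
        else:
            print(f"discarding potential hallucination: {label} ({code})")
    return kept

print(filter_diagnoses([("myocardial infarction", "SNOMED:22298006"),
                        ("made-up syndrome", "SNOMED:0000000")]))
```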
- the foundational model for AI-Driven diagnosis is depicted in Figure 13 and is a method that revolves around the seamless organization of critical healthcare data into easily digestible and standardized inputs. These inputs serve as the foundation for a diagnostic training model, capable of learning and generating outputs based on a holistic combination of critical or minimum health data needed for generating a useful diagnosis.
- the critical health data sets involved in this process are diverse, encompassing symptomatic information exchanged through spoken language or typed inputs, sentiment analysis derived from both linguistic expressions and biometric sensors, ambulatory analysis facilitated by gait assessment, and the collection of biometric readings from vital signs.
- the integration of these multifaceted data sources allows the model to provide comprehensive and well-informed outputs.
- the method incorporates robust measures to mitigate potential inaccuracies or hallucinations. This is achieved by cross-referencing the generated diagnosis with well-established and standardized medical databases, exemplified by LOINC and SNOMED. These databases serve as reference points, helping to verify the validity of the data and prevent erroneous conclusions. The result is a highly dependable AI-driven diagnosis framework that empowers healthcare professionals and patients with precise and trustworthy health insights.
- Verbal Exchange: The process of symptomatic assessment in patient care typically entails two fundamental stages, as depicted in Figures 14 and 15.
- the first stage involves the extraction of pertinent information from a verbal interaction with the patient. This is achieved through the utilization of a speech-to-text pipeline, in conjunction with state-of-the-art Natural Language Processing and Natural Language Understanding (NLP/NLU) algorithms and Artificial Intelligence (AI) systems.
- the output is recorded in a register, preparing for the second stage of the process. Simultaneously, this accumulated information serves as the basis for formulating the subsequent set of pertinent questions to be posed to the patient, thereby advancing the symptomatic assessment.
- the second stage involves training a symptomatic assessment AI to produce a set of acute, probable diagnoses based solely on the symptomatic verbal inputs.
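- By way of illustration only, the toy sketch below mimics the first stage, with simple keyword matching standing in for the speech-to-text and NLP/NLU pipeline; the symptom table and follow-up questions are hypothetical:

```python
SYMPTOMS = {"chest pain": "Does the pain radiate to your arm or jaw?",
            "dizziness": "Did you lose consciousness at any point?",
            "cough": "Is the cough dry or productive?"}

def assess_utterance(text: str, register: list[str]) -> str | None:
    """Record detected symptoms in the register and return the next
    pertinent question to pose to the patient, if any."""
    for symptom, follow_up in SYMPTOMS.items():
        if symptom in text.lower() and symptom not in register:
            register.append(symptom)    # accumulated input for the second stage
            return follow_up
    return None

register: list[str] = []
print(assess_utterance("I have chest pain and a cough", register))
print(register)
```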
- Gait Training: As described earlier in this document (in the section dealing with diagnosis with inertial sensors and tracking devices), different tracking devices may be attached or affixed to a patient at different points on the body, as shown in Figures 5A and 5B, which are read/measured by various measurement devices, creating datasets which are sent to a controller module.
- a sparse series of randomized normal and specifically injured gait datasets are supplied in the form of joint vertex coordinates and skeletal vectors from body sensors (ROMOS IMU, IMU, Machine Vision, etc.) applied to a variety of human subjects, as shown in Figures 5A and 5B.
- the gait model is trained to recognize and produce outputs based on its training, defining a class of either a specific injury type (e.g., an ankle injury or a shoulder injury) or a normal gait.
- Biometric Measurements by a Multi-Functional Wearable Vital Signs Device: According to a preferred embodiment of this invention, a wearable device has been developed by the inventor to offer comprehensive vital signs monitoring capabilities. As depicted in Figure 6, the device features a prominent main screen for real-time display of vital signs, making it a user-friendly and easily accessible tool for individuals and practitioners.
- the device depicted in Figure 6 is equipped with non-invasive measurement techniques that enable the accurate assessment of an individual's vital signs. It operates on battery power and is seamlessly integrated with a range of connectivity options, including WiFi, Bluetooth, and Cellular connectivity. This ensures that vital sign data can be easily transmitted and accessed, enhancing the versatility of the device.
- the device preferably additionally leverages location-aware Romos mesh networking and GPS positioning (where available) to provide accurate location tracking.
- location-aware Romos mesh networking and GPS positioning where available
- an automated response can be initiated, involving the dispatch of a drone or robotics equipped with the necessary medication and real-time video feedback to the individual's location.
- the device depicted in Figure 6 is highly adaptable, allowing operators to program it for tasks such as assigning identities and tagging for Electronic Medical Records (EMR) and Electronic Health Records (EHR).
- EMR Electronic Medical Records
- EHR Electronic Health Records
- the device's programmability extends to selecting between intermittent interval readings or continuous monitoring of vital signs, catering to diverse healthcare needs.
- the wireless data received from the wearable device depicted in Figure 6 can be seamlessly displayed on remote monitor screens for healthcare practitioners, facilitating timely and informed decision-making.
- the device's capabilities extend beyond traditional vital signs, encompassing a wide array of measurements, including but not limited to blood oxygen, heart rate, respiratory rate, blood pressure, temperature, EKG/ECG, blood glucose, ultrasound, CT-scan, magnetic resonance imaging, radiology imaging, bone scans, photolithography, skin condition, fingerprint, Iris scan, Retina scan, Face recognition, Hand geometry, Voice recognition, Palm print, Signature recognition, DNA profiling, Ear recognition, Keystroke dynamics, Gait analysis, Vein recognition, Odor recognition, Typing rhythm, Heartbeat recognition, Electroencephalogram (EEG), Body temperature, Handwriting analysis, Skin texture, Nail bed recognition, Scent recognition, Lip print analysis, Dental biometrics, Behavioral biometrics, Blood Pressure, Heart Rate, Respiratory Rate, Blood Oxygen Saturation, Temperature, Blood Glucose level, and Skin Moisture.
- Varying combinations of units of measure, associated contexts, and types of measurements serve as the training data for the classification model, according to a preferred embodiment of this invention.
- once the model is trained, it possesses the capability to contextualize information originating from a wide array of medical devices, each with its unique output format.
- the NLP AI classifies and organizes this information into a normalized output format, often adhering to Fast Healthcare Interoperability Resources (FHIR) standards.
- this standardized and normalized output is then transferred to the AI-driven diagnosis layer, where it plays a crucial role in enhancing the precision and efficiency of the diagnostic process.
- the advanced NLP/NLU techniques employed in this workflow not only ensure data consistency but also empower the healthcare system with the ability to make accurate and informed decisions based on a multitude of biometric inputs. This innovation represents a significant leap forward in medical data processing and diagnostic capabilities.
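- By way of illustration only, the following sketch constructs a normalized FHIR R4 Observation resource for a heart-rate reading (LOINC code 8867-4), the kind of output such a translation layer could emit; the patient identifier is hypothetical:

```python
import json

def heart_rate_observation(patient_id: str, bpm: float) -> dict:
    """Build a minimal FHIR R4 Observation for a heart-rate reading."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org",
                          "code": "/min"},
    }

print(json.dumps(heart_rate_observation("example-123", 72.0), indent=2))
```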
- Figure 18 shows an exemplary schematic of a live data conversion model from the Interoperable AI in FHIR standard after training, according to some implementations.
- AI-Driven Diagnosis Training: The data derived from verbal symptomatic assessment, sentiment analysis, gait analysis, and normalized biometric information serves as a structured input for the AI model, as shown in Figure 19, depicting a preferred embodiment of this invention.
- BM: output of normalized biometric analysis
- Diagnosis: prior diagnosis knowledge from common medical practices; multiple options may be provided as assisted-diagnosis options for practitioners to more efficiently evaluate the patient's condition.
- the database-trained model itself does not contain the entire database, yet it possesses the capability to swiftly produce the corresponding record index from the database when presented with a stemmed and lemmatized input to the AI. These numeric indexes are highly efficient during lookup and search operations, ensuring rapid and accurate retrieval of information.
- upon receiving a diagnosis from the AI-Driven Diagnosis model, the output can be further processed by stemming and lemmatization to create a root input for the Memory AI.
- This output index produced by the Memory AI is then compared to the index list from the database. If the index generated by the Memory AI model aligns with an entry in the indexed lookup table, the diagnosis is considered reliable and free from hallucinations, affirming its real-world applicability and validity.
- This innovative approach enhances the confidence and trustworthiness of the diagnostic results, promoting the highest standard of healthcare outcomes.
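- By way of illustration only, the sketch below shows one possible form of this index comparison, assuming NLTK for stemming and lemmatization (the wordnet corpus must be downloaded) and a plain dictionary standing in for the indexed lookup table:

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer  # requires nltk.download("wordnet")

def root_of(text: str) -> str:
    """Stem and lemmatize a diagnosis string into its root form."""
    stem = PorterStemmer().stem
    lemma = WordNetLemmatizer().lemmatize
    return " ".join(stem(lemma(w)) for w in text.lower().split())

INDEX = {root_of("acute ankle sprain"): 10427}  # stand-in indexed lookup table

def validate(diagnosis: str, predicted_index: int) -> bool:
    """The diagnosis is considered hallucination-free only if the index
    produced for its root matches the database's indexed entry."""
    return INDEX.get(root_of(diagnosis)) == predicted_index

print(validate("acute ankle sprains", 10427))
```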
- in some implementations, the Memory AI can be integrated into the diagnosis learning model to produce hallucination-free trained outputs directly.
- This technique is applicable to virtually any field of knowledge to eliminate hallucinations from AI systems.
- stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Pathology (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Physiology (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Mathematical Physics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Software Systems (AREA)
- General Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Fuzzy Systems (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present invention relates to a self-sustained portable medical facility (pod), provided with renewable vital resources and equipped with artificial intelligence (AI) and robotics for comprehensive medical support of patients and healthcare professionals. The invention also relates to systems and/or methods for implementing diagnosis and/or treatment using gesture recognition and/or biometric data, in conjunction with the pod. One system comprises tracking devices that track the pose, location, and/or orientation of a subject for different positions. The invention further relates to a location-aware mesh network capable of transporting communication packets via multiple radio signal types, over multiple optimized signal pathways, and around obstacles, with the added advantage of forming a high-accuracy global positioning system independent of GPS satellites.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263423769P | 2022-11-08 | 2022-11-08 | |
| US63/423,769 | 2022-11-08 | ||
| US202363440090P | 2023-01-19 | 2023-01-19 | |
| US63/440,090 | 2023-01-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024098149A1 true WO2024098149A1 (fr) | 2024-05-16 |
Family
ID=91031608
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2023/051493 Ceased WO2024098149A1 (fr) | 2022-11-08 | 2023-11-08 | Capsule médicale avancée avec système ou méthode de diagnostic utilisant la reconnaissance de geste et des données biométriques |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024098149A1 (fr) |
- 2023-11-08 WO PCT/CA2023/051493 patent/WO2024098149A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4915435A (en) * | 1989-04-05 | 1990-04-10 | Levine Brian M | Mobile operating room with pre and post-operational areas |
| US7794001B2 (en) * | 2004-05-12 | 2010-09-14 | Charlotte-Mecklenburg Hospital Authority D/B/A Carolinas Medical Center | Mobile medical facility |
| US20190169045A1 (en) * | 2012-09-19 | 2019-06-06 | Deka Products Limited Partnership | Apparatus, System and Method for Resource Distribution |
| US20220157143A1 (en) * | 2019-03-22 | 2022-05-19 | Vitaltech Properties, Llc | Baby Vitals Monitor |
| US20210233656A1 (en) * | 2019-12-15 | 2021-07-29 | Bao Tran | Health management |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102023128728A1 (de) | 2023-10-19 | 2025-04-24 | Eckhard Ball | Mobiler Medizin-Container zur automatisierten Erfassung von Diagnosedaten und Bereitstellung von Behandlungsdaten |
| DE102023128728B4 (de) | 2023-10-19 | 2025-12-31 | Eckhard Ball | Mobiler Medizin-Container zur automatisierten Erfassung von Diagnosedaten und Bereitstellung von Behandlungsdaten |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23887240 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 18.09.2025) |