
CN120814966B - A method and system for assisting stroke patients to get up based on multi-scenario body position changes - Google Patents

A method and system for assisting stroke patients to get up based on multi-scenario body position changes

Info

Publication number
CN120814966B
CN120814966B (application CN202511311845.7A)
Authority
CN
China
Prior art keywords
body position
control mode
control
joint
execution unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202511311845.7A
Other languages
Chinese (zh)
Other versions
CN120814966A (en)
Inventor
苏国明
何斌
方晓琳
王超超
徐红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Weiaide Technology Co ltd
Original Assignee
Tianjin Weiaide Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Weiaide Technology Co ltd
Priority to CN202511311845.7A
Publication of CN120814966A
Application granted
Publication of CN120814966B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/10Parts, details or accessories
    • A61G5/14Standing-up or sitting-down aids
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G7/00Beds specially adapted for nursing; Devices for lifting patients or disabled persons
    • A61G7/05Parts, details or accessories of beds
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G7/00Beds specially adapted for nursing; Devices for lifting patients or disabled persons
    • A61G7/05Parts, details or accessories of beds
    • A61G7/053Aids for getting into, or out of, bed, e.g. steps, chairs, cane-like supports
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2200/00Information related to the kind of patient or his position
    • A61G2200/30Specific positions of the patient
    • A61G2200/32Specific positions of the patient lying
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2200/00Information related to the kind of patient or his position
    • A61G2200/30Specific positions of the patient
    • A61G2200/34Specific positions of the patient sitting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2200/00Information related to the kind of patient or his position
    • A61G2200/30Specific positions of the patient
    • A61G2200/36Specific positions of the patient standing
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00General characteristics of devices
    • A61G2203/10General characteristics of devices characterised by specific control means, e.g. for adjustment or steering

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Nursing (AREA)
  • Rehabilitation Tools (AREA)

Abstract


This application provides a method and system for assisting stroke patients to stand up based on multi-scenario body position transitions. First, the application controls the standing device to obtain fused perception results composed of body pressure distribution data and joint motion data, and performs multi-feature coupling analysis to generate a joint recognition conclusion including body position and scenario type. Then, based on the joint recognition conclusion, dynamic pattern mapping is performed to output a target control mode matching the current scenario type and body position. Finally, the target control mode is used to synthesize power commands to generate a collaborative control signal that drives the standing device to perform body position transitions. The technical solution provided by this application not only achieves a precise transition from standardized control to personalized control, improving the adaptability and user experience of the standing process, but also overcomes uncertainties and individual differences in actual operation, improving the stability, safety, and adaptability of the body position transition process.

Description

Method and system for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion
Technical Field
The present application relates to the technical field of medical assistive device control, and in particular to a method and system for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion.
Background
Because stroke patients often suffer from limb hemiplegia, dystonia, and reduced postural control capability due to central nervous system injury, the transitions from a lying position to a sitting position, and from a sitting position to a standing position, are difficult, and an intelligent standing-assistance technology that can replace or supplement manual nursing care is needed.
A representative existing solution is a lifting-assisting device based on a mechanical transmission and a turnover-plate structure: a servo motor drives a transmission wheel set that rotates the turnover plate upward to provide back support, while a winding mechanism for an assistance belt provides auxiliary pulling force as the patient rises. The device allows the patient to actively exert force to complete the rising action within a certain range, or to rely entirely on mechanical support for passive rising, thereby combining standing assistance with mild functional exercise training.
However, such mechanical lifting-assisting devices still have significant drawbacks. Their control process depends heavily on preset mechanical procedures and lacks the ability to dynamically sense, and make intelligent decisions about, the patient's real-time posture, changes in muscle tension, movement intention, and scenario (e.g., bed or wheelchair). The rising mode is rigid and difficult to adjust flexibly to individual differences and actual needs, which limits the patient's active participation and training benefit, and the risk of secondary injury caused by improper posture or uncomfortable assistance cannot be effectively avoided.
Disclosure of Invention
The present application provides a method and system for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion, which are used to solve the problems in the prior art that the control mode of lifting-assisting devices is rigid and lacks the ability to adapt dynamically to the patient's real-time state and to multi-scenario changes.
In a first aspect, the application provides a method for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion, comprising the following steps:
controlling a plurality of sensors pre-arranged on the contact surface of the target person's lifting-assisting equipment to perform synchronous sensing, so as to obtain a fusion perception result composed of body pressure distribution data and joint motion data;
performing multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion containing a body position state and a scene type;
performing dynamic mode mapping according to the joint recognition conclusion, and outputting a target control mode matched with the current scene type and body position;
and synthesizing power commands using the target control mode to generate a cooperative control signal for driving the lifting-assisting equipment to execute the posture conversion.
Optionally, controlling the plurality of sensors pre-arranged on the contact surface of the target person's lifting-assisting equipment to perform synchronous sensing, so as to obtain a fusion perception result composed of body pressure distribution data and joint motion data, includes:
collecting raw pressure distribution data through a pressure sensing array uniformly distributed on the supporting surface of the lifting-assisting equipment;
collecting raw joint motion data through a motion sensing unit fixed at a limb joint of the target person;
converting the raw pressure distribution data into continuous track information reflecting the displacement of the center of pressure;
converting the raw joint motion data into change sequence information describing the joint flexion-extension angle;
and integrating the continuous track information and the change sequence information in a time-sequence-associated manner to obtain a fusion perception result representing the user's physical characteristics.
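The center-of-pressure conversion step above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the 3x3 array size, frame values, and function names are invented for illustration.

```python
# Illustrative sketch: deriving a center-of-pressure (CoP) trajectory from
# successive pressure-array frames, as in the step that converts raw pressure
# distribution data into continuous track information.

def center_of_pressure(frame):
    """Compute the pressure-weighted centroid (row, col) of one array frame."""
    total = sum(sum(row) for row in frame)
    if total == 0:
        return None  # no body contact detected in this frame
    r = sum(i * v for i, row in enumerate(frame) for v in row) / total
    c = sum(j * v for row in frame for j, v in enumerate(row)) / total
    return (r, c)

def cop_trajectory(frames):
    """Convert a sequence of pressure frames into a continuous CoP track."""
    return [cop for cop in (center_of_pressure(f) for f in frames) if cop]

# Example: pressure shifting from the lower rows (hips) toward the upper rows,
# as would happen while the user leans forward to rise
frames = [
    [[0, 0, 0], [1, 2, 1], [4, 6, 4]],
    [[0, 1, 0], [2, 4, 2], [2, 3, 2]],
]
track = cop_trajectory(frames)  # CoP row coordinate decreases over time
```

The track feeds the time-sequence integration of the final step, where each CoP sample is paired with the joint-angle sample at the same timestamp.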
Optionally, performing multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion including a posture state and a scene type includes:
extracting pressure distribution features and movement pattern features from the fusion perception result;
matching the pressure distribution features with a predefined pressure distribution pattern to obtain a preliminary body position judgment result;
matching the movement pattern features with a predefined movement pattern sequence to obtain an auxiliary body position judgment result;
determining a final posture state according to the combination of the preliminary and auxiliary body position judgment results;
identifying the current scene type of the target person based on the final posture state and the contact characteristics of the equipment support structure of the lifting-assisting equipment;
and combining the final posture state and the current scene type of the target person to form a joint recognition conclusion.
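The scene-identification step (classifying the support environment from contact characteristics of the device support structure) might look like the following minimal sketch. The feature names, the area threshold, and the labels are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch: identifying the scene type from simple contact
# features, combined with the already-determined posture state.

def identify_scene(contact_area_cm2, has_wheel_marker, posture):
    """Classify the support environment (bed / wheelchair / chair)."""
    if has_wheel_marker:
        # wheelchair scenes show the characteristic moving-wheel signature
        return "wheelchair"
    if contact_area_cm2 > 5000 and posture in ("lying", "lying-to-sitting"):
        # bed scenes typically show a large, relatively uniform contact area
        return "bed"
    return "chair"

scene = identify_scene(1800, True, "sitting")  # -> "wheelchair"
```

The returned label is then paired with the final posture state to form the joint recognition conclusion (e.g., "sitting state in wheelchair scene").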
Optionally, performing dynamic mode mapping according to the joint recognition conclusion and outputting a target control mode matched with the current scene type and body position includes:
parsing the current scene type and current body position state of the target person from the joint recognition conclusion;
matching the current scene type against the scene classifications in a pre-stored assistance mode library to determine a candidate control mode set;
screening the candidate control mode set by matching against the current body position state and its posture requirements, and generating an adaptive control mode from the screening result;
and determining a target control mode based on the adaptive control mode and the preference characteristics in the user's historical operation records.
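As a sketch of the mode-mapping steps above, the following looks up a candidate set by scene type and filters it by body position. The library contents and mode names are invented for illustration; the patent only specifies the lookup-then-filter structure.

```python
# Illustrative sketch of dynamic mode mapping: scene type selects a candidate
# control mode set from a pre-stored library, which is then screened by the
# current body position state.

MODE_LIBRARY = {
    "wheelchair": [
        {"name": "wc_sit_to_stand", "posture": "sitting"},
        {"name": "wc_reposition",   "posture": "sitting-adjust"},
    ],
    "bed": [
        {"name": "bed_lie_to_sit",   "posture": "lying"},
        {"name": "bed_sit_to_stand", "posture": "sitting"},
    ],
}

def map_mode(scene, posture):
    """Return the control mode matching both scene type and body position."""
    candidates = MODE_LIBRARY.get(scene, [])          # scene-level lookup
    matches = [m for m in candidates if m["posture"] == posture]
    return matches[0]["name"] if matches else None    # posture-level screen

mode = map_mode("wheelchair", "sitting")  # -> "wc_sit_to_stand"
```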
Optionally, determining the target control mode based on the adaptive control mode and the preference characteristics in the user's historical operation records includes:
extracting preference data comprising lifting speed, angle adjustment, and pause interval from the user's historical operation records;
converting the preference data into a control parameter adjustment vector;
combining the basic parameters of the adaptive control mode with the control parameter adjustment vector;
and generating, from the result of the combination, a target control mode meeting the user's personalized requirements.
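A minimal sketch of the personalization steps above, assuming the adjustment vector is the average of recorded relative adjustments and is applied additively to the base parameters; the field names and values are illustrative assumptions.

```python
# Hypothetical sketch: user preference data (speed, angle, pause) is turned
# into a control parameter adjustment vector and combined with the basic
# parameters of the adaptive control mode.

BASE_MODE = {"speed": 1.0, "angle_deg": 45.0, "pause_s": 2.0}

def preference_vector(history):
    """Average the relative adjustments in the user's operation history."""
    n = len(history)
    keys = ("speed", "angle_deg", "pause_s")
    return {k: sum(h[k] for h in history) / n for k in keys}

def personalise(base, adj):
    """Apply the adjustment vector additively to the base parameters."""
    return {k: base[k] + adj.get(k, 0.0) for k in base}

# History shows the user prefers slower, higher-angle, longer-pause assists
history = [{"speed": -0.2, "angle_deg": 5.0, "pause_s": 1.0},
           {"speed": -0.1, "angle_deg": 3.0, "pause_s": 0.5}]
target_mode = personalise(BASE_MODE, preference_vector(history))
```

An additive combination is only one plausible reading of "combining"; multiplicative scaling or clamping to safety limits would be equally consistent with the text.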
Optionally, synthesizing power commands using the target control mode to generate a cooperative control signal for driving the lifting-assisting equipment to perform the posture conversion includes:
parsing action sequence parameters of each power execution unit from the target control mode;
generating a basic driving instruction set according to the action sequence parameters;
converting the basic driving instruction set, through a motion coordination algorithm, into time-sequence-associated multi-execution-unit cooperative instructions;
adjusting the multi-execution-unit cooperative instructions in real time according to current state feedback from the equipment to obtain adjusted multi-execution-unit cooperative instructions;
and generating, from the adjusted multi-execution-unit cooperative instructions, the cooperative control signal comprising speed control, angle control, and force control.
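The command-synthesis steps above could be sketched as follows: each unit's action sequence parameters are expanded into time-stamped commands, then merged into one time-ordered cooperative stream. The actuator names, timings, and merge-by-timestamp rule are illustrative assumptions.

```python
# Illustrative sketch of power command synthesis for multiple execution units
# (e.g., a back-rest and a leg-rest actuator moving in coordination).

def expand(unit, steps):
    """Turn (start_s, speed, angle_deg) tuples into per-unit commands."""
    return [{"t": t, "unit": unit, "speed": s, "angle_deg": a}
            for t, s, a in steps]

def synthesise(*unit_cmds):
    """Merge per-unit command lists into one timeline-ordered stream."""
    merged = [c for cmds in unit_cmds for c in cmds]
    return sorted(merged, key=lambda c: c["t"])

# Back-rest rises in two stages; leg-rest retracts partway through stage one
backrest = expand("backrest", [(0.0, 0.5, 30), (3.0, 0.3, 60)])
legrest = expand("legrest", [(1.5, 0.4, -20)])
plan = synthesise(backrest, legrest)  # interleaved by start time
```

Ordering by timestamp is the simplest form of time-sequence association; a real coordination algorithm would also enforce inter-unit constraints (e.g., never retracting the leg-rest before the back-rest reaches a safe angle).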
Optionally, adjusting the multi-execution-unit cooperative instructions in real time according to current state feedback from the equipment to obtain adjusted multi-execution-unit cooperative instructions includes:
acquiring, in real time through a state monitoring module on the lifting-assisting equipment, the operating parameters and load parameters of each power execution mechanism in the equipment;
comparing the operating parameters and load parameters against the expected parameters in the multi-execution-unit cooperative instructions to obtain a difference comparison result;
determining a parameter adjustment amount for each power execution mechanism according to the difference comparison result;
and applying the parameter adjustment amounts to the multi-execution-unit cooperative instructions to generate the adjusted multi-execution-unit cooperative instructions.
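A minimal sketch of this feedback loop, assuming a simple proportional correction; the gain value and parameter names are invented, not taken from the patent.

```python
# Hypothetical sketch of state-feedback adjustment: actual actuator
# parameters are compared against the expected values in the cooperative
# instruction, and the difference drives a proportional correction.

GAIN = 0.5  # proportional correction factor (illustrative assumption)

def adjust(expected, actual, gain=GAIN):
    """Shift each commanded parameter toward closing the observed error."""
    return {k: expected[k] + gain * (expected[k] - actual.get(k, expected[k]))
            for k in expected}

expected = {"speed": 0.5, "force_n": 120.0}
actual = {"speed": 0.4, "force_n": 130.0}  # lagging speed, excess force
corrected = adjust(expected, actual)
# speed is nudged up to compensate lag; force is nudged down to avoid overload
```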
In a second aspect, the present application provides a standing-assistance control system for elderly stroke patients based on multi-scenario posture conversion, comprising:
the control module, configured to control a plurality of sensors pre-arranged on the contact surface of the target person's lifting-assisting equipment to perform synchronous sensing, so as to obtain a fusion perception result composed of body pressure distribution data and joint motion data;
the coupling analysis module, configured to perform multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion containing a body position state and a scene type;
the mode mapping module, configured to perform dynamic mode mapping according to the joint recognition conclusion and output a target control mode matched with the current scene type and body position;
and the command synthesis module, configured to synthesize power commands using the target control mode and generate a cooperative control signal for driving the lifting-assisting equipment to execute the posture conversion.
In a third aspect, the application provides a computing device comprising a processing component and a storage component, wherein the storage component stores one or more computer instructions that are invoked and executed by the processing component to implement the method for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion according to the first aspect.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a computer, implements the method for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion according to the first aspect.
According to the application, fused data is obtained through the synchronous sensing of a plurality of sensors; the body position state and the scene type are accurately identified by multi-feature coupling analysis; dynamic mode mapping generates a control mode highly matched to the current scene and body position; and finally, cooperative control is realized through power command synthesis. The scheme can adapt to various assistance scenarios such as beds, wheelchairs, and chairs, provides individualized body position transition assistance according to the real-time physical state of elderly stroke patients, and effectively improves the safety and adaptability of the assistance process.
Further, by monitoring the running state of each power execution mechanism in real time, comparing the actual parameters with the expected commands, and dynamically adjusting the control parameters, the execution units are kept in coordinated operation throughout the body position transition performed by the lifting-assisting equipment. This real-time adjustment mechanism based on state feedback can effectively cope with changes in equipment load and fluctuations in patient position, prevent overload or insufficient assistance, and significantly improve the stability and safety of the assistance process.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the application or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show some embodiments of the application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 shows a flowchart of a method for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion;
FIG. 2 shows a schematic structural diagram of a standing-assistance control system for elderly stroke patients based on multi-scenario posture conversion;
FIG. 3 illustrates a schematic diagram of a computing device provided by the present application.
Detailed Description
In order to make the technical solution of the present application better understood by those skilled in the art, the technical solution of the present application will be clearly and completely described with reference to the accompanying drawings.
Some of the flows described in the specification, the claims, and the foregoing drawings include a plurality of operations appearing in a particular order. It should be understood that these operations may be performed out of the stated order or in parallel; sequence numbers such as 101 and 102 merely distinguish the operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, which may be performed sequentially or in parallel. The terms "first" and "second" herein distinguish different messages, devices, modules, and the like; they do not denote a sequence, nor do they require that the "first" and "second" items be of different types.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, which show some, but not all, of the embodiments of the application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
Fig. 1 is a flowchart of a method for controlling standing assistance for elderly stroke patients based on multi-scenario posture conversion. As shown in Fig. 1, the method comprises:
Step 101, controlling a plurality of sensors pre-arranged on the contact surface of the target person's lifting-assisting equipment to perform synchronous sensing, so as to obtain a fusion perception result composed of body pressure distribution data and joint motion data.
Optionally, step 101 may specifically include the following steps:
Step 1011, acquiring pressure distribution original data through pressure sensing arrays uniformly distributed on the supporting surface of the lifting assisting equipment;
Step 1012, acquiring joint motion raw data through a motion sensing unit fixed at a limb joint of a target person;
step 1013, converting the pressure distribution original data into continuous track information reflecting the displacement of the pressure center;
step 1014, converting the raw joint motion data into change sequence information describing the joint flexion-extension angle;
step 1015, integrating the continuous track information with the change sequence information in a time sequence association manner to obtain a fusion perception result representing the physical characteristics of the user.
In the above scheme, the contact surface of the lifting-assisting equipment refers to the part where the equipment is in direct contact with the user's body to provide support. Pressure sensing arrays are uniformly distributed over this contact surface and collect raw pressure distribution data, which reflect the pressure exerted by each part of the user's body on the contact surface and its distribution. The body pressure distribution data are information describing the overall distribution characteristics of the user's body pressure on the lifting-assisting equipment, obtained by analyzing the raw pressure distribution data. The joint motion data describe the movement state of the user's limb joints and are obtained by acquiring and converting raw joint motion data with a motion sensing unit fixed at the limb joints. The fusion perception result is comprehensive data representing the user's physical characteristics, formed by integrating, in a time-sequence-associated manner, the continuous track information reflecting the displacement of the center of pressure and the change sequence information describing the joint flexion-extension angles.
In this embodiment, step 1011 collects raw pressure distribution data through pressure sensing arrays uniformly distributed on the supporting surface of the lifting-assisting equipment; these arrays continuously measure the pressure values applied by the user's body to the surface of the equipment. Step 1012 then collects raw joint motion data via motion sensing units fixed at the joints of the target person's limbs; these units track the angle and direction of joint movement. Next, step 1013 converts the raw pressure distribution data into continuous track information reflecting the displacement of the center of pressure, describing its dynamic change by analyzing the moving path of the pressure points. Step 1014 then converts the raw joint motion data into sequence information describing changes in the joint flexion-extension angles, forming an angle-change sequence by parsing successive joint motion data points. Finally, step 1015 integrates the continuous track information and the change sequence information in a time-sequence-associated manner, aligning the movement of the center of pressure and the changes in joint angle on a common time axis, to form a fusion perception result that comprehensively represents the user's physical characteristics.
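The time-sequence association of step 1015 might be sketched as follows, assuming sample-and-hold alignment of the two streams on a shared timeline; all sample values, rates, and field names are invented for illustration.

```python
# Illustrative sketch of step 1015: aligning the center-of-pressure track and
# the joint-angle change sequence on a shared time axis to form the fused
# perception result.

def hold_value(samples, t):
    """Return the latest sample at or before time t (sample-and-hold)."""
    value = samples[0][1]
    for ts, v in samples:
        if ts <= t:
            value = v
    return value

def fuse(cop_track, knee_angles, timeline):
    """Pair CoP and knee-angle readings at each shared timestamp."""
    return [{"t": t,
             "cop": hold_value(cop_track, t),
             "knee_deg": hold_value(knee_angles, t)} for t in timeline]

# CoP sampled at 1 Hz, knee angle at 2 Hz; fuse on the knee-angle timeline
cop_track = [(0.0, (1.8, 1.0)), (1.0, (1.4, 1.0))]
knee_angles = [(0.0, 90.0), (0.5, 75.0), (1.0, 60.0)]
fused = fuse(cop_track, knee_angles, [0.0, 0.5, 1.0])
```

Sample-and-hold is the simplest alignment choice; linear interpolation between samples would serve the same time-sequence-association role.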
For example, in the application scenario of the lifting-assisting equipment in rehabilitation center A, a pressure sensing array is laid on the equipment's bearing surface; when an elderly stroke patient uses the equipment, the array continuously collects raw pressure distribution data from the back and buttocks. Meanwhile, motion sensing units arranged at the patient's knee and hip joints synchronously collect raw joint motion data. After processing, the pressure distribution data are converted into continuous track information reflecting the movement path of the center of pressure from sitting to standing, and the joint motion data are converted into change sequence information recording the variation of the knee and hip flexion-extension angles. After the two types of information are integrated by a time-sequence association algorithm, the system generates a fusion perception result that accurately reflects the patient's posture changes during standing, providing data support for subsequent assistance control.
According to the scheme, the pressure and joint motion data are cooperatively collected and converted into the track and sequence information related to the time sequence, so that the comprehensive perception of the physical characteristics of the user is realized, a high-precision and synchronous multi-mode data base is provided for assisting control, and the accuracy of the subsequent links in judging the state of the user and the effectiveness of a control strategy are ensured.
Step 102, performing multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion containing the posture state and the scene type.
Optionally, step 102 may specifically include the following steps:
Step 1021, extracting pressure distribution characteristics and motion pattern characteristics from the fusion sensing result;
step 1022, matching the pressure distribution characteristic with a predefined pressure distribution mode to obtain a preliminary body position judgment result;
step 1023, matching the motion pattern features with a predefined motion pattern sequence to obtain an auxiliary body position judgment result;
Step 1024, determining a final posture state according to the combination relationship of the preliminary posture determination result and the auxiliary posture determination result;
Step 1025, identifying the current scene type of the target person based on the contact characteristics of the final posture state and the equipment supporting structure of the lifting assisting equipment;
Step 1026, combining the final posture state and the current scene type of the target person to form a joint recognition conclusion.
In the above scheme, multi-feature coupling analysis refers to the joint processing of the pressure distribution features and the movement pattern features in the fusion perception result in order to identify the user's body position state and scene type. The body position state describes the user's body posture category (e.g., lying, sitting, or standing), and the scene type refers to the supporting environment (e.g., bed, wheelchair, or chair) in which the assistance device is currently located. The joint recognition conclusion is the judgment formed by integrating the final posture state and the current scene type. The pressure distribution features are data extracted from the fusion perception result that reflect the body's pressure distribution pattern, and the movement pattern features are data describing the regularities of joint motion. The preliminary body position judgment result is an initial posture classification obtained by matching the pressure distribution features with a predefined pressure distribution pattern, where the predefined pressure distribution pattern is a pre-stored set of pressure distribution patterns; the predefined movement pattern sequence is a pre-stored set of typical joint movement patterns, and the auxiliary body position judgment result is an auxiliary classification obtained by matching the movement pattern features with that sequence. The final posture state is the accurate posture classification determined by combining the preliminary and auxiliary judgment results. The contact characteristics of the device support structure describe the manner in which the lifting-assisting equipment contacts a support surface (e.g., mattress or wheelchair cushion) and are used to distinguish scene types.
In this scheme, step 1021 first separates the pressure distribution features and the movement pattern features from the fusion perception result: the pressure distribution features reflect the pressure intensity over each part of the body, and the movement pattern features describe the temporal regularities of joint angle changes. Step 1022 then compares the pressure distribution features with a pre-stored library of pressure distribution patterns and obtains a preliminary body position judgment (e.g., "sitting") by computing similarity. Step 1023 matches the movement pattern features to a predefined movement pattern sequence using a sequence alignment algorithm to obtain an auxiliary body position judgment (e.g., "standing transition state"). Step 1024 then fuses the preliminary and auxiliary judgments by weighting, according to predefined decision rules (e.g., majority vote or confidence weighting), to determine the final posture state (e.g., "in transition from sitting to standing"). Step 1025 identifies the current scene type (e.g., "wheelchair scene") by analyzing, on the basis of the final posture state, the contact characteristics of the device support structure (e.g., center-of-pressure position and support surface shape) against predefined scene templates (a bed scene, for instance, typically shows a large area of uniform pressure distribution). Finally, step 1026 combines the final posture state and the current scene type into the joint recognition conclusion (e.g., "sitting state in a wheelchair scene").
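The confidence-weighted fusion in step 1024 could be sketched as follows; the class labels, confidence values, and summed-score rule are illustrative assumptions standing in for whatever decision rule a concrete implementation would use.

```python
# Hypothetical sketch: combining the preliminary (pressure-based) and
# auxiliary (motion-based) body position judgments by confidence weighting.

def fuse_judgments(preliminary, auxiliary):
    """Pick the posture whose summed confidence across both judgments wins."""
    scores = {}
    for label, conf in (preliminary, auxiliary):
        scores[label] = scores.get(label, 0.0) + conf
    return max(scores, key=scores.get)

# Pressure still reads "sitting" (0.6), but motion already detects the
# sit-to-stand transition with higher confidence (0.8)
final = fuse_judgments(("sitting", 0.6), ("sit-to-stand", 0.8))
```

This dual-evidence rule is what lets the motion channel override a stale pressure reading during a transition, which is the misjudgment-reduction benefit the text attributes to the double verification mechanism.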
In a specific embodiment of the above scheme, in the lift-assist application at rehabilitation center A, the system extracts pressure distribution features (showing pressure concentrated on the buttocks and thighs) and motion pattern features (showing a slow extension pattern of the knee and hip joints) from the fusion perception result. The pressure distribution features closely match the pre-stored "seated" pattern, generating the preliminary body position judgment "seated". The motion pattern features match the predefined "sit-to-stand transition" sequence, generating the auxiliary body position judgment "standing transition". Combining the two judgments, the system determines the final body position state as "in sit-to-stand transition". By analyzing the contact features of the device support structure (finding a rectangular pressure distribution with wheel markers), it identifies the current scene type as "wheelchair scene". Finally, it generates the joint recognition conclusion "sit-to-stand transition state in a wheelchair scene", providing precise input for subsequent lift control.
This scheme achieves high-precision recognition of body position state and scene type through multi-feature coupling analysis: the dual verification of pressure and motion features reduces the likelihood of misjudgment, contact-feature analysis of the device support structure strengthens scene discrimination, and the resulting joint recognition conclusion provides a comprehensive and accurate perceptual basis for subsequent lift-assist control.
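The two-channel matching and fusion of steps 1022-1024 can be sketched as follows. This is a minimal illustration only: the template values, feature vectors, posture labels, and the confidence-weighted decision rule are all hypothetical stand-ins, not the patented implementation.

```python
import numpy as np

# Hypothetical pre-stored templates: normalized pressure maps per posture
# (head / back / hip / leg zones of the support surface).
PRESSURE_TEMPLATES = {
    "lying":    np.array([0.25, 0.25, 0.25, 0.25]),  # even load across zones
    "seated":   np.array([0.05, 0.15, 0.55, 0.25]),  # load on hips/thighs
    "standing": np.array([0.00, 0.05, 0.15, 0.80]),
}

# Hypothetical motion-pattern sequences: joint-angle change signatures.
MOTION_SEQUENCES = {
    "static":       np.array([0.0, 0.0, 0.0]),
    "sit_to_stand": np.array([5.0, 20.0, 45.0]),  # accelerating extension
}

def cosine(a, b):
    """Cosine similarity; the epsilon guards against zero-norm templates."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_best(features, templates):
    """Return (label, similarity) of the closest template -- steps 1022/1023."""
    scored = {name: cosine(features, tpl) for name, tpl in templates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

def fuse(pressure_feat, motion_feat, w_motion=0.4):
    """Step 1024: combine preliminary and auxiliary judgments.

    If the motion channel confidently indicates an active transition, report
    a transition state; otherwise keep the pressure-based posture.
    """
    p_label, _p_conf = match_best(pressure_feat, PRESSURE_TEMPLATES)
    m_label, m_conf = match_best(motion_feat, MOTION_SEQUENCES)
    if m_label == "sit_to_stand" and w_motion * m_conf > 0.2:
        return f"{p_label}_to_stand_transition"
    return p_label

# Feature vectors mirroring the embodiment: hip/thigh pressure, slow extension.
feat_p = np.array([0.04, 0.16, 0.56, 0.24])
feat_m = np.array([4.0, 18.0, 50.0])
print(fuse(feat_p, feat_m))  # seated_to_stand_transition
```

Scene identification (step 1025) would follow the same template-matching pattern over contact features such as center-of-pressure position and support-surface shape.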
In step 103, dynamic mode mapping is performed according to the joint recognition conclusion, and a target control mode matched to the current scene type and body position is output.
Optionally, step 103 may specifically include the following steps:
Step 1031, parsing the current scene type and the current body position state of the target person from the joint recognition conclusion;
Step 1032, matching the current scene type with the scene classifications in a pre-stored lift-assist mode library to determine a candidate control mode set;
Step 1033, matching and screening the current body position state against the body position requirements in the candidate control mode set, and generating an adaptive control mode according to the screening result;
Step 1034, determining a target control mode based on the adaptive control mode and preference features of the user's historical operation records.
Step 1034 may specifically include the following steps:
Preference data covering lift speed, angle adjustment, and pause interval are extracted from the user's historical operation records and converted into a control parameter adjustment vector; the basic parameters of the adaptive control mode are then combined with this adjustment vector, and a target control mode meeting the user's personalized needs is generated from the combination result.
In the above scheme, dynamic mode mapping refers to the process of matching and generating, from a pre-stored lift-assist mode library, a control strategy suited to the current situation according to the scene type and body position state in the joint recognition conclusion. The target control mode is a specific set of control instructions for driving the lift-assist device to perform body position conversion. The pre-stored lift-assist mode library contains standardized control mode templates for different scenes (e.g., bed, wheelchair, chair) and body positions (e.g., lying, sitting, standing). The candidate control mode set is the group of possibly applicable control modes preliminarily screened out by scene classification matching. The adaptive control mode is the mode from the candidate set that best fits the current body position after matching and screening against the body position requirements. The preference features of the user's historical operation records reflect the user's personalized habits regarding parameters such as lift speed, angle adjustment, and pause interval: lift speed is the lifting speed range the user prefers, angle adjustment is the user's preferred tendency in adjusting the back-support, leg-lift, and similar angles, and pause interval is the pause duration the user prefers during body position conversion. The control parameter adjustment vector is the set of parameter adjustment instructions formed by quantizing these preference data, and the basic parameters are the standard control parameters preset in the adaptive control mode.
In this scenario, the current scene type (e.g., wheelchair scene) and the current posture state of the target person (e.g., in the seat-to-upright transition) are first separated from the joint recognition conclusion by 1031. And secondly, matching the current scene type with scene classification in a pre-stored lifting-assisting mode library (such as matching a wheelchair scene with wheelchair scene modes in the library) through 1032, and screening out all possible applicable modes to form a candidate control mode set. And then, matching and screening the current posture state and the posture requirement of each mode in the candidate control mode set (for example, matching the posture conversion state in the process of converting the sitting position to the standing position with the posture conversion requirement in the candidate mode) through 1033, and removing the unmatched modes to generate the adaptive control mode. Finally, 1034, preference data of the lifting speed, the angle adjustment and the pause interval are extracted from the historical operation record of the user, the preference data are converted into control parameter adjustment vectors (such as the preference data are quantized into speed increasing coefficients, angle adjustment coefficients and the like), basic parameters of the adaptive control mode and the control parameter adjustment vectors are combined to operate (such as the basic speed parameters are multiplied by the speed adjustment coefficients), and a target control mode meeting the personalized requirements of the user is generated according to operation results.
In a specific embodiment of this scheme, in the lift-assist application at rehabilitation center A, the system parses from the joint recognition conclusion that the current scene type is a wheelchair scene and the current body position state is in sit-to-stand transition. The system matches the wheelchair scene against the scene classifications in the pre-stored lift-assist mode library and screens out all control modes applicable to wheelchair scenes to form a candidate control mode set. It then matches the sit-to-stand transition state against the body position requirements of each mode in the candidate set, screening out an adaptive control mode designed specifically for sit-to-stand conversion in a wheelchair scene. The system then extracts from user B's historical operation records preference data indicating a preference for faster lift speed, smaller back-angle adjustment, and brief pauses, and converts these data into a control parameter adjustment vector (e.g., speed coefficient 1.2, angle coefficient 0.9, pause time 0.5 seconds). Combining this vector with the basic parameters of the adaptive control mode finally generates a target control mode meeting user B's personalized needs.
This scheme achieves a precise conversion from standardized to personalized control through dynamic mode mapping: the dual matching of scene and body position ensures the basic applicability of the control mode, and introducing the user's historical preference data further refines the control parameters, so that the generated target control mode both satisfies the current environmental requirements and meets the user's personalized needs, significantly improving the suitability and user experience of the lift-assist process.
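The parameter combination of step 1034 can be illustrated as follows. The field names and coefficient values are hypothetical, and element-wise multiplication (with the pause time replaced outright) is just one plausible form of the combination operation the patent describes.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ControlMode:
    lift_speed: float  # base lifting speed, e.g. deg/s of backrest travel
    back_angle: float  # target backrest angle, degrees
    pause_time: float  # pause between conversion phases, seconds

def apply_preferences(base: ControlMode, adjust: dict) -> ControlMode:
    """Combine the adaptive mode's basic parameters with a user-preference
    adjustment vector: multiplicative coefficients scale speed and angle,
    while the preferred pause time replaces the default.
    """
    return replace(
        base,
        lift_speed=base.lift_speed * adjust.get("speed_coeff", 1.0),
        back_angle=base.back_angle * adjust.get("angle_coeff", 1.0),
        pause_time=adjust.get("pause_time", base.pause_time),
    )

# Adaptive mode for "sit-to-stand in wheelchair scene" (illustrative values).
adaptive = ControlMode(lift_speed=10.0, back_angle=30.0, pause_time=1.0)
# Adjustment vector quantized from user B's history: faster, smaller angle,
# brief pause -- the coefficients from the embodiment.
prefs = {"speed_coeff": 1.2, "angle_coeff": 0.9, "pause_time": 0.5}
target = apply_preferences(adaptive, prefs)
print(target)  # ControlMode(lift_speed=12.0, back_angle=27.0, pause_time=0.5)
```

An empty adjustment vector leaves the basic parameters untouched, so the standardized mode remains a safe fallback for users with no operation history.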
In step 104, power instruction synthesis is performed using the target control mode to generate a cooperative control signal that drives the lift-assist device to perform body position conversion.
Optionally, the step 104 may specifically include the following steps:
Step 1041, analyzing the motion sequence parameters of each power execution unit from the target control mode;
step 1042, generating a basic driving instruction set according to the action sequence parameters;
Step 1043, converting the basic driving instruction set into a multi-execution unit cooperative instruction with time sequence association through a motion coordination algorithm;
Step 1044, adjusting the multi-execution unit cooperative instruction in real time according to the current state feedback of the device to obtain an adjusted multi-execution unit cooperative instruction;
step 1044 may specifically include the following steps:
Operating parameters and load parameters of each power actuator in the lift-assist device are acquired in real time through a state monitoring module on the device; these are compared with the expected parameters in the multi-execution-unit cooperative instruction to obtain a difference comparison result; the parameter adjustment amount for each power actuator is determined from this result; and the adjustment amounts are applied to the cooperative instruction to generate the adjusted multi-execution-unit cooperative instruction.
Step 1045, generating the cooperative control signal comprising speed control, angle control, and force control according to the adjusted multi-execution-unit cooperative instruction.
In the above scheme, power instruction synthesis refers to the process of generating specific drive instructions from the target control mode, and the cooperative control signal is the finally output set of control commands that coordinates multiple execution units to complete body position conversion. Each power execution unit is an independent component of the lift-assist device (e.g., a motor or hydraulic cylinder) responsible for generating lift-assist power, and the action sequence parameters describe the action order, amplitude, and timing requirements of each execution unit. The basic drive instruction set is the preliminary set of control commands converted from the action sequence parameters. The motion coordination algorithm is the processing logic that converts basic instructions into instructions coordinated in time across multiple execution units, and the multi-execution-unit cooperative instruction is the resulting set of control instructions with a precise timing relationship. The adjusted multi-execution-unit cooperative instruction is the final instruction after the cooperative instruction has been corrected according to real-time state feedback. The state monitoring module is a sensing unit that collects the device's operating state; the operating parameters and load parameters of each power actuator respectively reflect the actuator's actual running state and the force/torque it bears. The expected parameters are the ideal operating values preset in the cooperative instruction, the difference comparison result is the deviation analysis of actual parameters against expected values, and the parameter adjustment amount is the correction quantity calculated from that difference.
In this scheme, step 1041 first extracts the action sequence parameters of each power execution unit from the target control mode; these parameters specify each unit's action order, range of travel, and timing requirements. Step 1042 then converts the action sequence parameters into a concrete basic drive instruction set, turning the abstract action parameters into electrical signals or digital commands that can directly drive the execution units. Step 1043 then processes the basic drive instruction set with a motion coordination algorithm, which builds a spatio-temporal relation model among the execution units and integrates the independent instructions into multi-execution-unit cooperative instructions with precise timing coordination, ensuring the units' actions are synchronized and conflict-free. Step 1044 then uses the state monitoring module to collect each power actuator's operating parameters (e.g., speed and position) and load parameters (e.g., pressure and torque) in real time, compares the actual parameters with the expected parameters in the cooperative instructions to obtain a difference comparison result, calculates the parameter adjustment amount needed for each actuator, and applies the adjustments to generate the adjusted multi-execution-unit cooperative instructions. Finally, step 1045 generates, from the adjusted cooperative instructions, a cooperative control signal comprising speed control (adjusting motion speed), angle control (adjusting joint angles), and force control (adjusting output force/torque), which can directly drive the lift-assist device to perform body position conversion.
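The timing coordination of step 1043 can be sketched by interpolating every execution unit over a shared timeline so that all units hit their targets on the same tick. The unit names, angle values, and linear interpolation are illustrative assumptions; the patent does not specify the coordination algorithm's internals.

```python
def coordinate(action_params: dict, steps: int = 4) -> list:
    """Build a timing-aligned cooperative instruction schedule.

    Each unit's travel (start -> target) is divided over the same number of
    ticks, so e.g. the legs reach 15 degrees exactly when the back reaches
    30 degrees, as in the embodiment's synchronized back/leg motion.
    """
    schedule = []
    for i in range(1, steps + 1):
        frac = i / steps  # shared progress fraction keeps units conflict-free
        tick = {unit: round(start + frac * (target - start), 2)
                for unit, (start, target) in action_params.items()}
        schedule.append(tick)
    return schedule

# Back support rises to 30 degrees while the legs rise to 15 degrees.
params = {"back_support_deg": (0.0, 30.0), "leg_lift_deg": (0.0, 15.0)}
for tick in coordinate(params):
    print(tick)
# Final tick: {'back_support_deg': 30.0, 'leg_lift_deg': 15.0}
```

A real coordination algorithm would additionally enforce per-unit velocity and force limits; the shared-fraction timeline is the minimal core idea.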
In the lift-assist application at rehabilitation center A, the system parses from the target control mode the action sequence parameters of the back-support unit and the leg-lift unit (including extension angle, motion speed, and force requirements). These parameters are converted into a basic drive instruction set controlling motor speed and travel. Processing by the motion coordination algorithm produces multi-execution-unit cooperative instructions (e.g., raising the legs 15 degrees by the time the back reaches 30 degrees) that keep the back and leg movements synchronized. The state monitoring module detects that the back-support unit's actual pressure is below the expected value while the leg unit's motion speed is above it, so the system calculates the parameter adjustments needed to increase the back output force and reduce the leg speed. After the adjustments are applied, the adjusted multi-execution-unit cooperative instructions are generated, and a cooperative control signal with appropriate speed, angle, and force control is output, allowing the device to complete the sit-to-stand body position conversion smoothly.
This scheme converts the abstract control mode into precisely executable drive signals through power instruction synthesis and a real-time adjustment mechanism: the motion coordination algorithm ensures the synchronization and coordination of the multiple execution units, state-feedback adjustment absorbs uncertainty and individual differences in actual operation, and the resulting cooperative control signal spans the speed, angle, and force control dimensions, significantly improving the stability, safety, and adaptability of the body position conversion process.
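The feedback correction of step 1044 amounts to a simple closed loop: compare each actuator's measured parameters with the expected values in the cooperative instruction and apply a correction proportional to the difference. The proportional gain, parameter names, and values below are hypothetical; the patent leaves the correction law unspecified.

```python
def adjust_instruction(expected: dict, measured: dict, gain: float = 0.5) -> dict:
    """Return a corrected per-actuator instruction (assumed P-control).

    For each parameter, the adjustment amount is proportional to the
    difference between the expected and measured values, so the command
    pushes harder where the actuator lags and eases off where it overshoots.
    """
    adjusted = {}
    for unit, params in expected.items():
        adjusted[unit] = {}
        for name, target in params.items():
            actual = measured.get(unit, {}).get(name, target)
            delta = gain * (target - actual)       # parameter adjustment amount
            adjusted[unit][name] = target + delta  # corrected command value
    return adjusted

# Expected vs measured state for the embodiment's two execution units.
expected = {
    "back_support": {"force_n": 120.0, "angle_deg": 30.0},
    "leg_lift":     {"speed_dps": 8.0},
}
measured = {
    "back_support": {"force_n": 100.0, "angle_deg": 30.0},  # force below target
    "leg_lift":     {"speed_dps": 10.0},                    # speed above target
}
result = adjust_instruction(expected, measured)
print(result["back_support"]["force_n"])  # 130.0 -> increase back output force
print(result["leg_lift"]["speed_dps"])    # 7.0  -> reduce leg speed
```

Missing measurements default to the expected value, yielding zero correction, so a momentarily silent sensor does not destabilize the instruction.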
Fig. 2 is a schematic structural diagram of a stroke patient lift-assist control system based on multi-scenario body position conversion. As shown in fig. 2, the system includes:
A control module 21 for controlling a plurality of sensors pre-arranged on the contact surface of the target person's lift-assist device to perform synchronous sensing, to obtain a fusion perception result composed of body pressure distribution data and joint motion data;
The coupling analysis module 22 is configured to perform multi-feature coupling analysis based on the fusion sensing result, and generate a joint recognition conclusion including a posture state and a scene type;
The mode mapping module 23 is configured to perform dynamic mode mapping according to the joint identification conclusion, and output a target control mode that matches with the current scene type and the body position;
the command synthesis module 24 is configured to perform power command synthesis using the target control mode, and generate a cooperative control signal for driving the lifting assisting device to perform body position conversion.
The stroke patient lift-assist control system based on multi-scenario body position conversion shown in fig. 2 may execute the lift-assist control method of the embodiment shown in fig. 1; its implementation principle and technical effects are not repeated here. The specific manner in which the individual modules and units perform their operations has been described in detail in the method embodiments and will not be elaborated again.
In one possible design, the stroke patient lift-assist control system of the embodiment of fig. 2 may be implemented as a computing device which, as shown in fig. 3, may include a storage component 31 and a processing component 32;
the storage component 31 stores one or more computer instructions for execution by the processing component 32.
The processing component 32 is configured to execute the stroke patient lift-assist control method based on multi-scenario body position conversion of the embodiment shown in fig. 1.
The processing component 32 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 31 is configured to store various types of data to support operations at the terminal. The memory component may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
Of course, the computing device may necessarily include other components as well, such as input/output interfaces, display components, communication components, and the like.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The computing device may be a physical device or an elastic computing host provided by the cloud computing platform, and at this time, the computing device may be a cloud server, and the processing component, the storage component, and the like may be a base server resource rented or purchased from the cloud computing platform.
The embodiment of the application also provides a computer storage medium storing a computer program which, when executed by a computer, implements the stroke patient lift-assist control method based on multi-scenario body position conversion described above.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same, and although the present application has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present application.

Claims (7)

1. A stroke patient lift-assist control method based on multi-scenario body position conversion, characterized by comprising:

controlling a plurality of sensors pre-arranged on the contact surface of a target person's lift-assist device to perform synchronous sensing, to obtain a fusion perception result composed of body pressure distribution data and joint motion data, including: collecting raw pressure distribution data through a pressure sensing array uniformly distributed over the support surface of the lift-assist device; collecting raw joint motion data through motion sensing units fixed at the limb joints of the target person; converting the raw pressure distribution data into continuous trajectory information reflecting the displacement of the pressure center; converting the raw joint motion data into change sequence information describing joint flexion and extension angles; and integrating the continuous trajectory information with the change sequence information through temporal correlation to obtain a fusion perception result characterizing the user's posture features;

performing multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion comprising a body position state and a scene type, including: extracting pressure distribution features and motion pattern features from the fusion perception result; matching the pressure distribution features against predefined pressure distribution patterns to obtain a preliminary body position judgment result; matching the motion pattern features against predefined motion pattern sequences to obtain an auxiliary body position judgment result; determining the final body position state according to the combination of the preliminary body position judgment result and the auxiliary body position judgment result; identifying the current scene type of the target person based on the contact features between the final body position state and the support structure of the lift-assist device; and combining the final body position state with the current scene type to form the joint recognition conclusion;

performing dynamic mode mapping according to the joint recognition conclusion and outputting a target control mode matched to the current scene type and body position, including: parsing the current scene type and the current body position state of the target person from the joint recognition conclusion; matching the current scene type with the scene classifications in a pre-stored lift-assist mode library to determine a candidate control mode set; matching and screening the current body position state against the body position requirements in the candidate control mode set, and generating an adaptive control mode according to the screening result; and determining a target control mode based on the adaptive control mode and preference features of the user's historical operation records; and

performing power instruction synthesis using the target control mode to generate a cooperative control signal driving the lift-assist device to perform body position conversion.

2. The method according to claim 1, characterized in that determining a target control mode based on the adaptive control mode and preference features of the user's historical operation records comprises:

extracting preference data comprising lift speed, angle adjustment, and pause interval from the user's historical operation records;

converting the preference data into a control parameter adjustment vector;

combining the basic parameters of the adaptive control mode with the control parameter adjustment vector; and

generating, from the combination result, a target control mode meeting the user's personalized needs.

3. The method according to claim 1, characterized in that performing power instruction synthesis using the target control mode to generate a cooperative control signal driving the lift-assist device to perform body position conversion comprises:

parsing the action sequence parameters of each power execution unit from the target control mode;

generating a basic drive instruction set according to the action sequence parameters;

converting the basic drive instruction set into multi-execution-unit cooperative instructions with timing correlation through a motion coordination algorithm;

adjusting the multi-execution-unit cooperative instructions in real time according to the current state feedback of the device to obtain adjusted multi-execution-unit cooperative instructions; and

generating, according to the adjusted multi-execution-unit cooperative instructions, the cooperative control signal comprising speed control, angle control, and force control.

4. The method according to claim 3, characterized in that adjusting the multi-execution-unit cooperative instructions in real time according to the current state feedback of the device to obtain adjusted multi-execution-unit cooperative instructions comprises:

acquiring in real time, through a state monitoring module on the lift-assist device, the operating parameters and load parameters of each power actuator in the lift-assist device;

comparing the operating parameters and load parameters with the expected parameters in the multi-execution-unit cooperative instructions to obtain a difference comparison result;

determining the parameter adjustment amount of each power actuator according to the difference comparison result; and

applying the parameter adjustment amounts to the multi-execution-unit cooperative instructions to generate the adjusted multi-execution-unit cooperative instructions.

5. A stroke patient lift-assist control system based on multi-scenario body position conversion, applying the method of any one of claims 1-4, characterized by comprising:

a control module for controlling a plurality of sensors pre-arranged on the contact surface of a target person's lift-assist device to perform synchronous sensing, to obtain a fusion perception result composed of body pressure distribution data and joint motion data;

a coupling analysis module for performing multi-feature coupling analysis based on the fusion perception result to generate a joint recognition conclusion comprising a body position state and a scene type;

a mode mapping module for performing dynamic mode mapping according to the joint recognition conclusion and outputting a target control mode matched to the current scene type and body position; and

an instruction synthesis module for performing power instruction synthesis using the target control mode to generate a cooperative control signal driving the lift-assist device to perform body position conversion.

6. A computing device, characterized by comprising a processing component and a storage component; the storage component stores one or more computer instructions; and the one or more computer instructions are invoked and executed by the processing component to implement the stroke patient lift-assist control method based on multi-scenario body position conversion of any one of claims 1-4.

7. A computer storage medium, characterized in that it stores a computer program which, when executed by a computer, implements the stroke patient lift-assist control method based on multi-scenario body position conversion of any one of claims 1-4.
CN202511311845.7A 2025-09-15 2025-09-15 A method and system for assisting stroke patients to get up based on multi-scenario body position changes Active CN120814966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511311845.7A CN120814966B (en) 2025-09-15 2025-09-15 A method and system for assisting stroke patients to get up based on multi-scenario body position changes

Publications (2)

Publication Number Publication Date
CN120814966A CN120814966A (en) 2025-10-21
CN120814966B true CN120814966B (en) 2026-01-06

Family

ID=97366130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511311845.7A Active CN120814966B (en) 2025-09-15 2025-09-15 A method and system for assisting stroke patients to get up based on multi-scenario body position changes

Country Status (1)

Country Link
CN (1) CN120814966B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111267071A (en) * 2020-02-14 2020-06-12 上海航天控制技术研究所 Multi-joint combined control system and method for exoskeleton robot
CN115645235A (en) * 2022-10-18 2023-01-31 国家康复辅具研究中心 Multi-scene-oriented intelligent shifting machine system

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004140949A (en) * 2002-10-18 2004-05-13 Fuji Heavy Ind Ltd Travel control device based on pressure distribution pattern
DE102004029513B3 (en) * 2004-06-18 2005-09-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Handicapped person moving ability supporting device, has sensors detecting state of rest and movement of persons, and control unit producing control signals based on sensor and control instruction signals to control actuator
JP5716873B2 (en) * 2012-09-18 2015-05-13 株式会社村田製作所 Moving body
CN102922508B (en) * 2012-09-21 2015-01-07 西安交通大学 Exoskeleton robot system for reloading batteries of electric vehicle
CN105411816B (en) * 2015-12-16 2019-09-20 哈尔滨工业大学深圳研究生院 A control system and control method for a walking assist device
CN108107891A (en) * 2017-12-19 2018-06-01 北京九星智元科技有限公司 Power-assisted stroller control system and method based on all-wheel drive Multi-sensor Fusion
JP2019111635A (en) * 2017-12-26 2019-07-11 株式会社東芝 Motion support method
CN110139449A (en) * 2019-06-13 2019-08-16 安徽理工大学 A kind of full room lighting system of intelligence based on human body attitude identification
EP4099970A1 (en) * 2020-02-03 2022-12-14 Koninklijke Philips N.V. Patient transfer training system
US11786425B2 (en) * 2020-06-30 2023-10-17 Toyota Motor North America, Inc. Systems incorporating a wheelchair with an exoskeleton assembly and methods of controlling the same
US20220176559A1 (en) * 2020-12-07 2022-06-09 Sarcos Corp. Method for Redundant Control Policies for Safe Operation of an Exoskeleton
CN115157213A (en) * 2021-07-19 2022-10-11 重庆牛迪创新科技有限公司 Method and device for assisting power control of exoskeleton and computer storage medium
CN114833804B (en) * 2022-06-13 2024-09-13 山东瑞曼智能装备有限公司 Active power assisting device and method suitable for multiple scenes
CN116597119B (en) * 2022-12-30 2025-06-24 北京津发科技股份有限公司 Human-computer interaction acquisition method, device and system for wearable extended reality device
CN117234099B (en) * 2023-10-10 2025-08-26 深圳腾信百纳科技有限公司 A home control method and system combined with wearable devices
CN119734274A (en) * 2025-01-16 2025-04-01 兖矿能源集团股份有限公司 Hip-assisted exoskeleton control method based on force-position information fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant