Disclosure of Invention
The application provides a method and a system for lift-assist control of elderly stroke patients based on multi-scene posture conversion, which are used to solve the problems in the prior art that the control mode of a lift-assist device is rigid and lacks the capability to adapt dynamically to a patient's real-time state and to changes across multiple scenes.
In a first aspect, the application provides a method for lift-assist control of elderly stroke patients based on multi-scene posture conversion, which comprises the following steps:
controlling a plurality of sensors pre-arranged on the contact surface of the lift-assist device of a target person to perform synchronous sensing, so as to obtain a fusion sensing result composed of body pressure distribution data and joint motion data;
performing multi-feature coupling analysis based on the fusion sensing result to generate a joint recognition conclusion containing a posture state and a scene type;
performing dynamic mode mapping according to the joint recognition conclusion, and outputting a target control mode matched with the current scene type and posture state;
and performing power command synthesis using the target control mode to generate a cooperative control signal for driving the lift-assist device to execute the posture conversion.
Optionally, controlling the plurality of sensors pre-arranged on the contact surface of the lift-assist device of the target person to perform synchronous sensing, so as to obtain a fusion sensing result composed of body pressure distribution data and joint motion data, includes:
collecting raw pressure distribution data through a pressure sensing array uniformly distributed on the support surface of the lift-assist device;
collecting raw joint motion data through motion sensing units fixed at the limb joints of the target person;
converting the raw pressure distribution data into continuous trajectory information reflecting the displacement of the pressure center;
converting the raw joint motion data into change sequence information describing the flexion-extension angle of each joint;
and integrating the continuous trajectory information and the change sequence information in a time-series-associated manner to obtain a fusion sensing result characterizing the physical features of the user.
Optionally, performing multi-feature coupling analysis based on the fusion sensing result to generate a joint recognition conclusion containing a posture state and a scene type includes:
extracting pressure distribution features and movement pattern features from the fusion sensing result;
matching the pressure distribution features against predefined pressure distribution patterns to obtain a preliminary posture determination result;
matching the movement pattern features against predefined movement pattern sequences to obtain an auxiliary posture determination result;
determining a final posture state according to the combination of the preliminary posture determination result and the auxiliary posture determination result;
identifying the current scene type of the target person based on the final posture state and the contact characteristics of the support structure of the lift-assist device;
and combining the final posture state and the current scene type of the target person to form the joint recognition conclusion.
Optionally, performing dynamic mode mapping according to the joint recognition conclusion and outputting a target control mode matched with the current scene type and posture state includes:
parsing the current scene type and the current posture state of the target person from the joint recognition conclusion;
matching the current scene type against the scene classifications in a pre-stored lift-assist mode library to determine a candidate control mode set;
screening the candidate control mode set for a match against the current posture state and the posture requirements of each mode, and generating an adaptive control mode according to the screening result;
and determining the target control mode based on the adaptive control mode and preference features in the user's historical operation records.
Optionally, determining the target control mode based on the adaptive control mode and preference features in the user's historical operation records includes:
extracting preference data comprising lifting speed, angle adjustment and pause interval from the user's historical operation records;
converting the preference data into a control parameter adjustment vector;
combining the basic parameters of the adaptive control mode with the control parameter adjustment vector;
and generating, from the result of the combination operation, a target control mode that meets the user's personalized requirements.
Optionally, performing power command synthesis using the target control mode to generate a cooperative control signal for driving the lift-assist device to execute the posture conversion includes:
parsing the action sequence parameters of each power execution unit from the target control mode;
generating a basic drive command set according to the action sequence parameters;
converting the basic drive command set, through a motion coordination algorithm, into a time-series-associated multi-execution-unit cooperative command;
adjusting the multi-execution-unit cooperative command in real time according to the current state feedback of the device to obtain an adjusted multi-execution-unit cooperative command;
and generating, according to the adjusted multi-execution-unit cooperative command, the cooperative control signal comprising speed control, angle control and force control.
Optionally, adjusting the multi-execution-unit cooperative command in real time according to the current state feedback of the device to obtain an adjusted multi-execution-unit cooperative command includes:
acquiring, in real time, the operating parameters and load parameters of each power actuator in the lift-assist device through a state monitoring module on the lift-assist device;
comparing the operating parameters and load parameters against the expected parameters in the multi-execution-unit cooperative command to obtain a difference comparison result;
determining the parameter adjustment amount of each power actuator according to the difference comparison result;
and applying the parameter adjustment amounts to the multi-execution-unit cooperative command to generate the adjusted multi-execution-unit cooperative command.
In a second aspect, the application provides a system for lift-assist control of elderly stroke patients based on multi-scene posture conversion, comprising:
a control module, configured to control a plurality of sensors pre-arranged on the contact surface of the lift-assist device of a target person to perform synchronous sensing, so as to obtain a fusion sensing result composed of body pressure distribution data and joint motion data;
a coupling analysis module, configured to perform multi-feature coupling analysis based on the fusion sensing result to generate a joint recognition conclusion containing a posture state and a scene type;
a mode mapping module, configured to perform dynamic mode mapping according to the joint recognition conclusion and output a target control mode matched with the current scene type and posture state;
and a command synthesis module, configured to perform power command synthesis using the target control mode to generate a cooperative control signal for driving the lift-assist device to execute the posture conversion.
In a third aspect, the application provides a computing device comprising a processing component and a storage component, wherein the storage component stores one or more computer instructions, and the one or more computer instructions are to be invoked and executed by the processing component to implement the method for lift-assist control of elderly stroke patients based on multi-scene posture conversion according to the first aspect.
In a fourth aspect, the application provides a computer storage medium storing a computer program which, when executed by a computer, implements the method for lift-assist control of elderly stroke patients based on multi-scene posture conversion according to the first aspect.
According to the application, fusion data is obtained through the synchronous sensing of a plurality of sensors; the posture state and the scene type are accurately identified by multi-feature coupling analysis; a control mode highly matched with the current scene and posture is generated by dynamic mode mapping; and cooperative control is finally realized through power command synthesis. The scheme can adapt to various lift-assist scenes such as beds, wheelchairs and chairs, provides personalized posture conversion assistance according to the real-time physical state of elderly stroke patients, and effectively improves the safety and adaptability of the lift-assist process.
Further, by monitoring the operating state of each power actuator in real time, comparing the actual parameters against the expected commands and dynamically adjusting the control parameters, the lift-assist device is kept in coordinated operation across all execution units throughout the posture conversion. This state-feedback-based real-time adjustment mechanism can effectively cope with device load changes and patient position fluctuations, prevent overload or insufficient assistance, and markedly improve the stability and safety of the assistance process.
These and other aspects of the application will be more readily apparent from the following description of the embodiments.
Detailed Description
In order to make the technical solution of the present application better understood by those skilled in the art, the technical solution of the present application will be described clearly and completely below with reference to the accompanying drawings.
Some of the flows described in the specification, the claims and the foregoing figures include a plurality of operations occurring in a particular order, but it should be understood that these operations may be performed out of the order in which they appear herein, or in parallel; operation numbers such as 101 and 102 are merely used to distinguish the operations and do not themselves represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules, etc.; they do not represent a sequence, nor do they require that the "first" and the "second" be of different types.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Fig. 1 is a flowchart of a method for lift-assist control of elderly stroke patients based on multi-scene posture conversion. As shown in Fig. 1, the method comprises:
Step 101, controlling a plurality of sensors pre-arranged on the contact surface of the lift-assist device of a target person to perform synchronous sensing, so as to obtain a fusion sensing result composed of body pressure distribution data and joint motion data.
Optionally, step 101 may specifically include the following steps:
Step 1011, collecting raw pressure distribution data through a pressure sensing array uniformly distributed on the support surface of the lift-assist device;
Step 1012, collecting raw joint motion data through motion sensing units fixed at the limb joints of the target person;
Step 1013, converting the raw pressure distribution data into continuous trajectory information reflecting the displacement of the pressure center;
Step 1014, converting the raw joint motion data into change sequence information describing the flexion-extension angle of each joint;
Step 1015, integrating the continuous trajectory information with the change sequence information in a time-series-associated manner to obtain a fusion sensing result characterizing the physical features of the user.
In the above scheme, the contact surface of the lift-assist device refers to the part of the device that is in direct contact with the user's body to provide support. The pressure sensing array is uniformly distributed over this contact surface and collects the raw pressure distribution data, which reflect the pressure exerted by each part of the user's body on the contact surface and its distribution. The body pressure distribution data are information, obtained by analyzing the raw pressure distribution data, that describes the overall distribution characteristics of the pressure of the user's body on the lift-assist device. The joint motion data describe the movement state of the user's limb joints and are obtained by collecting and converting the raw joint motion data with the motion sensing units fixed at the limb joints. The fusion sensing result is comprehensive data, capable of characterizing the user's physical features, formed by integrating, in a time-series-associated manner, the continuous trajectory information reflecting the displacement of the pressure center and the change sequence information describing the flexion-extension angles of the joints.
In this embodiment, step 1011 collects raw pressure distribution data through pressure sensing arrays uniformly distributed on the support surface of the lift-assist device; these arrays continuously measure the pressure values applied by the user's body to the device surface. Step 1012 then collects raw joint motion data through motion sensing units fixed at the limb joints of the target person; these units track the angle and direction of joint movement. Step 1013 converts the raw pressure distribution data into continuous trajectory information reflecting the displacement of the pressure center, describing the dynamic change of the pressure center by analyzing the moving path of the pressure points. Step 1014 then converts the raw joint motion data into sequence information describing the changes of the joint flexion-extension angles, forming an angle change sequence by parsing the successive joint motion data points. Finally, step 1015 integrates the continuous trajectory information and the change sequence information in a time-series-associated manner, so that the movement of the pressure center and the changes of the joint angles are aligned on the time axis, forming a fusion sensing result that comprehensively characterizes the user's physical features.
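By way of illustration only, steps 1013 to 1015 might be realized as in the following minimal Python sketch, which derives the pressure-center trajectory from pressure-array frames and aligns it on the time axis with the joint flexion-extension angle sequence. The function names, array shapes and the use of linear interpolation for the time-series association are assumptions of the sketch, not part of the claimed method.

```python
# Minimal sketch of steps 1013-1015 (illustrative, assumes monotonically
# increasing timestamps and 2-D pressure-array frames as NumPy arrays).
import numpy as np

def center_of_pressure(frame: np.ndarray) -> tuple[float, float]:
    """Pressure-weighted centroid of one pressure-array frame (step 1013)."""
    total = frame.sum()
    if total == 0:
        return (float("nan"), float("nan"))
    ys, xs = np.indices(frame.shape)
    return (float((xs * frame).sum() / total), float((ys * frame).sum() / total))

def fuse(pressure_t, pressure_frames, joint_t, joint_angles):
    """Time-series-associated integration (step 1015): resample the joint
    flexion-extension angles (step 1014 output) onto the pressure timestamps."""
    cop_track = np.array([center_of_pressure(f) for f in pressure_frames])
    aligned_angles = np.interp(pressure_t, joint_t, joint_angles)
    # Each row: (t, cop_x, cop_y, flexion-extension angle) — the fusion result.
    return np.column_stack([pressure_t, cop_track, aligned_angles])
```

The key point the sketch makes concrete is the alignment: both data streams are indexed by a common time axis before any downstream feature extraction, which is what allows step 102 to couple pressure and movement features frame by frame.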
For example, in an application scenario of the lift-assist device at rehabilitation center A, a pressure sensing array is laid over the bearing surface of the device; when an elderly stroke patient uses the device, the array continuously collects raw pressure distribution data from the back and buttocks. Meanwhile, motion sensing units arranged at the patient's knee and hip joints synchronously collect raw joint motion data. After processing, the pressure distribution data are converted into continuous trajectory information reflecting the movement path of the pressure center from sitting to standing, and the joint motion data are converted into change sequence information recording the changes of the knee and hip flexion-extension angles. After the two types of information are integrated by the time-series association algorithm, the system generates a fusion sensing result that accurately reflects the patient's posture changes while standing up, providing data support for the subsequent lift-assist control.
By collecting pressure and joint motion data cooperatively and converting them into time-series-associated trajectory and sequence information, this scheme achieves comprehensive sensing of the user's physical features and provides a high-precision, synchronized multi-modal data foundation for assist control, ensuring the accuracy of the subsequent user-state determination and the effectiveness of the control strategy.
Step 102, performing multi-feature coupling analysis based on the fusion sensing result to generate a joint recognition conclusion containing the posture state and the scene type.
Optionally, step 102 may specifically include the following steps:
Step 1021, extracting pressure distribution features and movement pattern features from the fusion sensing result;
Step 1022, matching the pressure distribution features against predefined pressure distribution patterns to obtain a preliminary posture determination result;
Step 1023, matching the movement pattern features against predefined movement pattern sequences to obtain an auxiliary posture determination result;
Step 1024, determining a final posture state according to the combination of the preliminary posture determination result and the auxiliary posture determination result;
Step 1025, identifying the current scene type of the target person based on the final posture state and the contact characteristics of the support structure of the lift-assist device;
Step 1026, combining the final posture state and the current scene type of the target person to form the joint recognition conclusion.
In the above scheme, multi-feature coupling analysis refers to the process of jointly processing the pressure distribution features and the movement pattern features in the fusion sensing result to identify the user's posture state and scene type. The posture state describes the user's body posture category (e.g., lying, sitting or standing), and the scene type refers to the supporting environment in which the lift-assist device is currently located (e.g., bed, wheelchair or chair). The joint recognition conclusion is the determination formed by integrating the final posture state and the current scene type. The pressure distribution features are data, extracted from the fusion sensing result, that reflect the body's pressure distribution pattern; the movement pattern features are data describing the regularities of joint movement. The preliminary posture determination result is the initial posture classification obtained by matching the pressure distribution features against the predefined pressure distribution patterns, where the predefined pressure distribution patterns are a pre-stored set of typical pressure distribution templates. The predefined movement pattern sequences are a pre-stored set of typical joint movement patterns, and the auxiliary posture determination result is the auxiliary classification obtained by matching the movement pattern features against them. The final posture state is the precise posture classification determined by combining the preliminary and auxiliary posture determination results. The contact characteristics of the device support structure describe the manner in which the lift-assist device contacts its supporting surface (e.g., mattress, wheelchair cushion) and are used to distinguish scene types.
In this scheme, step 1021 first separates the pressure distribution features and the movement pattern features from the fusion sensing result, where the pressure distribution features reflect the pressure intensity distribution over each part of the body and the movement pattern features describe the temporal regularities of joint angle changes. Step 1022 then compares the pressure distribution features against a pre-stored library of pressure distribution patterns and obtains a preliminary posture determination result (e.g., "seated") by computing similarity. Step 1023 matches the movement pattern features against the predefined movement pattern sequences and obtains an auxiliary posture determination result (e.g., "standing transition state") through a sequence alignment algorithm. Step 1024 then fuses the preliminary and auxiliary posture determinations according to predefined decision rules (e.g., majority vote or confidence weighting) to determine the final posture state (e.g., "in sitting-to-standing transition"). Based on the final posture state, step 1025 identifies the current scene type (e.g., "wheelchair scene") by analyzing the contact characteristics of the device support structure (e.g., pressure center position and support surface shape) and matching them against predefined scene templates (e.g., a bed scene typically exhibits a large area of uniform pressure distribution). Finally, step 1026 combines the final posture state and the current scene type into the joint recognition conclusion (e.g., "sitting-to-standing transition in a wheelchair scene").
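A minimal sketch of the template matching and confidence-weighted fusion of steps 1022 to 1024 is given below. The template vectors, similarity measure and weight are illustrative assumptions; the application itself leaves the matching and decision rules open (majority vote or confidence weighting are named as examples).

```python
# Illustrative sketch of steps 1022-1024: template matching plus
# confidence-weighted combination of the two posture determinations.
import numpy as np

PRESSURE_TEMPLATES = {              # predefined pressure distribution patterns
    "lying":  np.array([0.25, 0.25, 0.25, 0.25]),
    "seated": np.array([0.05, 0.15, 0.40, 0.40]),
}

def match_template(features: np.ndarray, templates: dict):
    """Return (label, confidence) of the closest template (steps 1022/1023)."""
    scores = {k: 1.0 / (1.0 + np.linalg.norm(features - v))
              for k, v in templates.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

def fuse_determinations(primary, auxiliary, w_primary=0.6):
    """Step 1024: keep the agreed label, otherwise let the higher
    weighted confidence decide."""
    (p_label, p_conf), (a_label, a_conf) = primary, auxiliary
    if p_label == a_label:
        return p_label
    return p_label if w_primary * p_conf >= (1 - w_primary) * a_conf else a_label
```

In the embodiment that follows, the two determinations ("seated" and "standing transition") are combined by the decision rule into the compound state "in sitting-to-standing transition"; the sketch reduces this to a single label choice for brevity.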
In a specific embodiment of the above scheme, in the application scenario of the lift-assist device at rehabilitation center A, the system extracts pressure distribution features (showing pressure concentrated on the buttocks and thighs) and movement pattern features (showing a slow extension pattern of the knee and hip joints) from the fusion sensing result. The pressure distribution features closely match the pre-stored "seated" pattern, producing the preliminary posture determination result "seated". The movement pattern features match the predefined "sitting-to-standing transition" sequence, producing the auxiliary posture determination result "standing transition". The system combines the two determinations and sets the final posture state to "in sitting-to-standing transition". By analyzing the contact characteristics of the device support structure (finding a rectangular pressure distribution with moving-wheel markers), the current scene type is identified as "wheelchair scene". Finally, the joint recognition conclusion "sitting-to-standing transition in a wheelchair scene" is generated, providing accurate input for the subsequent lift-assist control.
Through multi-feature coupling analysis, this scheme achieves high-precision recognition of the posture state and scene type: the dual verification mechanism of pressure and movement features reduces the possibility of misjudgment, the analysis of the contact characteristics of the device support structure strengthens scene discrimination, and the resulting joint recognition conclusion provides a comprehensive and accurate perceptual basis for the subsequent lift-assist control.
Step 103, performing dynamic mode mapping according to the joint recognition conclusion, and outputting a target control mode matched with the current scene type and posture state.
Optionally, step 103 may specifically include the following steps:
Step 1031, parsing the current scene type and the current posture state of the target person from the joint recognition conclusion;
Step 1032, matching the current scene type against the scene classifications in a pre-stored lift-assist mode library to determine a candidate control mode set;
Step 1033, screening the candidate control mode set for a match against the current posture state and the posture requirements of each mode, and generating an adaptive control mode according to the screening result;
Step 1034, determining the target control mode based on the adaptive control mode and preference features in the user's historical operation records.
Step 1034 may specifically include the following steps:
extracting preference data comprising lifting speed, angle adjustment and pause interval from the user's historical operation records; converting the preference data into a control parameter adjustment vector; combining the basic parameters of the adaptive control mode with the control parameter adjustment vector; and generating, from the result of the combination operation, a target control mode that meets the user's personalized requirements.
In the above scheme, dynamic mode mapping refers to the process of matching and generating, from a pre-stored lift-assist mode library, a control strategy suited to the current situation according to the scene type and posture state in the joint recognition conclusion. The target control mode is the specific set of control instructions used to drive the lift-assist device to perform the posture conversion. The pre-stored lift-assist mode library contains standardized control mode templates for different scenes (e.g., bed, wheelchair, chair) and postures (e.g., lying, sitting, standing). The candidate control mode set is the group of possibly applicable control modes preliminarily screened out by scene classification matching. The adaptive control mode is the control mode, obtained from the candidate set after matching and screening against the posture requirements, that fits the current posture most closely. The preference features of the user's historical operation records reflect the user's personalized habits regarding parameters such as lifting speed, angle adjustment and pause interval: the lifting speed refers to the rising speed range the user preferred in past operations, the angle adjustment refers to the user's preferred adjustments of the back support, leg lift and other angles, and the pause interval refers to the pause durations the user preferred during posture conversion. The control parameter adjustment vector is the set of parameter adjustment instructions formed by quantizing these preference data. The basic parameters are the standard control parameters preset in the adaptive control mode.
In this scheme, step 1031 first separates the current scene type (e.g., wheelchair scene) and the current posture state of the target person (e.g., in sitting-to-standing transition) from the joint recognition conclusion. Step 1032 then matches the current scene type against the scene classifications in the pre-stored lift-assist mode library (e.g., matching the wheelchair scene with the wheelchair-scene modes in the library) and screens out all possibly applicable modes to form the candidate control mode set. Step 1033 then matches the current posture state against the posture requirements of each mode in the candidate set (e.g., matching the sitting-to-standing transition state with the posture conversion requirements of the candidate modes), removes the unmatched modes, and generates the adaptive control mode. Finally, step 1034 extracts preference data on lifting speed, angle adjustment and pause interval from the user's historical operation records, converts the preference data into a control parameter adjustment vector (e.g., quantized into a speed coefficient, an angle coefficient, etc.), combines the basic parameters of the adaptive control mode with the control parameter adjustment vector (e.g., multiplying the basic speed parameter by the speed coefficient), and generates, from the result, a target control mode that meets the user's personalized requirements.
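The following Python sketch illustrates how steps 1031 to 1034 might be realized with a flat mode library and multiplicative/additive preference adjustments. The mode entries, field names and parameter values are illustrative assumptions; only the coefficients in the usage line (speed 1.2, angle 0.9, pause 0.5 s) are taken from the embodiment described below.

```python
# Illustrative sketch of steps 1031-1034: scene match, posture screening,
# then application of the control parameter adjustment vector.
from dataclasses import dataclass

@dataclass
class ControlMode:
    scene: str
    posture: str
    base_speed: float        # basic lifting speed parameter
    base_back_angle: float   # basic back support angle, degrees
    pause_s: float = 1.0     # pause interval, seconds

MODE_LIBRARY = [
    ControlMode("wheelchair", "sit_to_stand", base_speed=10.0, base_back_angle=30.0),
    ControlMode("bed", "lie_to_sit", base_speed=6.0, base_back_angle=45.0),
]

def map_mode(scene: str, posture: str, prefs: dict) -> ControlMode:
    # Steps 1032-1033: scene classification match, then posture screening.
    candidates = [m for m in MODE_LIBRARY if m.scene == scene]
    adaptive = next(m for m in candidates if m.posture == posture)
    # Step 1034: combine basic parameters with the adjustment vector.
    return ControlMode(scene, posture,
                       base_speed=adaptive.base_speed * prefs["speed_coeff"],
                       base_back_angle=adaptive.base_back_angle * prefs["angle_coeff"],
                       pause_s=prefs["pause_s"])

# e.g. user B's preferences from the embodiment: speed 1.2, angle 0.9, pause 0.5 s
target = map_mode("wheelchair", "sit_to_stand",
                  {"speed_coeff": 1.2, "angle_coeff": 0.9, "pause_s": 0.5})
```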
In a specific embodiment of this scheme, in the application scenario of the lift-assist device at rehabilitation center A, the system parses from the joint recognition conclusion that the current scene type is a wheelchair scene and the current posture state is in sitting-to-standing transition. The system matches the wheelchair scene against the scene classifications in the pre-stored lift-assist mode library and screens out all control modes suitable for the wheelchair scene, forming the candidate control mode set. It then matches the sitting-to-standing transition state against the posture requirements of the modes in the candidate set and screens out the adaptive control mode designed specifically for sitting-to-standing in a wheelchair scene. The system then extracts from the historical operation records of user B the preference data indicating a preference for a faster rising speed, a smaller back angle adjustment and a brief pause, and converts these data into a control parameter adjustment vector (e.g., speed coefficient 1.2, angle coefficient 0.9, pause time 0.5 seconds). This vector is combined with the basic parameters of the adaptive control mode, finally generating a target control mode that meets user B's personalized requirements.
Through dynamic mode mapping, this scheme achieves a precise transition from standardized control to personalized control: the dual matching of scene and posture ensures the basic applicability of the control mode, and the introduction of the user's historical preference data further refines the control parameters, so that the generated target control mode both satisfies the requirements of the current environment and meets the user's personalized needs, markedly improving the suitability and user experience of the assistance process.
Step 104, performing power command synthesis using the target control mode to generate a cooperative control signal for driving the lift-assist device to execute the posture conversion.
Optionally, step 104 may specifically include the following steps:
Step 1041, parsing the action sequence parameters of each power execution unit from the target control mode;
Step 1042, generating a basic drive command set according to the action sequence parameters;
Step 1043, converting the basic drive command set, through a motion coordination algorithm, into a time-series-associated multi-execution-unit cooperative command;
Step 1044, adjusting the multi-execution-unit cooperative command in real time according to the current state feedback of the device to obtain an adjusted multi-execution-unit cooperative command;
Step 1044 may specifically include the following steps:
acquiring, in real time, the operating parameters and load parameters of each power actuator in the lift-assist device through a state monitoring module on the lift-assist device; comparing the operating parameters and load parameters against the expected parameters in the multi-execution-unit cooperative command to obtain a difference comparison result; determining the parameter adjustment amount of each power actuator according to the difference comparison result; and applying the parameter adjustment amounts to the multi-execution-unit cooperative command to generate the adjusted multi-execution-unit cooperative command.
Step 1045, generating, according to the adjusted multi-execution-unit cooperative command, the cooperative control signal comprising speed control, angle control and force control.
In the above scheme, power command synthesis refers to the process of generating specific drive commands according to the target control mode, and the cooperative control signal is the finally output set of control commands used to coordinate the multiple execution units in completing the posture conversion. Each power execution unit is an independent component in the lift-assist device responsible for generating assistive power (e.g., a motor or a hydraulic cylinder), and the action sequence parameters describe the action order, amplitude and timing requirements of each execution unit. The basic drive command set is the preliminary set of control commands converted from the action sequence parameters. The motion coordination algorithm is the processing logic that converts the basic commands into commands coordinated in time across the multiple execution units. The multi-execution-unit cooperative command is the set of control commands, with precise timing relationships, obtained after processing by the motion coordination algorithm. The adjusted multi-execution-unit cooperative command is the final command after the cooperative command has been corrected according to real-time state feedback. The state monitoring module is the sensing unit that collects the device's operating state; the operating parameters and load parameters of each power actuator respectively reflect the actuator's actual operating state and the force/torque it bears. The expected parameters are the ideal operating values preset in the cooperative command, the difference comparison result is the deviation analysis of the actual parameters against the expected values, and the parameter adjustment amount is the correction value calculated from the difference.
In this scheme, step 1041 first extracts the action sequence parameters of each power execution unit from the target control mode; these parameters specify the action order, range of travel and timing requirements of each execution unit. Step 1042 then converts the action sequence parameters into a specific basic drive command set, turning the abstract action parameters into electrical signals or digital commands that can directly drive the execution units. Step 1043 then processes the basic drive command set with the motion coordination algorithm; by establishing a spatio-temporal relationship model among the execution units, the algorithm integrates the independent commands into a multi-execution-unit cooperative command with precise timing coordination, ensuring that the units' actions are synchronized and conflict-free. Step 1044 then uses the state monitoring module to collect, in real time, the operating parameters (e.g., speed and position) and load parameters (e.g., pressure and torque) of each power actuator, compares the actual parameters against the expected parameters in the multi-execution-unit cooperative command to obtain the difference comparison result, calculates the parameter adjustment amount required for each actuator from the difference, and applies the adjustments to the cooperative command to generate the adjusted multi-execution-unit cooperative command. Finally, step 1045 generates, from the adjusted multi-execution-unit cooperative command, a cooperative control signal comprising speed control (adjusting movement speed), angle control (adjusting joint angles) and force control (adjusting output force/torque); this signal can directly drive the lift-assist device to execute the posture conversion.
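A minimal sketch of steps 1041 to 1045 follows, under simplifying assumptions: two execution units (back support and leg lift), a fixed angle ratio standing in for the motion coordination algorithm, and a proportional correction standing in for the feedback adjustment law. The application does not specify these laws, and all numeric values except the 30°/15° coordination of the embodiment below are illustrative.

```python
# Illustrative sketch of steps 1041-1045: per-unit commands, coordination,
# and proportional correction against real-time state feedback.
from dataclasses import dataclass

@dataclass
class UnitCommand:
    unit: str            # power execution unit, e.g. "back" or "legs"
    t_start: float       # start time from the action sequence parameters, s
    target_angle: float  # degrees
    speed: float         # deg/s
    force: float         # N

def coordinate(back: UnitCommand, legs: UnitCommand, ratio: float = 0.5):
    """Step 1043: tie the leg-lift angle to the back angle (e.g. legs rise
    15 deg while the back rises 30 deg, as in the embodiment below)."""
    legs.target_angle = back.target_angle * ratio
    legs.t_start = back.t_start          # synchronized, conflict-free start
    return [back, legs]

def adjust(cmd: UnitCommand, actual_speed: float, actual_force: float,
           gain: float = 0.5) -> UnitCommand:
    """Step 1044: proportional correction — a lagging actual value boosts
    the command, an overshooting one reduces it."""
    cmd.speed += gain * (cmd.speed - actual_speed)
    cmd.force += gain * (cmd.force - actual_force)
    return cmd

# Feedback case from the embodiment: back force below expectation (raised),
# leg speed above expectation (lowered). Numeric values are assumptions.
back = UnitCommand("back", 0.0, target_angle=30.0, speed=5.0, force=120.0)
legs = UnitCommand("legs", 0.0, target_angle=0.0, speed=4.0, force=80.0)
back, legs = coordinate(back, legs)
back = adjust(back, actual_speed=5.0, actual_force=100.0)  # force 120 -> 130
legs = adjust(legs, actual_speed=5.0, actual_force=80.0)   # speed 4.0 -> 3.5
```

The adjusted commands are then rendered into the speed, angle and force dimensions of the cooperative control signal (step 1045); in a real device this correction would run in a closed loop at the state-monitoring rate rather than once.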
In the application scenario of the lift-assist device at rehabilitation center A, the system parses from the target control mode the action sequence parameters of the back support unit and the leg lift unit (including the extension angle, movement speed and force requirements). These parameters are converted into a basic drive command set that controls motor speed and travel. After processing by the motion coordination algorithm, a multi-execution-unit cooperative command is generated (e.g., raising the legs by 15 degrees while the back rises by 30 degrees), ensuring that the back and leg movements stay synchronized. The state monitoring module detects that the actual pressure value of the back support unit is lower than expected and that the movement speed of the leg unit is higher than expected; the system therefore calculates the parameter adjustments required to increase the back output force and reduce the leg speed. After these adjustments are applied, an adjusted multi-execution-unit cooperative command is generated, and finally a cooperative control signal containing appropriate speed, angle and force control is output, so that the device smoothly completes the posture conversion from sitting to standing.
Through the power command synthesis and real-time adjustment mechanism, this scheme converts the abstract control mode into precisely executable drive signals: the motion coordination algorithm ensures the synchrony and coordination of the multiple execution units, the state-feedback adjustment overcomes the uncertainty and individual differences of actual operation, and the resulting cooperative control signal carries the control dimensions of speed, angle and force, markedly improving the stability, safety and adaptability of the posture conversion process.
Fig. 2 is a schematic structural diagram of a system for lift-assist control of elderly stroke patients based on multi-scene posture conversion. As shown in Fig. 2, the system includes:
a control module 21, configured to control a plurality of sensors pre-arranged on the contact surface of the lift-assist device of a target person to perform synchronous sensing, so as to obtain a fusion sensing result composed of body pressure distribution data and joint motion data;
a coupling analysis module 22, configured to perform multi-feature coupling analysis based on the fusion sensing result to generate a joint recognition conclusion containing a posture state and a scene type;
a mode mapping module 23, configured to perform dynamic mode mapping according to the joint recognition conclusion and output a target control mode matched with the current scene type and posture state;
and a command synthesis module 24, configured to perform power command synthesis using the target control mode to generate a cooperative control signal for driving the lift-assist device to execute the posture conversion.
The system for lift-assist control of elderly stroke patients based on multi-scene posture conversion shown in Fig. 2 can execute the method for lift-assist control of elderly stroke patients based on multi-scene posture conversion of the embodiment shown in Fig. 1; its implementation principles and technical effects are not repeated here. The specific manner in which the individual modules and units perform their operations in the above system embodiment has been described in detail in the method embodiment and will not be elaborated here.
In one possible design, the system for lift-assist control of elderly stroke patients based on multi-scene posture conversion of the embodiment shown in Fig. 2 may be implemented as a computing device which, as shown in Fig. 3, may include a storage component 31 and a processing component 32.
The storage component 31 stores one or more computer instructions for invocation and execution by the processing component 32.
The processing component 32 is configured to implement the method for lift-assist control of elderly stroke patients based on multi-scene posture conversion of the embodiment shown in Fig. 1.
The processing component 32 may include one or more processors to execute computer instructions to perform all or part of the steps of the method described above. Of course, the processing component may also be implemented as one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements for executing the method described above.
The storage component 31 is configured to store various types of data to support operation at the terminal. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
Of course, the computing device may also include other components, such as input/output interfaces, display components, communication components, and the like.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The computing device may be a physical device or an elastic computing host provided by a cloud computing platform; in the latter case the computing device may be a cloud server, and the processing component, the storage component and the like may be basic server resources rented or purchased from the cloud computing platform.
The embodiment of the application also provides a computer storage medium storing a computer program which, when executed by a computer, implements the method for lift-assist control of elderly stroke patients based on multi-scene posture conversion of the embodiment shown in Fig. 1.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
It should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application, not to limit it; although the present application has been described in detail with reference to the above embodiments, those skilled in the art should understand that the technical solutions described in the above embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.