
WO2022242825A1 - Training system and method with emotion assessment - Google Patents

Training system and method with emotion assessment

Info

Publication number
WO2022242825A1
WO2022242825A1 (PCT/EP2021/062989)
Authority
WO
WIPO (PCT)
Prior art keywords
exercise
user
related content
emotional status
training system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2021/062989
Other languages
French (fr)
Inventor
Calin-Laurentiu POPESCU
Valerie BURES-BÖNSTRÖM
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Etone Motion Analysis GmbH
Original Assignee
Etone Motion Analysis GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Etone Motion Analysis GmbH filed Critical Etone Motion Analysis GmbH
Priority to PCT/EP2021/062989 priority Critical patent/WO2022242825A1/en
Publication of WO2022242825A1 publication Critical patent/WO2022242825A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Definitions

  • the present solution in particular relates to a training system comprising at least one fitness device.
  • Fitness devices which are to be used in gyms or at home and which include at least one display for displaying exercise-related content to a user are becoming more and more popular.
  • the exercise-related content may be a video retrieved from a local memory of the fitness device or from a memory of a computing device connected to the fitness device via a wired or wireless network.
  • exercise-related content may be provided as a stream, for example via the Internet. Based on the exercise-related content the user is instructed how to perform different fitness exercises.
  • At least one display of the fitness device may be rather large and held by a frame structure of the fitness device having more than 1 meter in length and more than 40 cm in width.
  • fitness devices which include a semi-reflective mirror arranged in front of the at least one display.
  • a semi-reflective mirror may reflect incident light from a user facing a front surface of the mirror to present a reflected image of the user. Further, the mirror may transmit incident light from a rear surface of the mirror so that exercise content displayed via the at least one display is superimposed on at least a portion of the reflected image of the user. Thereby, the user may see his/her reflected image in combination with a visual element, like an image of a trainer or avatar in the exercise-related content, so that the user may easily notice whether a specific exercise is performed properly or not.
  • a fitness device may be equipped with a camera device configured to capture the user while exercise-related content is displayed via the at least one display in order to provide electronic feedback on how well the user is following the instructions of the exercise content and performing an exercise displayed to the user.
  • the user may also receive visual and/or acoustic feedback on her/his current training.
  • the proposed solution in particular relates to a training system comprising at least one fitness device and at least one computing device.
  • the at least one fitness device comprises at least one visual presentation module for presenting exercise-related content to a user of the fitness device.
  • the at least one fitness device further comprises at least one camera device configured to capture images including a face of the user while the at least one visual presentation module presents the exercise-related content to the user.
  • the at least one camera device may also be referred to as a visual recording module or as an image capturing unit in the following thereby emphasizing that the camera device may also be able to capture non-visible image data.
  • the camera device may just or additionally capture infrared light.
  • the at least one computing device of the training system comprises at least one processor and is configured to receive the captured images, to identify and track one or more facial characteristics of the face of the user while the at least one visual presentation module presents the exercise-related content, and to determine, at least once while the at least one visual presentation module presents the exercise-related content, an emotional status of the user based at least in part on the one or more facial characteristics.
  • the at least one computing device may then be further configured to adapt the exercise-related content based at least in part on the emotional status.
  • a proposed training system may thus improve user experience and/or optimize a current workout and/or a future workout based on a determined emotional status of the user taking into account one or more facial characteristics.
  • the proposed training system allows for analyzing a face of a user in order to electronically evaluate, for example by an artificial intelligence process and thus by machine learning algorithms, whether the user likes or dislikes the presented exercise-related content and/or struggles with performing an exercise following the instructions presented in the exercise-related content.
  • the proposed training system may allow for a real-time adaption and/or for a future adaption of a training session instructed by the exercise-related content.
  • the emotional status may be the sole criterion or one of at least two criteria for deciding on an adaption of the exercise-related content.
  • one or more additional criteria may be taken into account for deciding whether and how the exercise-related content to be presented to the user is to be adapted.
  • the at least one computing device is further configured to determine an initial emotional status based on the one or more facial characteristics of the user at a first point in time.
  • the at least one computing device may thus be configured to determine a status quo emotional status before, at or shortly after the start of the fitness exercise and in particular before, at or shortly after the start of a training session comprising one or more consecutive fitness exercises presented in the exercise-related content.
  • At a subsequent, and thus later, second point in time while the at least one visual presentation module presents the exercise-related content, at least one additional emotional status may be determined based on the one or more facial characteristics.
  • the at least one computing device of this example is therefore configured to determine an emotional status at least twice, firstly for calibrating the system and secondly to assess how performing an exercise instructed by the exercise-related content affects the user of the fitness device. Accordingly, the at least one computing device may be configured to adapt the exercise-related content based at least in part on the at least one additional emotional status (determined after the initial emotional status). For example, the at least one computing device may determine whether the at least one additional emotional status differs from the initial emotional status by more than a threshold and adapt the exercise-related content if the at least one additional emotional status differs from the initial emotional status by more than the threshold.
  • If the at least one additional emotional status does not differ from the initial emotional status by more than the threshold, the exercise-related content is not adapted and thus not altered, assuming that the currently presented exercise-related content is appropriate for the individual user and may thus be maintained for the current and/or also for a future training session.
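The patent leaves the data representation of an emotional status open. As a minimal illustrative sketch, assume an emotional status is encoded as a vector of per-emotion scores and that "differs by more than a threshold" is measured with a Euclidean distance (both are assumptions for illustration, not the patent's method):

```python
import numpy as np

# One score per pre-defined emotion (the set shown in Figure 5).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def should_adapt(initial: np.ndarray, additional: np.ndarray, threshold: float = 0.4) -> bool:
    """Adapt the exercise-related content only if the additional emotional status
    differs from the initial (status quo) emotional status by more than a threshold."""
    return float(np.linalg.norm(additional - initial)) > threshold

# Status quo at the start of the session vs. a status determined mid-exercise.
initial = np.array([0.05, 0.02, 0.03, 0.30, 0.05, 0.05, 0.50])
additional = np.array([0.30, 0.10, 0.25, 0.05, 0.20, 0.05, 0.05])
print(should_adapt(initial, additional))  # True -> trigger a content adaptation
```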
  • determining the one or more facial characteristics comprises assigning a score to at least two emotion parameters representative of at least two different pre-defined emotions.
  • the at least one computing device may for example include a facial characteristics evaluation module, in particular a facial characteristics evaluation module including a machine-learned facial characteristics emotion algorithm, capable of associating certain facial characteristics and in particular micro-expressions of a face of a user with different kinds of emotions. Based on the tracked one or more facial characteristics the at least one computing device may thus be able to assign scores indicating a probability that the current expression of the face of the user expresses a certain emotion.
  • Pre-defined emotions for which scores are to be assigned may for example be disgust, fear, surprise, or happiness.
  • Pre-defining different types of emotions and assigning scores to them based on the one or more facial characteristics allows for categorizing collected data on the one or more facial characteristics and to provide for organized data sets based on which an algorithm may decide on an emotional status of the user. This also facilitates an algorithm-driven decision whether an individual exercise for the particular user should be adapted or not in order to keep the user motivated and/or the proposed training effective for the individual user.
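A conceivable interface for such a facial characteristics evaluation module, with a stub classifier standing in for the machine-learned model (the names and the softmax normalization are illustrative assumptions):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EmotionScores:
    """Probability-like scores for a set of pre-defined emotions."""
    anger: float
    disgust: float
    fear: float
    happiness: float
    sadness: float
    surprise: float
    neutral: float

def score_emotions(facial_features: np.ndarray, model) -> EmotionScores:
    """Map tracked facial characteristics (e.g., landmark positions) to emotion scores.
    'model' stands in for any trained classifier returning one logit per emotion."""
    logits = np.asarray(model.predict(facial_features.reshape(1, -1))[0], dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax: scores indicate probabilities summing to 1
    return EmotionScores(*probs)
```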
  • the one or more emotion parameters to which scores are assigned are associated with at least one first group of emotions and with at least one second group of emotions.
  • scores will be assigned at least once to the emotion parameters of each of the first and second groups of emotions during the presentation of the exercise-related content.
  • a first group may relate to negative emotions/feelings indicating that the user dislikes and/or struggles with an exercise to be performed in front of and/or using the fitness device following visually and, if applicable, additionally audibly presented instructions of the exercise-related content.
  • a second group may then relate to positive emotions/feelings indicating that the user does not struggle or even enjoys an exercise to be performed.
  • an evaluation algorithm may have a solid data basis for deciding on whether the exercise-related content must be adapted or not.
  • Determining the emotional status based on the one or more facial characteristics comprises combining the scores assigned to the at least two emotion parameters. This may involve using the scores in an evaluation algorithm for assessing whether the emotional status indicates that the user is satisfied with the currently presented exercise-related content and the exercise the user had to perform following the instructions in the exercise-related content, and/or whether the level of intensity for the individual user performing the exercise following the instructions of the exercise-related content is appropriate.
  • Combining scores assigned to at least two emotion parameters may in particular involve separately combining the scores of each group of emotions and respectively generating a combined score for each group of emotions.
  • identifying and tracking the one or more facial characteristics may result in assigning scores to a plurality of emotion parameters associated with two different groups of emotions.
  • high scores, i.e., scores which individually or in combination exceed at least one threshold value, for emotion parameters of the first group may speak for the necessity to adapt the exercise-related content and thus to change a workout for the individual user using the fitness device.
  • high scores for emotion parameters belonging to the other, second group may rather speak for keeping the exercise-related content as is.
  • the at least one computing device may be configured to apply a metric-based evaluation function using combined scores for each group of emotions for determining the emotional status.
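A minimal sketch of the grouping and combination just described, assuming the group scores are simple sums of the member scores and the metric is a margin comparison between the two group scores (the patent leaves the exact metric open):

```python
NEGATIVE = ["anger", "disgust", "fear", "sadness", "surprise"]  # first group
POSITIVE = ["happiness", "neutral"]                             # second group

def combined_group_scores(scores: dict[str, float]) -> tuple[float, float]:
    """Separately combine the scores of each group of emotions."""
    return (sum(scores[e] for e in NEGATIVE), sum(scores[e] for e in POSITIVE))

def emotional_status(scores: dict[str, float], margin: float = 0.2) -> str:
    """Metric-based evaluation using the combined score of each group."""
    negative, positive = combined_group_scores(scores)
    if negative > positive + margin:
        return "struggling"    # speaks for adapting the exercise-related content
    if positive > negative + margin:
        return "comfortable"   # speaks for keeping the content as is
    return "indifferent"
```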
  • Adapting the exercise-related content may for example include adapting at least one of a type of an exercise presented to the user in the exercise-related content, a number of repetitions for an exercise presented to the user in the exercise-related content, a tempo of an exercise presented to the user in the exercise-related content, and a weight to be used for an exercise presented to the user in the exercise-related content.
  • Adapting a type of exercise may, for example, include switching from one exercise to another to further individualize a workout presented to the individual user by the exercise-related content. This may also include proposing another exercise for the next training session for the user using the fitness device.
  • Adapting a number of repetitions of an exercise may include adapting the number of repetitions a user is instructed to repeat an exercise presented to the user in the exercise-related content.
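The adaptable quantities listed above could be carried in a small prescription record; the field names and the concrete adaptation rule below are illustrative assumptions only:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ExercisePrescription:
    exercise_type: str   # e.g., "dumbbell bicep curl"
    repetitions: int     # number of repetitions the user is instructed to perform
    tempo_bpm: int       # pace at which the exercise is demonstrated
    weight_kg: float     # weight to be used for the exercise

def reduce_intensity(p: ExercisePrescription) -> ExercisePrescription:
    """One possible adaptation if the emotional status indicates struggling:
    lower the weight and tempo while keeping the type of exercise."""
    return replace(p, weight_kg=max(p.weight_kg - 4.0, 1.0),
                   tempo_bpm=max(p.tempo_bpm - 10, 30))

curl = ExercisePrescription("dumbbell bicep curl", repetitions=15, tempo_bpm=60, weight_kg=12.0)
print(reduce_intensity(curl).weight_kg)  # 12 kg -> 8 kg, as in the Figure 8 example
```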
  • adapting the exercise-related content may include adapting at least one visual element presented as a part of the exercise-related content on a display of the fitness device.
  • a visual element may for example relate to at least one of a type of an exercise presented to the user in the exercise-related content, a number of repetitions and/or a tempo of an exercise presented to the user in the exercise-related content.
  • the at least one computing device may thus be configured to adapt corresponding visual information for the user indicating an increase or decrease of the intensity of a workout to be performed.
  • the exercise-related content includes a virtual trainer / avatar which is presented on a display of the fitness device.
  • a virtual trainer may provide instructions to the user how to perform an exercise, for example by demonstrating the exercise to be performed.
  • presenting the exercise-related content with a virtual trainer may include outputting audible sounds via at least one speaker of the fitness device.
  • the fitness device may be configured to output music and/or audible instructions relating to an exercise to be performed.
  • the audible instructions may also be associated with a virtual trainer presented on a display of the fitness device.
  • a presentation of the virtual trainer and/or audible sounds may be changed resulting in a corresponding adaption of the exercise-related content.
  • a tempo with which the virtual trainer performs a demonstrated exercise, facial expressions of the virtual trainer and/or (motivational) gestures of the virtual trainer may be altered based on the determined emotional status of the user.
  • tone and/or volume of audible sounds may be altered for individually adapting the exercise-related content.
  • the exercise-related content may be adapted in real-time. Accordingly, the currently presented content may be adapted and real-time feedback is thereby provided at least in part based on the emotional status of the user. Additionally or alternatively, an adapted version of the exercise-related content for later presentation may be generated. This may include storing the adapted version of the exercise-related content and/or at least one parameter and/or indicator resulting in the intended adaption of the exercise-related content for later presentation in a future training session.
  • an evaluation algorithm taking into account the emotional status may decide that the intensity level for the individual user is to be increased in a next training session.
  • a further criterion may thus be provided for intensifying a workout, i.e., for individually adapting a user- specific training schedule and thus a corresponding exercise-related content by which the user is instructed so that a next workout does not get too hard or too easy for the user.
  • Determining the emotional status of the user may also be based at least in part on biometric data of the user and/or motion tracking data for the user.
  • the determined emotional status may thus also take into account additional sensor data relating to biometric functions of the user and/or the user's motions while performing an exercise instructed by the exercise-related content. For example, the user's heart rate, at least one temperature of the user's body parts and/or how well the user performs an exercise (compared to a reference performance) may be taken into account when determining the user's emotional status.
  • adapting the exercise-related content may also be based at least in part on biometric data of the user and/or motion tracking data for the user. Accordingly, a decision whether and how the exercise-related content is to be adapted does not only take into account the emotional status based on one or more facial characteristics and thus for example micro-expressions on the face of the user while performing an exercise but also additional data.
  • At least one computing device additionally using biometric data of the user and/or motion tracking data for the user may be configured to generate emotion data associated with the determined emotional status and to evaluate the emotion data and at least one of the biometric data and the motion tracking data using machine learning for deciding on an adaption of the exercise-related content.
  • the at least one computing device may thus be configured to evaluate respectively measured raw data for deciding on and determining which kind and/or degree of adaption of the exercise-related content is to be triggered.
  • the at least one computing device is configured to identify and track the one or more facial characteristics by using markerless face tracking.
  • the at least one computing device may thus also implement an algorithm for markerless facial expression capturing based on the received images. Facial expression capturing allows for evaluating the emotional status of the user while performing an exercise instructed by the exercise-related content presented to the user.
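The patent does not name a tracking implementation. As one illustration only, per-frame markerless facial landmarks can be obtained with the open-source MediaPipe Face Mesh; any downstream use of the landmarks as "virtual facial markers" is an assumption:

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,   # video mode: landmarks are tracked across frames
    max_num_faces=1,
    refine_landmarks=True,
)

cap = cv2.VideoCapture(0)      # camera device facing the user
ok, frame_bgr = cap.read()
if ok:
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Normalized (x, y, z) positions of the tracked facial landmarks.
        landmarks = [(lm.x, lm.y, lm.z)
                     for lm in results.multi_face_landmarks[0].landmark]
cap.release()
face_mesh.close()
```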
  • the at least one computing device may be part of the fitness device of the training system. Nevertheless, the at least one computing device of the fitness device does not necessarily need to carry out most of the necessary calculations, e.g., for determining the emotional status.
  • the at least one computing device may be part of the fitness device and may be configured (a) to transmit emotional data (and, if applicable, at least one of biometric data and motion tracking data) to a remote analysis server of the training system implementing machine learning algorithms and (b) to receive an analysis result, in response to transmitting the data to the remote analysis server, from the remote analysis server indicating whether and how the exercise-related content is to be adapted.
  • the at least one computing device in the fitness device itself may thus need only moderate processing power, given that most of the computational load for evaluating the provided data and deciding on whether and how the exercise-related content for the individual user is to be adapted is processed by the at least one remote analysis server.
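A sketch of this offloading pattern with a hypothetical HTTP endpoint and payload schema (neither the endpoint nor the response format is specified in the patent):

```python
import requests

ANALYSIS_URL = "https://analysis.example.com/v1/evaluate"  # hypothetical endpoint

def request_adaptation(emotion_data: dict, biometric_data: dict, motion_data: dict) -> dict:
    """(a) Transmit the captured data to the remote analysis server and
    (b) receive an analysis result indicating whether and how to adapt the content."""
    payload = {"emotion": emotion_data,
               "biometrics": biometric_data,
               "motion": motion_data}
    response = requests.post(ANALYSIS_URL, json=payload, timeout=5.0)
    response.raise_for_status()
    return response.json()  # e.g. {"adapt": true, "action": "decrease_intensity"}
```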
  • the at least one computing device may be located remote from the fitness device.
  • a remote computing device can be connected to the fitness device via a wired or wireless local network, including a short-range connection (like NFC or Bluetooth) or a longer-range network (like Wi-Fi).
  • the computing device may be connected to the fitness device via the Internet.
  • the computing device may then include at least one server, e.g., at least one cloud server.
  • the fitness device further comprises a mirror element (“mirror” in the following) which reflects incident light from the user facing a front surface of the mirror element to present a reflected image of the user and which transmits incident light from the rear surface of the mirror.
  • the at least one display may then be arranged behind the mirror for displaying content superimposed on at least a portion of the reflected image of the user.
  • Content and/or a visual element may therefore be displayed in addition to the reflected image of the user - not meaning that a visual element or the content is necessarily displayed completely superimposed on the reflected image.
  • the content and/or the visual element may be displayed side-by-side with the reflected image of the user or just partially superimposed on the reflected image (e.g., depending on the size of the reflected image).
  • a visual element may be just a “passive” element, for example, relating to parts of a video or a stream instructing the user how a fitness exercise is to be performed or indicating to the user parameters to be taken into account when performing a fitness exercise, such as a number of repetitions or a threshold pulse which should not be exceeded.
  • a displayed visual element may also relate to biometric data of the user.
  • a visual element may be “active” and may therefore define an interface element of a user interface of the fitness device.
  • the at least one visual element may thus define a region at which a user touching a touch-sensitive portion of a front surface of the fitness device generates an actuation signal further processed by a computing device coupled to the at least one display. Touching a region of the front surface where a visual element is displayed may thus trigger an operation event, e.g., in particular including presenting a new or altered visual element.
  • a capacitive field may be applied to at least a portion of the front surface.
  • the at least one touch-sensitive portion may form part of a user interface of the fitness device at a front side of the fitness device.
  • the at least one camera device may be further configured to capture at least one image of the user while the user performs an exercise in front of the display (for example while an exercise-related content is displayed which instructs the user how to perform an exercise).
  • the at least one computing device may be configured to generate a feedback to the user resulting from a comparison, by the at least one computing device, of the image data with reference data.
  • the reference data may be stored in a memory of the at least one computing device and may relate to a completely correct performance of a fitness exercise displayed.
  • the feedback to the user may be visual and/or audible.
  • the fitness device may further comprise a contactless user position determination system configured to determine a position of the user in front of the display in a contactless manner.
  • the contactless user position determination system may, for example, be configured to generate position data indicating a determined position of the user (for example also including a posture of the user) while the user performs an exercise.
  • the at least one computing device may be configured to generate the above-mentioned feedback additionally based on such position data. Based on the position data the at least one computing device may for example evaluate whether the user correctly performs an exercise the user is instructed to do by the displayed content. Based on the position data the at least one computing device may also be configured to adapt an appearance of at least one visual element displayed by the display.
  • This may include changing at least one of a size, a shading, a color, a hue and a brightness of the at least one visual element displayed at the device and indicating how well the user is performing an exercise in front of the display.
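As one conceivable instance of such an appearance change, a performance measure could be mapped to the color of the visual element, interpolating from red (poor match with the reference) to green (good match); the [0, 1] score range is an assumption:

```python
def performance_color(score: float) -> tuple[int, int, int]:
    """Map a performance score in [0, 1] to an RGB color for the visual element."""
    s = min(max(score, 0.0), 1.0)                   # clamp to the assumed score range
    return (int(255 * (1.0 - s)), int(255 * s), 0)  # red -> green

print(performance_color(0.9))  # (25, 229, 0): mostly green, exercise performed well
```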
  • One option for effectively determining user position in a contactless manner may include using a laser positioning system.
  • the fitness device may further comprise at least one communication interface for communicating with at least one supervisor device of the training system.
  • the at least one computing device may be configured to transmit data to the at least one supervisor device via the at least one communication interface. This may, for example, include transmitting the biometric data wirelessly via the communication interface to the remote supervisor device.
  • the remote supervisor device may then, for example, be used by a human trainer, mentor or therapist monitoring the performance of the user in front of the device.
  • the fitness device may be a stand-alone fitness device configured to display exercise-related content to the user.
  • the fitness device may also be configured to allow for non-fitness related activities, such as video conferencing.
  • the proposed solution also relates to a method for automatically adapting exercise-related content to be presented to a user using a fitness device.
  • the proposed method comprises presenting exercise-related content to the user via the fitness device; capturing images including a face of the user while exercise-related content is presented to the user (and the user is performing an exercise following instructions in the exercise-related content); and identifying and tracking one or more facial characteristics of the face of the user in the images while the exercise-related content is presented to the user.
  • the proposed method further comprises determining, at least once while the exercise-related content is presented to the user, an emotional status of the user based at least in part on the one or more facial characteristics, and adapting the exercise-related content based at least in part on the emotional status.
  • Embodiments of the proposed method may in particular relate to operating an embodiment of a proposed training system. Accordingly, features and advantages mentioned above and below in the context of an embodiment of a proposed training system shall therefore also apply to embodiments of the proposed method and vice versa.
  • Figure 1 is a perspective view of a fitness device of an embodiment of the proposed training system.
  • Figure 2 is a front view of the fitness device of Figure 1 showing additional components.
  • Figure 3 is a perspective view on the fitness device of Figures 1 and 2 in operation, with a user performing a fitness exercise in front of the fitness device.
  • Figure 4 shows the fitness device with another user performing another exercise in front of the fitness device.
  • Figure 5 shows an exemplary image captured by a camera device of the fitness device additionally illustrating several identified facial characteristics of the user and assigned scores to several pre-defined emotions based on the facial characteristics.
  • Figure 6 shows a flowchart for a metric-based decision-making process according to an embodiment of the proposed solution taking into account assigned scores for different pre-defined emotions and corresponding parameters for deciding on an emotional status of the user using the fitness device of Figures 1 to 4.
  • Figure 7 shows a flowchart illustrating an algorithm-based decision-making process implemented in an embodiment of a proposed training system for deciding whether and how exercise-related content and thus an individual workout for a user is to be adapted in real-time and for a future training session.
  • Figure 8 shows a further flowchart illustrating a metric-based decision-making process for a certain exercise, taking into account a determined emotional status of the user during a performance of the exercise.
  • FIG 1 illustrates an embodiment of a fitness device 1 of a proposed training system (sometimes also called a “smart mirror” or an “interactive screen”).
  • the fitness device 1 of the embodiment of FIG 1 may be configured as a stand-alone fitness device to be used in a gym or at home, for example used for workouts, physiotherapy and physical rehabilitation.
  • The fitness device 1 comprises a frame assembly including a frame structure 10, a visual presentation module comprising a display 11 and a mirror 12 (the display 11 and the mirror 12 defining a display assembly of the fitness device 1).
  • the frame structure 10 defines a continuous frame surrounding the mirror 12. The mirror 12 is thus held within the frame of the frame structure 10.
  • the display 11 includes a screen for displaying exercise-related content to a user standing in front of the fitness device 1.
  • the frame assembly of the fitness device 1 defines a front side 1A and a back side 1B.
  • the exercise content displayed via the display 11 may hence be watched by a user facing the front side 1A.
  • the mirror 12 may reflect incident light from the user facing a front surface of the mirror 12 to present a reflected image of the user.
  • the mirror 12 may further transmit incident light from the rear surface of the mirror 12 so that exercise content displayed via the display 11 may be superimposed on at least a portion of the reflected image of the user.
  • the fitness device 1 may provide instant feedback on a performance of the user when imitating fitness exercises based on the displayed exercise content.
  • At least one camera device 5.1, 5.2, 5.3 is part of the fitness device 1 to capture images of the user during a fitness exercise in order to electronically assess an emotional status of the user while performing a fitness exercise.
  • the at least one camera device 5.1, 5.2, 5.3 may capture images for providing feedback on the user’s performance, i.e., whether the user imitates a certain exercise properly and, e.g., to which extent.
  • the fitness device 1 as shown in FIG 1 includes the display 11 having a screen with a diagonal of at least 34 inches, for example more than 40 inches. Accordingly, the frame structure 10 defines a surface area at the front side of more than 0.5 m². For example, a substantially rectangular surface at the front side 1A measures more than 1.5 m in height and more than 0.4 m in width.
  • the fitness device 1 of FIG 1 may rest on a floor via at least two different components.
  • a first component is provided by a bottom part 100 of the frame structure 10.
  • the bottom part 100 allows the frame structure 10 to rest on the floor and thus provides a support directly below the display 11.
  • a stand 2 is provided which allows the frame structure 10 to be positioned inclined to the vertical at the floor and to be nevertheless held in a stable position.
  • the stand 2 extends at an angle to the back side 1B of the frame structure 10 so that a base portion of the stand 2 (formed by a base member 21) is configured to rest on the floor at a specified distance from the bottom part 100.
  • the stand 2 of this embodiment is designed as a rectangular frame including a base member 21, a crossbar 22 and two parallel lateral bars 20.1, 20.2.
  • the lateral bars 20.1, 20.2 are connected to each other at a first (upper) end via the crossbar 22 and at a second (lower) end via the base member 21.
  • the stand 2 is fixed to a back plate of the fitness device 1.
  • the back plate is attached to the frame structure 10 at the back side 1B.
  • the fitness device 1 may also be used without the stand 2 and for example hung onto a wall.
  • the one or more camera devices 5.1, 5.2, 5.3 are provided for capturing images of the user facing the front side 1A.
  • a first camera device 5.1 may be positioned at the front side 1A in the middle of a portion above the display 11.
  • Additional second and third camera devices 5.2 and 5.3 are positioned on lateral sides so that a display recess for a screen of the display 11 is arranged between optics of the second and third camera devices 5.2 and 5.3.
  • the first, second and third camera devices 5.1, 5.2 and 5.3 may be part of a camera system of the fitness device 1 which is controlled by a software component running on at least one computing device 6 of the fitness device 1.
  • the first camera device 5.1 may be provided for communication, entertainment and/or displaying visual elements, in particular interface elements, depending on a predetermined position of the user.
  • the second and third camera devices 5.2 and 5.3 may provide image data for a motion analysis module of the fitness device (implemented by the computing device 6).
  • the motion analysis module may be provided for analyzing position and posture of a user when performing a fitness exercise in front of the fitness device 1. Accordingly, the different camera devices 5.1, 5.2 and 5.3 as well as the display 11 are coupled to the computing device 6, and in particular to its motion analysis module.
  • the computing device 6 may receive input(s) i including a video or stream to be presented on the display 11.
  • the input i may be already pre-processed for displaying the respective content.
  • the content received as input i may be pre-processed by the computing device 6, for example using artificial intelligence.
  • the input i may also include an instruction signal from a supervisor device with which the fitness device 1 communicates while a user performs an exercise in front of the fitness device 1.
  • the computing device 6 may include or be connected to a contactless user position determination system.
  • Said contactless user position determination system may comprise a calibration module recognizing proportion differences between images captured by the second and third camera devices 5.2 and 5.3.
  • the calibration module may also be configured to detect at which distance and thus position from the fitness device 1 a user is located. This may result in calibrating the motion analysis module with x and y coordinates which may then also be used for defining at least one of an appearance, a size and a position of at least one visual element to be displayed on the display 11.
  • the visual elements displayed may thus be dynamically adapted based on image data provided by the camera system including the second and third camera devices 5.2, 5.3.
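The description derives the user's distance from differences between the images of the two lateral cameras. A standard way to turn the horizontal offset (disparity) of the same body point, as seen by two calibrated, horizontally spaced cameras, into a distance is sketched below; the focal length and baseline values are assumptions:

```python
def distance_from_disparity(x_left_px: float, x_right_px: float,
                            focal_px: float = 800.0,   # assumed focal length in pixels
                            baseline_m: float = 0.6    # assumed camera spacing in meters
                            ) -> float:
    """Estimate the user's distance from the disparity of the same point as seen
    by the second and third camera devices 5.2 and 5.3."""
    disparity = abs(x_left_px - x_right_px)
    if disparity < 1e-6:
        return float("inf")  # no measurable offset: point effectively at infinity
    return focal_px * baseline_m / disparity

print(distance_from_disparity(640.0, 400.0))  # 2.0 (meters) for a 240 px disparity
```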
  • the contactless user position determination system may comprise a laser positioning system 15 for determining a position of the user in front of the display 11 with respect to the fitness device 1 in order to dynamically adapt information and thus in particular visual elements displayed by the display 11.
  • the laser positioning system 15 may in particular allow for securely determining a distance of a user to the front side 1A.
  • a proximity sensor 16 may be part of the fitness device 1 in order to detect a user approaching the fitness device 1.
  • a detected approach may for example result in switching the fitness device 1 from an off-state to an on-state and/or displaying certain interface elements on the display 11 given that the user approaching the fitness device 1 is then in reach for touching the mirror 12.
  • the fitness device 1 further comprises a pyrometric device.
  • the pyrometric device may, for example, include one or more thermal cameras, one or more infrared sensors or one or more other sensors being part of a corresponding contactless temperature sensing unit.
  • the pyrometric device is capable of determining temperature values for body parts of a user in front of the fitness device 1. Temperature values may be measured contactlessly by the pyrometric device for generating corresponding temperature data.
  • the temperature data generated by the pyrometric device is received by the computing device 6 and for example further processed by a machine learning module 60 of the computing device 6.
  • the machine learning module 60 may apply machine learning algorithms for analyzing received sensor data, like the temperature data. For example, a principal component analysis (PCA) or a t-distributed stochastic neighbor embedding (t-SNE) may be used to generate data to be presented on the display 11.
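A sketch of the two named dimensionality-reduction techniques with scikit-learn on synthetic stand-in data; the layout of eight body-region temperature channels is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic stand-in for pyrometric readings: 200 time samples x 8 body regions (°C).
temps = 30.0 + 5.0 * rng.random((200, 8))

pca_2d = PCA(n_components=2).fit_transform(temps)                      # linear projection
tsne_2d = TSNE(n_components=2, perplexity=30.0).fit_transform(temps)   # nonlinear embedding
print(pca_2d.shape, tsne_2d.shape)  # (200, 2) (200, 2) -> ready for display
```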
  • Temperature data from the pyrometric device may also be transmitted via a communication interface of a connectivity module 61 of the computing device 6 to a remote analysis computing device, for example one or more cloud servers.
  • temperatures and their distribution over the different body parts of the user located in front of the fitness device 1 may be determined and visualized. This may for example include temperature distributions for a head, a neck, shoulders, arms, the upper and lower body and the legs of the user.
  • the fitness device 1 may in particular also include at least one of a microphone for capturing sound inputs, a video camera, a speaker 17 to accompany the visuals presented and a connectivity module 61 including at least one communication interface, e.g., for a Wi-Fi and/or Bluetooth connection.
  • the connectivity module may, e.g., allow exchange of data and multimedia content via the Internet and/or other local devices.
  • the fitness device 1 may in particular be capable and configured to transmit to and/or receive signals from at least one other device, for example from at least one other local or remote fitness device also equipped with a connectivity module, a mobile phone and/or a computing device.
  • the fitness device 1 may display exercise-related and in particular fitness content instructing a user P to perform a sequence of exercises as part of a training session. While performing the exercises the performance of the user P may be tracked and mapped against reference data thereby automatically providing feedback to the user P via the display 11 on how well the instructed exercise is carried out.
  • the machine learning module 60 of the computing device 6 and/or machine learning algorithms of the remote analysis computing device may furthermore generate feedback and/or adapt feedback to the user P on the user’s performance while exercise content is displayed at the display 11 of the fitness device 1.
  • the corresponding machine learning algorithms may then also take into account the temperature data generated by the pyrometric device, other biometric data, such as a heart rate of the user P, and/or motion tracking data generated for the user P while performing an exercise.
  • the fitness device 1 may show a virtual trainer/avatar 40 on the display 11 as a part of an exercise-related content to instruct the user P in front of the fitness device 1 how to perform certain exercises of an individual training session.
  • the virtual trainer 40 is part of a visual presentation 4 of exercise-related content.
  • the exercise-related content, in addition to the virtual trainer 40, also includes one or more visual elements 41 visualizing additional instructions to the user and/or biometric data.
  • Biometric data may be contactlessly captured by the fitness device 1 (e.g., using the pyrometric device) or by means of an additional device worn by the user, for example to track a heart rate of the user P while performing an exercise.
  • the fitness device 1 and in particular the machine learning module 60 of the computing device 6 further implements a facial characteristics evaluation module allowing for identifying and tracking one or more facial characteristics of a face of the user P while performing an exercise in front of the fitness device 1.
  • images captured by one or more of the camera devices 5.1, 5.2 and 5.3 are analyzed for detecting and tracking a face of the user P.
  • Markerless face tracking and facial expression capturing may thus for example be implemented allowing for determining micro-expressions on the face of the user P while performing an exercise.
  • These one or more facial characteristics are then used for determining an emotional status of the user P while performing exercises as instructed by the presented exercise-related content, for example by the virtual trainer 40.
  • a captured image of the user P is thus analyzed to identify a face F of the user P and virtual facial markers M.
  • Based on the virtual facial markers M micro-expressions and thus emotions can be assessed and tracked.
  • a likelihood for a certain emotion coinciding with certain (relative) positions and movements of the virtual facial markers M, in particular over time, may be trained by machine learning so that images of the face F of the user P may be associated with a set of pre-defined emotions.
  • the one or more facial characteristics defined by the virtual markers M plus biometric data and motion tracking data of the user P may be used to determine an initial (status quo) emotional status of the user P.
  • additional data relating to the facial characteristics may be captured to determine how and to which degree an emotional status of the user P changes.
  • a set of artificial intelligence processing algorithms may assign scores to a plurality of emotion parameters representative of several different pre-defined emotions.
  • a corresponding set of emotions, namely anger, disgust, fear, happiness, sadness, surprise and neutral, is illustrated in Figure 5. Scores are assigned to these emotions based on the face F of the user P in the captured and analyzed image.
  • different emotion parameters 101 are associated with two different groups of emotions.
  • a first set of emotion parameters is associated with “negative” emotions, in the depicted example of Figure 6 the emotions anger, disgust, fear, sadness and surprise.
  • a second set of emotion parameters, happiness and neutral in Figure 6, is associated with a second group of “positive” emotions.
  • the sets of the different emotion parameters 101 and their assigned scores are provided as emotional raw data to threshold evaluation algorithms 102A, 102B. These threshold evaluation algorithms 102A, 102B combine the scores for the emotion parameters 101 of one group for respectively calculating a combined (group) score.
  • the combined scores for the “positive” and the “negative” emotions are then further assessed in a metric-based decision algorithm 103 in order to provide for an output indicating an emotional status of the user P while performing an exercise.
  • a determined emotional status is then further computationally evaluated in an embodiment of the proposed training system in order to automatically decide on whether and how the exercise-related content to be presented to the user P is to be adapted.
  • different criteria may be evaluated. This may, for example, also include separately determining (sub-)recommendations for each criterion which are then combined to decide on the actual adaption of the exercise-related content. For example, it may be considered whether motion tracking data indicates that the user P correctly or not correctly imitates the instructed exercises. This might speak for the instructed exercise being appropriate or being too difficult for the individual user P, at least currently, and thus for keeping, reducing or increasing at least one of a tempo, weight or number of repetitions.
  • Temperature data or other biometric data such as a measured heart rate may further speak for not changing or changing an intensity level of the instructed exercises.
  • the analyzed facial characteristics and the resulting emotional status may in turn also indicate that the user struggles with the exercise and appears rather stressed (in particular compared to an initial emotional status determined at the beginning of training) or that the user enjoys the exercises.
  • Based on the emotional status of the user P a (sub-)recommendation could thus also be to keep, reduce or increase an intensity level of the current workout.
  • An overall assessment taking into account the motion tracking data, the biometric data and the emotional data may then result in keeping the intensity of the exercises unchanged and thus as originally planned or in decreasing or increasing the intensity level.
  • a decision may then be taken, as depicted by step 202 in Figure 7, after all available raw data has been evaluated in a preceding step 201.
  • In a step 203 a change in the exercise-related content may be triggered.
  • a corresponding adaption of the exercise-related content may take place in real-time in order to adapt the current workout.
  • the system may store whether and how the workout is to be adapted for a subsequent training session for the individual user P.
  • a decision on adapting exercise-related content may result in, for example, changing a presentation of the exercise by the virtual trainer 40, changing a weight the user P should use when performing the exercise, a duration of the exercise, a number of exercises and/or a number of repetitions, and/or a break or rest time for the user P between two subsequent exercises. Any changes in the exercise-related content may then be stored in a memory thereby replacing set i of exercises to be performed by the user in the training session with a new adapted version (step 205).
  • A real-time feedback generation mechanism may thus be implemented by the machine learning module 60.
  • This real-time feedback generation mechanism reacts to emotional, motion tracking and biometric data captured for the user P.
  • the captured data may be used immediately to alter the exercise-related content presented for example by the virtual trainer 40 or may be used at a later point in time.
  • a workout recommendation algorithm as part of the machine learning module 60 may, based on the emotional data, motion tracking data and biometric data, optimize quantitative information relating to the exercises to be performed, such as repetitions, tempo, resting time, and may also suggest alternatives for exercises to be performed in case the captured data indicates that the currently presented exercises are not of an appropriate intensity level.
  • the machine learning module 60 may also allow for influencing a training experience of the user P by adapting the current workout in real-time and also adapting future workouts by learning how the user reacted to the currently presented exercises. This allows for a further optimization in the training for the user P.
  • provision may also be made for changing a presentation of the virtual trainer 40. For example, a tempo with which the virtual trainer 40 performs a demonstrated exercise, facial expressions of the virtual trainer 40 and/or motivational gestures of the virtual trainer 40 may be changed, if applicable, also accompanied by changing tone and/or volume of outputted audible sounds. A corresponding change may result from a deep learning algorithm of the machine learning module 60.
  • Figure 8 shows an example for an automatically triggered adaption of a workout to be presented by the fitness device 1 to the user P based on the emotional status of the user P.
  • the user P is for example instructed to do dumbbell bicep curls with a weight of 12 kg aiming for a target of 15 repetitions.
  • an emotional status of the user P is determined based on the sensor raw data provided by the camera device 5.4 and in particular on analyzed facial characteristics M of the captured face F of the user P.
  • the computing device 6 determines a combined score of 68% for negative emotions, hence a probability of 68% that the user is struggling with the exercise performed.
  • Respective probabilities/emotional status results 1020A, 1020B are determined while the exercise is still ongoing, for example at the 8th repetition of the exercise.
  • In a step 103 the metric-based decision is reached based on the emotional status results 1020A, 1020B resulting in a metric decision feedback whether the exercise should be continued as originally instructed or changed.
  • the provided (raw) data on the emotional status results in a finding 103-1 that the exercise is too hard for the user P.
  • a metric decision recommendation 103-2 of the process therefore selects one of several (here three) possible action options 104 for the exercise instructions given to the user P. Whereas 10% (out of 100%) of the evaluated data speak for keeping the exercise as started and thus for maintaining the intensity level of the exercise, 30% speak for increasing the intensity level and 60% speak for decreasing the intensity level (see the sketch after this walk-through).
  • the algorithms executed on the computing device 6 thus trigger an adaptation of the exercise-related content presented to the user P by the fitness device 1 causing (immediate or later) presentation of instructions 105 to the user P to reduce the weight from 12 kg to 8 kg.
  • the instruction 105 might thus for example indicate a corresponding reduction in the weight of the dumbbells for the rest of the current set or the next set in the ongoing training session or for a subsequent training session.
  • Corresponding instructions 105 may be presented visually to the user P via the display 11 and/or audibly via the speaker 17.
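Putting the Figure 8 walk-through into code: the option shares (10/30/60%) come from the example above, while the argmax selection rule and the option names are illustrative assumptions:

```python
def metric_decision(option_weights: dict[str, float]) -> str:
    """Select the action option supported by the largest share of the evaluated data."""
    return max(option_weights, key=option_weights.get)

options = {"keep_intensity": 0.10, "increase_intensity": 0.30, "decrease_intensity": 0.60}
action = metric_decision(options)

if action == "decrease_intensity":
    # Immediate or later presentation of instruction 105 to the user P.
    print("Reduce the dumbbell weight from 12 kg to 8 kg")
```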
  • the fitness device 1 may also allow a user to override or choose a level of adaption and guidance by the virtual trainer 40.
  • the computing device 6 may also implement a readjustment mechanism based on a user rating at the end of a training session. Thereby, the system can optimize initial assumptions made for the individual user P. In particular, the user P can rate how intense and likable the workout and the virtual trainer 40 were so that the system may (re)optimize a corresponding configuration for the exercise-related content for the next training session for the user P.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A training system, comprising at least one fitness device (1) and at least one computing device (6), wherein the at least one fitness device (1) comprises at least one visual presentation module (11) for presenting exercise-related content to a user (P) of the fitness device (1), and at least one camera device (5.1-5.3) configured to capture images including a face (F) of the user (P) while the at least one visual presentation module (11) presents the exercise-related content. The at least one computing device (6) comprises at least one processor and is configured to receive the images, identify and track one or more facial characteristics (M) of the face (F) of the user (P) while the at least one visual presentation module (11) presents the exercise-related content, and determine, at least once while the at least one visual presentation module (11) presents the exercise-related content, an emotional status of the user (P) based at least in part on the one or more facial characteristics. Further, the at least one computing device (6) is configured to adapt the exercise-related content based at least in part on the emotional status.

Description

TRAINING SYSTEM AND METHOD WITH EMOTION ASSESSMENT
TECHNICAL FIELD
[0001] The present solution in particular relates to a training system comprising at least one fitness device.
BACKGROUND
[0002] Fitness devices which are to be used in gyms or at home and which include at least one display for displaying exercise-related content to a user are becoming more and more popular. The exercise-related content may be a video retrieved from a local memory of the fitness device or from a memory of a computing device connected to the fitness device via a wired or wireless network. Alternatively or additionally, exercise-related content may be provided as a stream, for example via the Internet. Based on the exercise-related content the user is instructed how to perform different fitness exercises. In order to provide a new and motivating experience to the user and to provide the user with sufficient details on how to carry out a specific exercise properly, at least one display of the fitness device may be rather large and held by a frame structure of the fitness device having more than 1 meter in length and more than 40 cm in width.
[0003] In this context fitness devices are also known which include a semi-reflective mirror arranged in front of the at least one display. Such a mirror may reflect incident light from a user facing a front surface of the mirror to present a reflected image of the user. Further, the mirror may transmit incident light from a rear surface of the mirror so that exercise content displayed via the at least one display is superimposed on at least a portion of the reflected image of the user. Thereby, the user may see his/her reflected image in combination with a visual element, like an image of a trainer or avatar in the exercise-related content, so that the user may easily notice whether a specific exercise is performed properly or not.
[0004] In addition or as an alternative to a mirror, a fitness device may be equipped with a camera device configured to capture the user while exercise-related content is displayed via the at least one display in order to provide electronic feedback on how well the user is following the instructions of the exercise content and performing an exercise displayed to the user. The user may also receive visual and/or acoustic feedback on her/his current training.
[0005] It is an object of the present solution to provide for a device, in particular a fitness device, which may provide for an improved user experience, for example, by an enhanced usability and/or an improved user interface for controlling operations at the device.
SUMMARY
[0006] The proposed solution in particular relates to a training system comprising at least one fitness device and at least one computing device. The at least one fitness device comprises at least one visual presentation module for presenting exercise-related content to a user of the fitness device. The at least one fitness device further comprises at least one camera device configured to capture images including a face of the user while the at least one visual presentation module presents the exercise-related content to the user. The at least one camera device may also be referred to as a visual recording module or as an image capturing unit in the following, thereby emphasizing that the camera device may also be able to capture non-visible image data. For example, the camera device may capture infrared light instead of or in addition to visible light. The at least one computing device of the training system comprises at least one processor and is configured to receive the captured images, to identify and track one or more facial characteristics of the face of the user while the at least one visual presentation module presents the exercise-related content, and to determine, at least once while the at least one visual presentation module presents the exercise-related content, an emotional status of the user based at least in part on the one or more facial characteristics. The at least one computing device may then be further configured to adapt the exercise-related content based at least in part on the emotional status.
[0007] According to an exemplary embodiment, a proposed training system may thus improve user experience and/or optimize a current workout and/or a future workout based on a determined emotional status of the user, taking into account one or more facial characteristics. In this context the proposed training system allows for analyzing a face of a user in order to electronically evaluate, for example by an artificial intelligence process and thus by machine learning algorithms, whether the user likes or dislikes the presented exercise-related content and/or struggles with performing an exercise following the instructions presented in the exercise-related content. The proposed training system may allow for a real-time adaption and/or for a future adaption of a training session instructed by the exercise-related content. Generally, the emotional status may be the sole criterion or one of at least two criteria for deciding on an adaption of the exercise-related content. In an exemplary embodiment, one or more additional criteria may be taken into account for deciding whether and how the exercise-related content to be presented to the user is to be adapted.
[0008] In an exemplary embodiment, the at least one computing device is further configured to determine an initial emotional status based on the one or more facial characteristics of a user at a first point in time. The at least one computing device may thus be configured to determine a status quo emotional status before, at or shortly after the start of a fitness exercise and in particular before, at or shortly after the start of a training session comprising one or more consecutive fitness exercises presented in the exercise-related content. At a subsequent and thus later second point in time, while the at least one visual presentation module presents the exercise-related content, at least one additional emotional status may be determined based on the one or more facial characteristics. The at least one computing device of this example is therefore configured to determine an emotional status at least twice, firstly for calibrating the system and secondly to assess how performing an exercise instructed by the exercise-related content affects the user of the fitness device. Accordingly, the at least one computing device may be configured to adapt the exercise-related content based at least in part on the at least one additional emotional status (determined after the initial emotional status). For example, the at least one computing device may determine whether the at least one additional emotional status differs from the initial emotional status by more than a threshold and adapt the exercise-related content if the at least one additional emotional status differs from the initial emotional status by more than the threshold. If the at least one additional emotional status does not differ from the initial emotional status by more than the threshold, the exercise-related content is not adapted and thus not altered, assuming that the currently presented exercise-related content is appropriate for the individual user and may thus be maintained for the current and/or also for a future training session.
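A minimal sketch of this two-point comparison, under the illustrative assumption that the emotional status can be reduced to a single scalar value, could look as follows (threshold value chosen arbitrarily):

def should_adapt(initial_status: float, additional_status: float,
                 threshold: float = 0.25) -> bool:
    # Adapt only if the later status deviates from the initial
    # ("status quo") status by more than the threshold.
    return abs(additional_status - initial_status) > threshold

# should_adapt(0.6, 0.2) -> True: the user's state changed noticeably.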
[0009] In an exemplary embodiment determining the one or more facial characteristics comprises assigning a score to at least two emotion parameters representative of at least two different pre-defined emotions. The at least one computing device may for example include a facial characteristics evaluation module, in particular a facial characteristics evaluation module including a machine-learned facial characteristics emotion algorithm, capable of associating certain facial characteristics and in particular micro-expressions of a face of a user with different kinds of emotions. Based on the tracked one or more facial characteristics the at least one computing device may thus be able to assign scores indicating a probability that the current expression of the face of the user expresses a certain emotion. Pre-defined emotions for which scores are to be assigned may for example be disgust, fear, surprise, or happiness. Pre-defining different types of emotions and assigning scores to them based on the one or more facial characteristics allows for categorizing collected data on the one or more facial characteristics and for providing organized data sets based on which an algorithm may decide on an emotional status of the user. This also facilitates an algorithm-driven decision whether an individual exercise for the particular user should be adapted or not in order to keep the user motivated and/or the proposed training effective for the individual user.
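Assigning such scores may, for instance, amount to turning the raw outputs of a hypothetical facial-expression model into per-emotion probabilities; the softmax step below is one common, illustrative choice, and the set of seven emotions follows the example listed in paragraph [0063] below:

import numpy as np

EMOTIONS = ("anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral")

def emotion_scores(logits: np.ndarray) -> dict:
    # Numerically stable softmax over model outputs for the pre-defined emotions.
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return dict(zip(EMOTIONS, probs.tolist()))

# e.g. emotion_scores(np.array([0.1, 0.0, 0.3, 2.0, 0.2, 0.4, 1.5]))
# yields the highest scores for "happiness" and "neutral".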
[0010] For example, the one or more emotion parameters to which scores are assigned are associated with at least one first group of emotions and with at least one second group of emotions. Scores are assigned to the emotion parameters of each of the first and second groups at least once during the presentation of the exercise-related content. A first group may relate to negative emotions/feelings indicating that the user dislikes and/or struggles with an exercise to be performed in front of and/or using the fitness device following visually and, if applicable, additionally audibly presented instructions of the exercise-related content. A second group may then relate to positive emotions/feelings indicating that the user does not struggle or even enjoys an exercise to be performed. Depending on the scores of the parameters of the at least two different groups, an evaluation algorithm may have a solid data basis for deciding whether the exercise-related content must be adapted or not.
[0011] Determining the emotional status based on the one or more facial characteristics in one embodiment comprises combining the scores assigned to the at least two emotion parameters. This may involve using the scores in an evaluation algorithm for assessing whether the emotional status indicates that the user is satisfied with the currently presented exercise-related content and the exercise the user had to perform following the instructions in the exercise-related content and/or whether the level of intensity for the individual user performing the exercise following the instructions of the exercise-related content is appropriate. Combining scores assigned to at least two emotion parameters may in particular involve separately combining the scores of each group of emotions and respectively generating a combined score for each group of emotions. In an exemplary embodiment, identifying and tracking the one or more facial characteristics may result in assigning scores to a plurality of emotion parameters associated with two different groups of emotions. Whereas high scores (i.e., the scores individually or combined exceeding at least one threshold value) for emotion parameters of the first group may speak for the necessity to adapt the exercise-related content and to change a workout for an individual user using the fitness device, high scores for emotion parameters belonging to another, second group may rather speak for keeping the exercise-related content as is.
[0012] In an exemplary embodiment the at least one computing device may be configured to apply a metric-based evaluation function using the combined scores for each group of emotions for determining the emotional status.
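A sketch of such grouping and a simple metric-based evaluation function may read as follows; the two groups follow the example of Figure 6 (paragraph [0064] below), while the mean-based combination and the direct comparison are illustrative choices, not prescribed by the solution:

NEGATIVE = ("anger", "disgust", "fear", "sadness", "surprise")
POSITIVE = ("happiness", "neutral")

def combined_group_scores(scores):
    # Separately combine the scores of each group of emotions.
    neg = sum(scores.get(e, 0.0) for e in NEGATIVE) / len(NEGATIVE)
    pos = sum(scores.get(e, 0.0) for e in POSITIVE) / len(POSITIVE)
    return neg, pos

def metric_based_status(scores):
    # Metric-based evaluation: compare the combined group scores.
    neg, pos = combined_group_scores(scores)
    return "struggling" if neg > pos else "comfortable"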
[0013] Adapting the exercise-related content (taking into account the user’s emotional response to the currently performed exercise as instructed by the exercise-related content) may for example include adapting at least one of a type of an exercise presented to the user in the exercise-related content, a number of repetitions for an exercise presented to the user in the exercise-related content, a tempo of an exercise presented to the user in the exercise-related content, and a weight to be used for an exercise presented to the user in the exercise-related content. Adapting a type of exercise may, for example, include switching from one exercise to another to further individualize a workout presented to the individual user by the exercise-related content. This may also include proposing another exercise for the next training session for the user using the fitness device. Adapting a number of repetitions of an exercise may include adapting the number of times a user is instructed to repeat an exercise presented to the user in the exercise-related content.
[0014] Additionally or alternatively, adapting the exercise-related content may include adapting at least one visual element presented as a part of the exercise-related content on a display of the fitness device. Such a visual element may for example relate to at least one of a type of an exercise presented to the user in the exercise-related content, a number of repetitions and/or a tempo of an exercise presented to the user in the exercise-related content. In response to a determined emotional status the at least one computing device may thus be configured to adapt corresponding visual information for the user indicating an increase or decrease of the intensity of a workout to be performed.
[0015] In an exemplary embodiment the exercise-related content includes a virtual trainer / avatar which is presented on a display of the fitness device. Such a virtual trainer may provide instructions to the user how to perform an exercise, for example by demonstrating the exercise to be performed. In addition or in the alternative to presenting a virtual trainer, presenting the exercise-related content may include outputting audible sounds via at least one speaker of the fitness device. For example, the fitness device may be configured to output music and/or audible instructions relating to an exercise to be performed. The audible instructions may also be associated with a virtual trainer presented on a display of the fitness device. Based at least in part on the emotional status determined during presentation of the exercise-related content, a presentation of the virtual trainer and/or the audible sounds may be changed, resulting in a corresponding adaption of the exercise-related content. For example, a tempo with which the virtual trainer performs a demonstrated exercise, facial expressions of the virtual trainer and/or (motivational) gestures of the virtual trainer may be altered based on the determined emotional status of the user. Also the tone and/or volume of audible sounds may be altered for individually adapting the exercise-related content. Thereby, a (further) individualized user experience and/or (further) individualized workout session for a user using a corresponding embodiment of the proposed training system may be provided.
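As a sketch, such an adaptation could be expressed as a mapping from the determined status to presentation parameters of the virtual trainer; the parameter names used here are assumptions for illustration only:

def adapt_trainer_presentation(presentation, status):
    # Slow the demonstration down, switch to motivational gestures and
    # raise the audio volume slightly when the user appears to struggle.
    if status == "struggling":
        presentation["demo_tempo"] *= 0.8
        presentation["gestures"] = "motivational"
        presentation["audio_volume"] = min(1.0, presentation["audio_volume"] + 0.1)
    return presentation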
[0016] Generally, the exercise-related content may be adapted in real-time. Accordingly, the currently presented content may be adapted and real-time feedback is thereby provided at least in part based on the emotional status of the user. Additionally or alternatively, an adapted version of the exercise-related content for later presentation may be generated. This may include storing the adapted version of the exercise-related content and/or at least one parameter and/or indicator resulting in the intended adaption of the exercise-related content for later presentation in a future training session. For example, if it was determined, based on micro-expressions of the user while performing instructed exercises, that the user did not struggle with the exercises performed, an evaluation algorithm taking into account the emotional status may decide that the intensity level for the individual user is to be increased in a next training session. The emotional status thus provides a further criterion for intensifying a workout, i.e., for individually adapting a user-specific training schedule and thus the corresponding exercise-related content by which the user is instructed, so that a next workout does not get too hard or too easy for the user.
[0017] Determining the emotional status of the user may also be based at least in part on biometric data of the user and/or motion tracking data for the user. The determined emotional status may thus also take into account additional sensor data relating to biometric functions of the user and/or the user's motions while performing an exercise instructed by the exercise-related content. For example, the user's heart rate, at least one temperature of the user's body parts and/or how well the user performs an exercise (compared to a reference performance) may be taken into account when determining the user’s emotional status.
[0018] Additionally or alternatively, adapting the exercise-related content may also be based at least in part on biometric data of the user and/or motion tracking data for the user. Accordingly, a decision whether and how the exercise-related content is to be adapted does not only take into account the emotional status based on one or more facial characteristics, and thus for example micro-expressions on the face of the user while performing an exercise, but also additional data.
[0019] Generally, at least one computing device additionally using biometric data of the user and/or motion tracking data for the user may be configured to generate emotion data associated with the determined emotional status and to evaluate the emotion data and at least one of the biometric data and the motion tracking data using machine learning for deciding on an adaption of the exercise-related content. The at least one computing device may thus be configured to evaluate the respectively measured raw data for deciding on and determining which kind and/or degree of adaption of the exercise-related content is to be triggered.
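One way to picture this raw-data fusion is a single decision function over emotion, biometric and motion-tracking inputs; the hand-set linear weighting below is an illustrative stand-in for the machine-learned evaluation, and all weights and thresholds are assumptions:

def decide_adaption(negative_score: float, heart_rate_ratio: float,
                    form_error: float) -> str:
    # negative_score: combined "negative" emotion score, 0..1
    # heart_rate_ratio: measured heart rate / user-specific target rate
    # form_error: deviation of the tracked motion from the reference, 0..1
    load = 0.5 * negative_score + 0.3 * heart_rate_ratio + 0.2 * form_error
    if load > 0.8:
        return "decrease intensity"
    if load < 0.4:
        return "increase intensity"
    return "keep intensity"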
[0020] In exemplary embodiments the at least one computing device is configured to identify and track the one or more facial characteristics by using markerless face tracking. The at least one computing device may thus also implement an algorithm for markerless facial expression capturing based on the received images. Facial expression capturing allows for evaluating the emotional status of the user while performing an exercise instructed by the exercise-related content presented to the user.
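Markerless tracking of this kind is available in off-the-shelf libraries; the following sketch uses the publicly available MediaPipe Face Mesh model purely as an illustrative choice, as the solution does not prescribe any particular library:

import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def facial_landmarks(bgr_frame):
    # MediaPipe expects RGB input; camera frames are typically BGR (OpenCV).
    result = face_mesh.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face detected in this frame
    # Normalized (x, y, z) coordinates of the tracked "virtual markers".
    return [(p.x, p.y, p.z) for p in result.multi_face_landmarks[0].landmark]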
[0021] The at least one computing device may be part of the fitness device of the training system. Nevertheless, the at least one computing device of the fitness device does not mandatorily need to carry out most of the necessary calculations, e.g., for determining the emotional status. For example, the at least one computing device may be part of the fitness device and may be configured (a) to transmit emotional data (and, if applicable, at least one of biometric data and motion tracking data) to a remote analysis server of the training system implementing machine learning algorithms and (b) to receive an analysis result, in response to transmitting the data to the remote analysis server, from the remote analysis server indicating whether and how the exercise-related content is to be adapted. By using a remote analysis server for the decision-making process, the at least one computing device in the fitness device itself may need just moderate processing power, given that most of the computational load for evaluating the provided data and deciding on whether and how the exercise-related content for the individual user is to be adapted is processed by the at least one remote analysis server.
[0022] Alternatively, the at least one computing device may be located remote from the fitness device. A remote computing device can be connected to the fitness device via a (wired or wireless) local network, including a short-range connection (like NFC or Bluetooth) or a long-range network (like Wi-Fi). In an exemplary embodiment, the computing device may be connected to the fitness device via the Internet. Generally, the computing device may then include at least one server, e.g., at least one cloud server.
[0023] In an exemplary embodiment, the fitness device further comprises a mirror element (“mirror” in the following) which reflects incident light from the user facing a front surface of the mirror element to present a reflected image of the user and which transmits incident light from the rear surface of the mirror. The at least one display may then be arranged behind the mirror for displaying content superimposed on at least a portion of the reflected image of the user. Content and/or a visual element may therefore be displayed in addition to the reflected image of the user - not meaning that a visual element or the content is necessarily displayed completely superimposed on the reflected image. The content and/or the visual element may be displayed side-by-side with the reflected image of the user or just partially superimposed on the reflected image (e.g., depending on the size of the reflected image).
[0024] Generally, a visual element may be just a “passive” element, for example relating to parts of a video or a stream instructing the user how a fitness exercise is to be performed or indicating to the user parameters to be taken into account when performing a fitness exercise, such as a number of repetitions or a threshold pulse which should not be exceeded. Additionally or alternatively, a displayed visual element may also relate to biometric data of the user. Further, also additionally or alternatively, a visual element may be “active” and may therefore define an interface element of a user interface of the fitness device. In such an embodiment, the at least one visual element may thus define a region at which a user touching a touch-sensitive portion of a front surface of the fitness device generates an actuation signal further processed by a computing device coupled to the at least one display. Touching a region of the front surface where a visual element is displayed may thus trigger an operation event, e.g., in particular including presenting a new or altered visual element.
[0025] For providing the fitness device with a touch-sensitive portion, a capacitive field may be applied to at least a portion of the front surface. The at least one touch-sensitive portion may form part of a user interface of the fitness device at a front side of the fitness device.
[0026] In an exemplary embodiment, the at least one camera device (image capturing unit / visual recording device) may be further configured to capture at least one image of the user while the user performs an exercise in front of the display (for example while exercise-related content is displayed which instructs the user how to perform an exercise). The at least one computing device may be configured to generate feedback to the user resulting from a comparison, by the at least one computing device, of the image data with reference data. The reference data may be stored in a memory of the at least one computing device and may relate to a completely correct performance of a displayed fitness exercise. The feedback to the user may be visual and/or audible.
[0027] In an exemplary embodiment, the fitness device may further comprise a contactless user position determination system configured to determine a position of the user in front of the display in a contactless manner. The contactless user position determination system may, for example, be configured to generate position data indicating a determined position of the user (for example also including a posture of the user) while the user performs an exercise. The at least one computing device may be configured to generate the above-mentioned feedback additionally based on such position data. Based on the position data the at least one computing device may for example evaluate whether the user correctly performs an exercise the user is instructed to do by the displayed content. Based on the position data the at least one computing device may also be configured to adapt an appearance of at least one visual element displayed by the display. This may include changing at least one of a size, a shading, a color, a hue and a brightness of the at least one visual element displayed at the device and indicating how well the user is performing an exercise in front of the display. One option for effectively determining user position in a contactless manner may include using a laser positioning system.
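A minimal sketch of such a position-dependent adaptation, assuming the contactless system yields a distance in meters (the scaling rule and the reference distance are illustrative assumptions):

def scale_visual_element(base_size_px: int, distance_m: float,
                         reference_distance_m: float = 2.0) -> int:
    # Enlarge the element as the user steps further back so it stays readable.
    return int(base_size_px * max(1.0, distance_m / reference_distance_m))

# scale_visual_element(60, 3.0) -> 90: 50 % larger at 3 m than at <= 2 m.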
[0028] In an exemplary embodiment, the fitness device may further comprise at least one communication interface for communicating with at least one supervisor device of the training system. The at least one computing device may be configured to transmit data to the at least one supervisor device via the at least one communication interface. This may, for example, include transmitting the biometric data wirelessly via the communication interface to the remote supervisor device. The remote supervisor device may then, for example, be used by a human trainer, mentor or therapist monitoring the performance of the user in front of the device.
[0029] Generally, the fitness device may be a stand-alone fitness device configured to display exercise-related content to the user. In an exemplary embodiment, the fitness device may also be configured to allow for non-fitness related activities, such as video conferencing.
[0030] The proposed solution also relates to a method for automatically adapting exercise-related content to be presented to a user using a fitness device. The proposed method comprises presenting exercise-related content to the user via the fitness device; capturing images including a face of the user while exercise-related content is presented to the user (and the user is performing an exercise following instructions in the exercise-related content); and identifying and tracking one or more facial characteristics of the face of the user in the images while the exercise-related content is presented to the user. The proposed method further comprises determining, at least once while the exercise-related content is presented to the user, an emotional status of the user based at least in part on the one or more facial characteristics, and adapting the exercise-related content based at least in part on the emotional status.
[0031] Embodiments of the proposed method may in particular relate to operating an embodiment of a proposed training system. Accordingly, features and advantages mentioned above and below in connection with an embodiment of a proposed training system shall also apply to embodiments of the proposed method and vice versa.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Figure 1 is a perspective view of a fitness device of an embodiment of the proposed training system.
[0033] Figure 2 is a front view of the fitness device of Figure 1 showing additional components.
[0034] Figure 3 is a perspective view of the fitness device of Figures 1 and 2 in operation, with a user performing a fitness exercise in front of the fitness device.
[0035] Figure 4 shows the fitness device with another user performing another exercise in front of the fitness device.
[0036] Figure 5 shows an exemplary image captured by a camera device of the fitness device, additionally illustrating several identified facial characteristics of the user and scores assigned to several pre-defined emotions based on the facial characteristics.
[0037] Figure 6 shows a flowchart for a metric-based decision-making process according to an embodiment of the proposed solution taking into account assigned scores for different pre-defined emotions and corresponding parameters for deciding on an emotional status of the user using a fitness device of Figures 1 to 4.
[0038] Figure 7 shows a flowchart illustrating an algorithm-based decision-making process implemented in an embodiment of a proposed training system for deciding whether and how exercise-related content, and thus an individual workout for a user, is to be adapted in real-time and for a future training session.
[0039] Figure 8 shows a further flowchart illustrating a metric-based decision-making process for a certain exercise, taking into account a determined emotional status of the user during a performance of the exercise.
DETAILED DESCRIPTION
[0040] FIG 1 illustrates an embodiment of a fitness device 1 of a proposed training system (sometimes also called a “smart mirror” or an “interactive screen”). The fitness device 1 of the embodiment of FIG 1 may be configured as a stand-alone fitness device to be used in a gym or at home, for example for workouts, physiotherapy and physical rehabilitation. The fitness device 1 comprises a frame assembly including a frame structure 10, a visual presentation module comprising a display 11 and a mirror 12 (the display 11 and the mirror 12 defining a display assembly of the fitness device 1). The frame structure 10 defines a continuous frame surrounding the mirror 12. The mirror 12 is thus held within the frame of the frame structure 10. The display 11 is held behind the mirror 12. The display 11 includes a screen for displaying exercise-related content to a user standing in front of the fitness device 1. The frame assembly of the fitness device 1 defines a front side 1A and a back side 1B. The exercise content displayed via the display 11 may hence be watched by a user facing the front side 1A.
[0041] The mirror 12 may reflect incident light from the user facing a front surface of the mirror 12 to present a reflected image of the user. The mirror 12 may further transmit incident light from the rear surface of the mirror 12 so that exercise content displayed via the display 11 may be superimposed on at least a portion of the reflected image of the user. By superimposing exercise content on at least a portion of the reflected image of the user, the fitness device 1 may provide instant feedback on a performance of the user when imitating fitness exercises based on the displayed exercise content.
[0042] At least one camera device 5.1, 5.2, 5.3 is part of the fitness device 1 to capture images of the user during a fitness exercise in order to electronically assess an emotional status of the user while performing the fitness exercise. In addition, the at least one camera device 5.1, 5.2, 5.3 may capture images for providing feedback on the user’s performance, i.e., on whether and, e.g., to which extent the user imitates a certain exercise properly.
[0043] The fitness device 1 as shown in FIG 1 includes the display 11 having a screen with a diagonal of at least 34 inches, for example more than 40 inches. Accordingly, the frame structure 10 defines a surface area at the front side of more than 0.5 m2. For example, a substantially rectangular surface at the front side 1A measures more than 1.5 m in height and more than 0.4 m in width.
[0044] The fitness device 1 of FIG 1 may rest on a floor via at least two different components. A first component is provided by a bottom part 100 of the frame structure 10. The bottom part 100 allows the frame structure 10 to rest on the floor and thus provides a support directly below the display 11. In addition, a stand 2 is provided which allows the frame structure 10 to be positioned inclined to the vertical at the floor and to nevertheless be held in a stable position. The stand 2 extends at an angle to the back side 1B of the frame structure 10 so that a base portion of the stand 2 (formed by a base member 21) is configured to rest on the floor behind the bottom part 100 at a specified distance from the bottom part 100.
[0045] As further illustrated by FIG 1, the stand 2 of this embodiment is designed as a rectangular frame including a base member 21, a crossbar 22 and two parallel lateral bars 20.1, 20.2. The lateral bars 20.1, 20.2 are connected to each other at a first (upper) end via the crossbar 22 and at a second (lower) end via the base member 21. At the crossbar 22 the stand 2 is fixed to a back plate of the fitness device 1. The back plate is attached to the frame structure 10 at the back side 1B.
[0046] The fitness device 1 may also be used without the stand 2 and for example hung onto a wall.
[0047] As can be seen from the front view of FIG 2, the one or more camera devices 5.1, 5.2, 5.3 are provided for capturing images of the user facing the front side 1A. For example, a first camera device 5.1 may be positioned at the front side 1A in the middle of a portion above the display 11. Additional second and third camera devices 5.2 and 5.3 are positioned on lateral sides so that a display recess for a screen of the display 11 is arranged between optics of the second and third camera devices 5.2 and 5.3.
[0048] The first, second and third camera devices 5.1, 5.2 and 5.3 may be part of a camera system of the fitness device 1 which is controlled by a software component running on at least one computing device 6 of the fitness device 1. The first camera device 5.1 may be provided for communication, entertainment and/or displaying visual elements, in particular interface elements, depending on a predetermined position of the user. The second and third camera devices 5.2 and 5.3 may provide image data for a motion analysis module of the fitness device 1 (implemented by the computing device 6). The motion analysis module may be provided for analyzing position and posture of a user when performing a fitness exercise in front of the fitness device 1. Accordingly, the different camera devices 5.1, 5.2 and 5.3 as well as the display 11 are coupled to the computing device 6, and in particular its motion analysis module.
[0049] In addition, the computing device 6 may receive input(s) i including a video or stream to be presented on the display 11. The input i may already be pre-processed for displaying the respective content. Alternatively, the content received as input i may be pre-processed by the computing device 6, for example using artificial intelligence. The input i may also include an instruction signal from a supervisor device with which the fitness device 1 communicates while a user performs an exercise in front of the fitness device 1.
[0050] In addition, the computing device 6 may include or be connected to a contactless user position determination system. Said contactless user position determination system may comprise a calibration module recognizing proportion differences between images captured by the second and third camera devices 5.2 and 5.3. Thereby, the calibration module may also be configured to detect at which distance, and thus position, from the fitness device 1 a user is located. This may result in calibrating the motion analysis module with x and y coordinates which may then also be used for defining at least one of an appearance, a size and a position of at least one visual element to be displayed on the display 11. Hence, displayed visual elements may be dynamically adapted based on image data provided by the camera system including the second and third camera devices 5.2, 5.3. For example, depending on the distance of a user to the fitness device 1, a displayed visual element may be augmented and/or displayed with increased brightness. Additionally or alternatively, the contactless user position determination system of the fitness device may comprise a laser positioning system for determining a position of a user in front of the display 11 with respect to the fitness device 1 in order to dynamically adapt information, and thus in particular visual elements, displayed by the display 11.
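The proportion difference between the two laterally offset views can be related to distance; under a simple pinhole-stereo assumption (an illustration of the idea, not the claimed calibration procedure) this reduces to the classic disparity relation:

def distance_from_disparity(focal_length_px: float, baseline_m: float,
                            disparity_px: float) -> float:
    # Pinhole stereo: depth = focal length * camera baseline / disparity.
    return focal_length_px * baseline_m / max(disparity_px, 1e-6)

# distance_from_disparity(1000.0, 0.6, 240.0) -> 2.5 (meters)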
[0051] In addition or as an alternative to the second and third camera devices 5.2 and 5.3, the contactless user position determination system may comprise a laser positioning system 15 for determining a position of the user in front of the fitness device 1. The laser positioning system 15 may in particular allow for securely determining a distance of a user to the front side 1A.
[0052] In addition, a proximity sensor 16 may be part of the fitness device 1 in order to detect a user approaching the fitness device 1. A detected approach may for example result in switching the fitness device 1 from an off-state to an on-state and/or in displaying certain interface elements on the display 11, given that the user approaching the fitness device 1 is then within reach for touching the mirror 12.
[0053] The fitness device 1 further comprises a pyrometric device. The pyrometric device may, for example, include one or more thermal cameras, one or more infrared sensors or one or more other sensors being part of a corresponding contactless temperature sensing unit. The pyrometric device is capable of determining temperature values for body parts of a user in front of the fitness device 1. Temperature values may be measured contactlessly by the pyrometric device for generating corresponding temperature data. The temperature data generated by the pyrometric device is received by the computing device 6 and for example further processed by a machine learning module 60 of the computing device 6. The machine learning module 60 may apply machine learning algorithms for analyzing received sensor data, like the temperature data. For example, a principal component analysis (PCA) or a t-distributed stochastic neighbor embedding (t-SNE) may be used to generate data to be presented on the display 11.
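As an illustration of the PCA option mentioned above (the body regions follow paragraph [0055] below; the temperature values are made up for this sketch):

import numpy as np
from sklearn.decomposition import PCA

# Rows: samples over time; columns: head, neck, shoulders, arms,
# upper body, lower body, legs (degrees Celsius).
temperature_samples = np.array([
    [34.1, 33.8, 33.5, 33.0, 34.0, 33.2, 32.8],
    [34.4, 34.0, 33.9, 33.6, 34.5, 33.8, 33.1],
    [34.9, 34.6, 34.4, 34.2, 35.0, 34.3, 33.7],
])
# Project the 7-dimensional readings onto 2 principal components,
# e.g., as a basis for data to be presented on the display 11.
components = PCA(n_components=2).fit_transform(temperature_samples)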
[0054] Temperature data from the pyrometric device may also be transmitted via a communication interface of a connectivity module 61 of the computing device 6 to a remote analysis computing device, for example one or more cloud servers.
[0055] Based on analysis results provided by the machine learning module 60 or via the remote analysis computing device temperatures and their distribution over the different body parts of the user located in front of the fitness device 1 may be determined and visualized. This may for example include temperature distributions for a head, a neck, shoulders, arms, the upper and lower body and the legs of the user.
[0056] The fitness device 1 may in particular also include at least one of a microphone for capturing sound inputs, a video camera, a speaker 17 to accompany the presented visuals and a connectivity module 61 including at least one communication interface, e.g., for a Wi-Fi and/or Bluetooth connection. The connectivity module may, e.g., allow the exchange of data and multimedia content via the Internet and/or with other local devices. Based on the connectivity module the fitness device 1 may in particular be capable of and configured to transmit signals to and/or receive signals from at least one other device, for example from at least one other local or remote fitness device also equipped with a connectivity module, a mobile phone and/or a computing device.
[0057] The fitness device 1 may display exercise-related and in particular fitness content instructing a user P to perform a sequence of exercises as part of a training session. While performing the exercises, the performance of the user P may be tracked and mapped against reference data, thereby automatically providing feedback to the user P via the display 11 on how well the instructed exercise is carried out.
[0058] The machine learning module 60 of the computing device 6 and/or machine learning algorithms of the remote analysis computing device may furthermore generate feedback and/or adapt feedback to the user P on the user’s performance while exercise content is displayed on the display 11 of the fitness device 1. The corresponding machine learning algorithms may then also take into account the temperature data generated by the pyrometric device, other biometric data, such as a heart rate of the user P, and/or motion tracking data generated for the user P while performing an exercise.
[0059] As illustrated in Figure 3, the fitness device 1 may show a virtual trainer/avatar 40 on the display 11 as a part of an exercise-related content to instruct the user P in front of the fitness device 1 how to perform certain exercises of an individual training session. The virtual trainer 40 is part of a visual presentation 4 of exercise-related content. The exercise-related content, in addition to the virtual trainer 40, also includes one or more visual elements 41 visualizing additional instructions to the user and/or biometric data. Biometric data may be contactlessly captured by the fitness device 1 (e.g., using the pyrometric device) or by means of an additional device worn by the user, for example to track a heart rate of the user P while performing an exercise.
[0060] In the present case the fitness device 1, and in particular the machine learning module 60 of the computing device 6, further implements a facial characteristics evaluation module allowing for identifying and tracking one or more facial characteristics of a face of the user P while performing an exercise in front of the fitness device 1. Accordingly, images captured by one or more of the camera devices 5.1, 5.2 and 5.3 are analyzed for detecting and tracking a face of the user P. Markerless face tracking and facial expression capturing may thus, for example, be implemented, allowing for determining micro-expressions on the face of the user P while performing an exercise. These one or more facial characteristics are then used for determining an emotional status of the user P while performing exercises as instructed by the presented exercise-related content, for example by the virtual trainer 40.
[0061] Based on the determined emotional status - in addition to biometric data and/or motion tracking data for the user P - it may automatically be decided, in particular in real-time, whether presented fitness exercises have to be adapted. Taking into account emotions of the user P during performance of an exercise allows not only for improving user experience but also for better individualizing exercises for a current training session and/or a future training session of the individual user P.
[0062] In an exemplary embodiment, as further illustrated in Figure 5, a captured image of the user P is thus analyzed to identify a face F of the user P and virtual facial markers M. Based on the virtual facial markers M micro-expressions and thus emotions can be assessed and tracked. A likelihood for a certain emotion coinciding with certain (relative) positions and movements of the virtual facial markers M, in particular over time, may be trained by machine learning so that images of the face F of the user P may be associated with a set of pre-defined emotions.
[0063] For example, before, at or shortly after the start of a fitness exercise or of a training session comprising one or more consecutive fitness exercises, the one or more facial characteristics defined by the virtual markers M plus biometric data and motion tracking data of the user P may be used to determine an initial (status quo) emotional status of the user P. During training as instructed by the virtual trainer 40, additional data relating to the facial characteristics may be captured to determine how and to which degree an emotional status of the user P changes. In this context, a set of artificial intelligence processing algorithms may assign scores to a plurality of emotion parameters representative of several different pre-defined emotions. A corresponding set of emotions, namely anger, disgust, fear, happiness, sadness, surprise and neutral, is illustrated in Figure 5. Scores are assigned to these emotions based on the face F of the user P in the captured and analyzed image.
[0064] As further illustrated in the flowchart of Figure 6, different emotion parameters 101 are associated with two different groups of emotions. A first set of emotion parameters is associated with “negative” emotions, in the depicted example of Figure 6 the emotions anger, disgust, fear, sadness and surprise. In addition, a second set of emotions, happiness and neutral in Figure 6, is associated with a second group of “positive” emotions. The sets of the different emotion parameters 101 and their assigned scores are provided as emotional raw data to threshold evaluation algorithms 102A, 102B. These threshold evaluation algorithms 102A, 102B combine the scores for the emotion parameters 101 of one group for respectively calculating a combined (group) score. The combined scores for the “positive” and the “negative” emotions are then further assessed in a metric-based decision algorithm 103 in order to provide an output indicating an emotional status of the user P while performing an exercise.
[0065] A determined emotional status is then further computationally evaluated in an embodiment of the proposed training system in order to automatically decide whether and how the exercise-related content to be presented to the user P is to be adapted. In this context different criteria may be evaluated. This may, for example, also include separately determining (sub-)recommendations for each criterion which are then combined to decide on the actual adaption of the exercise-related content. For example, it may be considered that motion tracking data indicates that the user P correctly imitates or does not correctly imitate the instructed exercises. This might speak for the instructed exercise being appropriate or being too difficult for the individual user P, at least currently, and thus for keeping, reducing or increasing at least one of a tempo, weight or number of repetitions. Temperature data or other biometric data such as a measured heart rate may further speak for not changing or changing an intensity level of the instructed exercises. The analyzed facial characteristics and the resulting emotional status may in turn also indicate that the user struggles with the exercise and appears rather stressed (in particular compared to an initial emotional status determined at the beginning of training) or that the user enjoys the exercises. Based on the emotional status of the user P, a (sub-)recommendation could thus also be to keep, reduce or increase an intensity level of the current workout. An overall assessment taking into account the motion tracking data, the biometric data and the emotional data may then result in keeping the intensity of the exercises unchanged, and thus as originally planned, or in decreasing or increasing the intensity level.
[0066] As shown in the flow diagram of Figure 7, keeping the exercise-related content unchanged may be a decision, depicted by step 202 in Figure 7, after having evaluated all available raw data in a step 201. In case the overall assessment results in a decision to adapt an intensity level, a change in the exercise-related content may be triggered (step 203). A corresponding adaption of the exercise-related content may take place in real-time in order to adapt the current workout. In addition or in the alternative, the system may store that and how the workout is to be adapted for a subsequent training session for the individual user P. As can be seen in Figure 7, in step 204 a decision on adapting exercise-related content may result in, for example, changing a presentation of the exercise by the virtual trainer 40, changing a weight the user P should use when performing the exercise, a duration of the exercise, a number of exercises and/or a number of repetitions, and/or a break or rest time for the user P between two subsequent exercises. Any changes in the exercise-related content may then be stored in a memory, thereby replacing the set i of exercises to be performed by the user in the training session with a new adapted version (step 205).
[0067] In an embodiment of the proposed training system, a real-time feedback generation mechanism may thus be implemented by the machine learning module 60. This real-time feedback generation mechanism reacts to emotional, motion tracking and biometric data captured for the user P. The captured data may be used immediately to alter the exercise-related content presented, for example, by a virtual trainer 40, or may be used at a later point in time. A workout recommendation algorithm as part of the machine learning module 60 may, based on the emotional data, motion tracking data and biometric data, optimize quantitative information relating to the exercises to be performed, such as repetitions, tempo and resting time, and may also suggest alternatives for exercises to be performed in case the captured data indicates that the currently presented exercises are not of an appropriate intensity level. The machine learning module 60 may also allow for influencing a training experience of the user P by adapting the current workout in real-time and also adapting future workouts by learning how the user reacted to the currently presented exercises. This allows for a further optimization of the training for the user P. In this context it may also be provided for changing a presentation of the virtual trainer 40. For example, a tempo with which the virtual trainer 40 performs a demonstrated exercise, facial expressions of the virtual trainer 40 and/or motivational gestures of the virtual trainer 40 may be changed, if applicable also accompanied by changing the tone and/or volume of outputted audible sounds. A corresponding change may result from a deep learning algorithm of the machine learning module 60.
[0068] Figure 8 shows an example of an automatically triggered adaption of a workout to be presented by the fitness device 1 to the user P based on the emotional status of the user P. In the example of Figure 8 the user P is, for example, instructed to do dumbbell bicep curls with a weight of 12 kg aiming for a target of 15 repetitions. While performing the exercise as instructed in front of the fitness device 1, an emotional status of the user P is determined based on the sensor raw data provided by the camera devices 5.1, 5.2, 5.3 and in particular on the analyzed facial characteristics M of the captured face F of the user P. Based on a machine-learned evaluation, the computing device 6 determines 68% of negative emotions, hence a probability of 68% of the user struggling with the exercise performed. At the same time, just 10% of positive emotions are determined, speaking for a significantly lower probability that the user enjoys the exercise and performs it without excessive effort. Respective probabilities/emotional status results 1020A, 1020B are determined while the exercise is still ongoing, for example at the 8th repetition of the exercise.
[0069] In a following step 103 the metric-based decision is reached based on the emotional status results 1020A, 1020B, resulting in metric decision feedback on whether the exercise should be continued as originally instructed or changed. In the present case the provided (raw) data on the emotional status results in a finding 103-1 that the exercise is too hard for the user P. A metric decision recommendation 103-2 of the process therefore selects one of several (here three) possible action options 104 for the exercise instructions given to the user P. Whereas 10% (out of 100%) of the evaluated data speak for keeping the exercise as started, and thus for maintaining an intensity level of the exercise, 30% speak for increasing the intensity level and 60% speak for decreasing the intensity level. In the illustrated example, the algorithms executed on the computing device 6 thus trigger an adaptation of the exercise-related content presented to the user P by the fitness device 1, causing (immediate or later) presentation of instructions 105 to the user P to reduce the weight from 12 kg to 8 kg. The instruction 105 might thus, for example, indicate a corresponding reduction in the weight of the dumbbells for the rest of the current set or the next set in the ongoing training session or for a subsequent training session. Corresponding instructions 105 may be presented visually to the user P via the display 11 and/or audibly via the speaker 17.
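The numeric outcome of this example can be traced in a few lines; reading the metric decision recommendation 103-2 as an argmax over the three action options is an illustrative assumption of this sketch:

# Combined emotional status results while the exercise is ongoing:
negative, positive = 0.68, 0.10   # from the analyzed facial characteristics

# Action options 104 with their evaluated shares (keep / increase / decrease):
action_scores = {"keep": 0.10, "increase": 0.30, "decrease": 0.60}
action = max(action_scores, key=action_scores.get)   # -> "decrease"

if action == "decrease":
    weight_kg = 12
    new_weight_kg = weight_kg - 4   # -> 8 kg, as in the presented instruction 105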
[0070] Generally, the fitness device 1 may also allow for a user to override or choose a level of adaption and guidance by the virtual trainer 40. Furthermore, the computing device 6 may also implement a readjustment mechanism based on a user rating at the end of a training session. Thereby, the system can optimize initial assumptions made for the individual user P. In particular, the user P can rate how intense and likable the workout and the virtual trainer 40 were so that the system may (re)optimize a corresponding configuration of the exercise-related content for the next training session of the user P.
[0071] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be implemented in another embodiment, even if not specifically shown or described. The same elements may also be varied in one or more ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.

Claims

WHAT IS CLAIMED IS:
1. A training system, comprising:
a) at least one fitness device, comprising:
at least one visual presentation module (11) for presenting exercise-related content to a user (P) of the fitness device (1); and
at least one camera device (5.1-5.3) configured to capture images including a face (F) of the user (P) while the at least one visual presentation module (11) presents the exercise-related content; and
b) at least one computing device (6) comprising at least one processor and configured to:
receive the images,
identify and track one or more facial characteristics (M) of the face (F) of the user (P) while the at least one visual presentation module (11) presents the exercise-related content,
determine, at least once while the at least one visual presentation module (11) presents the exercise-related content, an emotional status of the user (P) based at least in part on the one or more facial characteristics, and
adapt the exercise-related content based at least in part on the emotional status.
2. The training system of claim 1, wherein the at least one computing device (6) is further configured to:
determine an initial emotional status based on the one or more facial characteristics at a first point in time,
determine at least one additional emotional status based on the one or more facial characteristics at a subsequent second point in time while the at least one visual presentation module (11) presents the exercise-related content, and
adapt the exercise-related content based at least in part on the at least one additional emotional status.
3. The training system of claim 2, wherein the at least one computing device (6) is further configured to:
determine whether the at least one additional emotional status differs from the initial emotional status by more than a threshold, and
adapt the exercise-related content if the at least one additional emotional status differs from the initial emotional status by more than the threshold.
4. The training system of any one of the preceding claims, wherein determining the one or more facial characteristics comprises assigning a score to at least two emotion parameters representative of at least two different pre-defined emotions.
5. The training system of claim 4, wherein one or more emotion parameters are associated with at least one first group of emotions and one or more emotion parameters are associated with at least one second group of emotions.
6. The training system of claim 4 or 5, wherein determining the emotional status based on the one or more facial characteristics comprises combining the scores assigned to the at least two emotion parameters.
7. The training system of claims 5 and 6, wherein determining the emotional status based on the one or more facial characteristics further comprises separately combining the scores of each group of emotions to respectively generate a combined score for each group of emotions.
8. The training system of claim 7, wherein the at least one computing device (6) is further configured to apply a metric-based evaluation function using the combined scores for each group of emotions for determining the emotional status.
9. The training system of any of the preceding claims, wherein adapting the exercise- related content includes adapting at least one of
- a type of an exercise presented to the user in the exercise-related content,
- a number of repetitions of an exercise presented to the user in the exercise-related content,
- a tempo of an exercise presented to the user in the exercise-related content, and
- a weight to be used for an exercise presented to the user in the exercise-related content.
10. The training system of any of the preceding claims, wherein
presenting the exercise-related content includes presenting a virtual trainer (40) on a display (11) of the fitness device (1) and/or outputting audible sounds via at least one speaker (17) of the fitness device (1); and
adapting the exercise-related content includes changing a visual presentation of the virtual trainer (40) and/or changing the audible sounds.
11. The training system of any of the preceding claims, wherein adapting the exercise- related content includes adapting the exercise-related content in real-time and/or generating an adapted version of the exercise-related content for presentation in a later training session of the user using the fitness device.
12. The training system of any of the preceding claims, wherein determining the emotional status of the user (P) is also based at least in part on biometric data of the user (P) and/or motion tracking data for the user (P).
13. The training system of any of the preceding claims, wherein adapting the exercise- related content is also based at least in part on biometric data of the user (P) and/or motion tracking data for the user (P).
14. The training system of claim 12 or 13, wherein the at least one computing device (6) is further configured to
- generate emotion data associated with the determined emotional status and
- evaluate the emotion data and at least one of the biometric data and the motion tracking data using machine learning for deciding on an adaption of the exercise-related content.
15. The training system of any of the preceding claims, wherein the at least one computing device (6) is part of the fitness device (1) or located remote from the fitness device (1).
16. The training system of any of the preceding claims, wherein the at least one computing device (6) is configured to identify and track the one or more facial characteristics (M) by using markerless face tracking.
17. A method for automatically adapting exercise-related content to be presented to a user (P) using a fitness device (1), comprising:
presenting exercise-related content to the user (P) via the fitness device (1);
capturing images including a face (F) of the user (P) while the exercise-related content is presented to the user (P);
identifying and tracking one or more facial characteristics (M) of the face (F) of the user (P) in the images while the exercise-related content is presented to the user (P);
determining, at least once while the exercise-related content is presented to the user (P), an emotional status of the user (P) based at least in part on the one or more facial characteristics; and
adapting the exercise-related content based at least in part on the emotional status.