GB2639968A - Control device - Google Patents
Control device
- Publication number
- GB2639968A
- Authority
- GB
- United Kingdom
- Prior art keywords
- command
- input signals
- user
- control
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0382—Plural input, i.e. interface arrangements in which a plurality of input device of the same type are in communication with a PC
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0383—Remote input, i.e. interface arrangements in which the signals generated by a pointing device are transmitted to a PC at a remote location, e.g. to a PC in a LAN
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0384—Wireless input, i.e. hardware and software details of wireless interface arrangements for pointing devices
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A control system 10 for use as an input device for devices 4 such as computing devices, game consoles, or vehicles and drones, comprises a capture module 16, an interpretation module, a command emulator, and an output interface 18. The capture module 16 scans for gestures, facial movements and optional voice commands as input signals. The interpretation module maps gestures and other input signals to a control command, based on a command map. The command emulator issues the control command via the output interface 18, thereby controlling the devices 4 in response to a gesture or other input signal. The system 10 comprises a mapping module that allows a user 2 to create or alter command maps, and to select an active command map. This provides a control system 10, configurable by selection of a command map from a set of predefined gestures, for different applications or games. Head movements, body positions and movements may be identified as input signals. Configurations comprise creating control commands corresponding to the duration of input signals; linking two input signals to the same control command; issuing a sequence of control commands; defining an alternative map selection command; associating target users with command maps and ignoring input signals not associated with target users; and preventing use of the same control command by two active command maps.
Description
Control device
Field of the Invention
The present invention relates to a control device, such as a gamepad, providing an input interface. More specifically, the present invention relates to a programmable/mappable device that can interpret user gestures, movement, and optional voice input, and that is self-contained and portable so that it can be used with different controllable devices, such as different games consoles.
Background
Gestures have been used as control input in several applications including research, motion capture, healthcare, rehabilitation, and others. Gesture-capturing devices such as the Microsoft Kinect (RTM) or Nintendo Wii (RTM) allow users to control software, specifically video games, using gestures.
The present invention seeks to provide improved functionality to such devices.
Summary of the Invention
In accordance with a first aspect of the invention, there is provided a control system as defined in claim 1, for use as an input device for a device-to-be-controlled, the control system comprising a capture module, an interpretation module, a command emulator, and an output interface.
The capture module comprises a configuration allowing it to monitor an area to recognise input signals made by a user within said area. The capture module may comprise a camera for image and video capture and a microphone for sound capture.
The capture module may comprise infrared or thermal imaging capability and/or distance sensing capability.
The interpretation module comprises a configuration to identify input signals, as captured by the capture module, and to determine whether or not identified input signals correspond to a control command, based on a command map linking one or more input signals to one or more control commands.
The command emulator is configured to issue the control command, as determined by the interpretation module, as an output signal via the output interface, such that a device-to-be-controlled may receive the control command via the output interface.
The control system further comprises a mapping module for creating one or more command maps, and the control system allows setting one of the command maps as an active command map for use by the interpretation module.
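By way of a purely illustrative sketch of how such a command map and module pipeline might be organised in software (all identifiers are hypothetical assumptions, not the claimed implementation):

```python
# Illustrative sketch only; names and structure are assumptions,
# not the claimed implementation.
from dataclasses import dataclass, field

@dataclass
class CommandMap:
    name: str
    # Links an input-signal identifier (e.g. "arm_raise") to one or
    # more control commands (e.g. ["UP", "RIGHT"]).
    bindings: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class ControlSystem:
    maps: dict[str, CommandMap] = field(default_factory=dict)
    active: str | None = None

    def set_active(self, name: str) -> None:
        # Select one of the stored command maps as the active map.
        if name not in self.maps:
            raise KeyError(f"unknown command map: {name}")
        self.active = name

    def interpret(self, signal_id: str) -> list[str]:
        # Return the control commands bound to a recognised input
        # signal, or an empty list if the active map does not bind it.
        if self.active is None:
            return []
        return self.maps[self.active].bindings.get(signal_id, [])
```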
While certain modules of the system may be provided as hardware components, several aspects of the system may be operated using computer-implemented methods.
In some embodiments, the system comprises a configuration allowing it to identify gestures and/or poses as input signals, and to determine whether or not identified gestures correspond to a control command.
The capture module may be implemented in the form of a camera of sufficient resolution and frame rate to allow it to discern gestures and movement by a user.
Specifically, the capture module may capture movements, as well as absence thereof, during a video sequence, for use by the interpretation module.
This enables the interpretation module to distinguish between body poses, the duration of poses, gestures, the speed of carrying out a gesture and/or the duration during which a specific gesture is maintained.
In some embodiments, the system comprises a configuration allowing it to identify head movements as input signals, and to determine whether or not identified head movements correspond to a control command.
Head movements may include turning, tilting, and other movements. Head movements may include facial movements.
In some embodiments, the system comprises a configuration allowing it to identify body positions and/or movement within the area monitored by the capture module, and to determine whether or not identified body positions and/or movement correspond to a control command.
The area monitored by the capture module may be considered a field of view within which a user may move. To this end, the system may distinguish subregions of the field of view, e.g. divided into thirds, and distinguish between movement in a left third, a middle third, and a right third of the monitored area.
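A minimal sketch of such region-based classification, assuming the user's position has been reduced to a normalised horizontal coordinate (the coordinate and function name are illustrative assumptions):

```python
# Sketch: classify a user's horizontal position into thirds of the
# monitored area; x_norm is the user's centroid in [0, 1) across the frame.
def region_third(x_norm: float) -> str:
    return ("left", "middle", "right")[min(int(x_norm * 3), 2)]
```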
In some embodiments, the system comprises a configuration allowing it to monitor for sound signals or voice from a user, and to use the sound or voice as an input signal.
The sound signals may be recorded using a microphone or other suitable device, for analysis by the interpretation module. For instance, the interpretation module may be programmed to recognise voice commands such as "jump", "crouch", "shoot", "shield", "dock", "pause" and the like.
In some embodiments, the system comprises a configuration allowing it to determine a duration of one or more input signals, and to create a control command corresponding to the duration of the one or more input signals.
In this manner, a user may reproduce a "hold" command.
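A minimal sketch of such duration mapping, assuming a hypothetical emulator interface with press/release semantics:

```python
# Sketch: mirror the duration of an input signal as a button "hold".
# The emulator's press()/release() interface is a hypothetical assumption.
import time

def emulate_hold(emulator, command: str, signal_duration_s: float) -> None:
    emulator.press(command)            # button held...
    time.sleep(signal_duration_s)      # ...for as long as the gesture lasted
    emulator.release(command)
```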
In some embodiments, the system comprises a configuration allowing two input signals to be linked to the same control command.
The interpretation module may be able to combine two input signals into a single command. This may allow alternative input signals to be used for the same command. Likewise, this may allow a user to include a second gesture as a conditional command. In this manner, an isolated input signal (gesture and/or voice) will not result in an accidental command unless carried out together with a second input signal (gesture and/or voice).
The configuration may require two or more input signals to be carried out at the same time, sequentially with overlap, or sequentially without overlap within a pre-determined period of time.
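One possible way to express these temporal relations, sketched under the assumption that each recognised signal carries a (start, end) timestamp pair in seconds (mode names and thresholds are illustrative):

```python
# Sketch: decide whether two timestamped input signals satisfy a
# combination rule. Thresholds and mode names are illustrative assumptions.
def combine_ok(a: tuple[float, float], b: tuple[float, float],
               mode: str = "overlap", gap_s: float = 1.0) -> bool:
    if mode == "simultaneous":
        # starts and ends within a small tolerance of each other
        return abs(a[0] - b[0]) < 0.2 and abs(a[1] - b[1]) < 0.2
    if mode == "overlap":
        return a[0] < b[1] and b[0] < a[1]
    if mode == "sequential":
        # no overlap; the second signal begins within gap_s of the first ending
        first, second = sorted((a, b))
        return second[0] >= first[1] and second[0] - first[1] <= gap_s
    raise ValueError(f"unknown mode: {mode}")
```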
In some embodiments, the system comprises a configuration allowing one input signal to be linked to multiple control commands, to be issued simultaneously via the output interface.
The interpretation module may be able to convert a single input signal into multiple commands. For instance, a diagonal arm movement may be interpreted as simultaneously issued, individual commands "up" and "right", if that is the appropriate key combination of an emulated controller.
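Continuing the earlier hypothetical CommandMap sketch, such a one-to-many binding could be expressed as:

```python
# Sketch: bind one input signal to two simultaneously issued commands.
cm = CommandMap(name="platformer")
cm.bindings["arm_diagonal_up_right"] = ["UP", "RIGHT"]  # issued together
```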
In some embodiments, the system comprises a configuration allowing it to determine a sequence of input signals, and to issue a sequence of control commands, each control command corresponding to an input signal of the command map, via the output interface.
In some embodiments, the system comprises a configuration allowing it to determine time intervals between input signals in said sequence of input signals, and to issue control commands in a sequence with the same time intervals as determined between the input signals.
In this manner, the system reproduces not only the input signals from a user, but also the pauses between the input signals. This allows a user to ensure a pause is made, according to the input signals provided by the user, between commands issued by the command emulator.
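A sketch of such interval-preserving replay, reusing the hypothetical ControlSystem above and assuming an emulator with a send() method:

```python
# Sketch: replay recognised signals as commands, preserving the pauses
# the user left between them. `events` pairs a signal id with its
# observed start time in seconds.
import time

def replay(system, emulator, events: list[tuple[str, float]]) -> None:
    prev_t = None
    for signal_id, t in events:
        if prev_t is not None:
            time.sleep(max(0.0, t - prev_t))  # reproduce the user's pause
        for command in system.interpret(signal_id):
            emulator.send(command)
        prev_t = t
```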
In some embodiments, the system is configured to identify a best match of an identified input signal to the input signals comprised in the active command map, and to determine the control command based on the best match.
The interpretation module may apply a confidence threshold that allows an input signal to be interpreted as a command even if there is a degree of mismatch compared to a reference signal, provided the degree of mismatch is within the confidence threshold.
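A minimal sketch of best-match selection under such a mismatch threshold, assuming input signals have been reduced to feature vectors compared by Euclidean distance (all of this is illustrative, not the claimed method):

```python
# Sketch: pick the reference signal closest to the observed features,
# rejecting the match if the mismatch exceeds a threshold. Feature
# vectors and the distance metric are assumptions.
import math

def best_match(features: list[float],
               references: dict[str, list[float]],
               max_mismatch: float = 0.25) -> str | None:
    if not references:
        return None
    def dist(a: list[float], b: list[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(features, r)) for n, r in references.items()),
                  key=lambda item: item[1])
    return name if d <= max_mismatch else None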
In some embodiments, the system comprises a configuration allowing a user to define an alternative map selection command, wherein recognition of the alternative map selection command replaces the active command map with an inactive map of the one or more command maps.
This allows a user to exchange the command map used by the interpretation module and/or command emulator.
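Reusing the hypothetical ControlSystem sketch, a reserved map-selection signal could swap the active map without emitting a control command (the selector mapping is an assumption):

```python
# Sketch: a reserved "map selection" input swaps in a named inactive map;
# the selector mapping (signal id -> map name) is an assumption.
def handle_signal(system, signal_id: str, selector: dict[str, str]) -> list[str]:
    if signal_id in selector:              # e.g. "two_hand_clap" -> "racing"
        system.set_active(selector[signal_id])
        return []                          # the selection itself emits nothing
    return system.interpret(signal_id)
```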
In some embodiments, the system comprises a configuration allowing a user to identify a first target user within the area monitored by the capture module, and to associate a first target user with a command map.
In some embodiments, the system comprises a configuration allowing a user to identify one or more further target users within the area monitored by the capture module, and to associate each one of the one or more further target users with a user-specific, different command map.
In some embodiments, the system comprises a configuration to issue control commands only for identified input signals from a target user.
In some embodiments, the system comprises a configuration to ignore input signals not associated with a target user.
In some embodiments, the system is configured to use a plurality of active command maps contemporaneously.
The plurality of active command maps may be user-specific maps.
In some embodiments, the system comprises, when configured with a plurality of active command maps, a configuration to prevent use of the same control command by two active command maps.
This allows multiple users to use the same gestures while ensuring that a gesture by one user does not result in a command output for another user.
In some embodiments, the output interface comprises a plurality of channels, each channel used for issuing control commands from one active command map.
In this manner, a single control device may be used to monitor an area used by multiple users, and to emulate control signals for different data communication ports. E.g., the control device may comprise two output ports (e.g. USB1 and USB2), issue commands from a first user to a first output port, and issue commands from a second user to a second output port.
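The multi-user routing and command-collision check described above might be sketched as follows, again with all interfaces assumed rather than prescribed:

```python
# Sketch: route commands from user-specific active maps to separate
# output channels (e.g. USB1/USB2), and refuse to activate a map whose
# control commands are already claimed by another active map.
def activate(active: dict, channels: dict, user: str, cmap, channel) -> None:
    in_use = {c for m in active.values()
                for cmds in m.bindings.values() for c in cmds}
    requested = {c for cmds in cmap.bindings.values() for c in cmds}
    clash = in_use & requested
    if clash:
        raise ValueError(f"commands already claimed: {clash}")
    active[user] = cmap
    channels[user] = channel   # this user's commands go to this port

def dispatch(active: dict, channels: dict, user: str, signal_id: str) -> None:
    cmap = active.get(user)
    if cmap is None:
        return                 # ignore signals from non-target users
    for command in cmap.bindings.get(signal_id, []):
        channels[user].send(command)
```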
In some embodiments, the system comprises a setting to adjust a monitored body region of a user, the monitored body region comprising one of a head region, a torso region, a combined head-and-torso region, a lower limb region, and a full body region.
The torso region may include an upper limb region, comprising arm movements, poses and/or gestures, and hand movements, poses and/or gestures. The lower limb region may comprise leg movements, poses and/or gestures, and/or foot movements, poses and/or gestures. As such, the system may monitor for the movement of a head, hands, arms, legs, feet, as well as torso movements such as bending, squatting and/or crouching.
In some embodiments, the system is comprised in a housing comprising a mounting location for attachment of a stand or clamp.
In some embodiments, the system is comprised in a housing comprising an integral stand.
In some embodiments, the system is comprised in a housing comprising an integral clamp.
In some embodiments, the output interface comprises a wireless communications protocol.
The wireless communications protocol may be any suitable protocol such as WiFi, Bluetooth (RTM), etc.
In some embodiments, the output interface comprises a wired communications protocol.
The wired communications protocol may be any suitable protocol such as LAN, USB, etc.
One or more steps of the method may be implemented in the form of software instructions. The software may be incorporated in the system of the first aspect. The system may comprise a processor and software instructions implemented by the processor. The software may be provided on a non-transitory storage medium, for instance in the form of a computer program or application.
Description of the Figures
Exemplary embodiments of the invention will now be described with reference to the Figures, in which: Figure 1 is a schematic illustration of an embodiment in operation; Figure 2 is a schematic illustration of components of an embodiment; Figure 3 illustrates an exemplary user interface for use with an embodiment; Figure 4 shows steps of an exemplary configuration method; and Figure 5 shows steps of an exemplary method implemented in embodiments.
Description
Figure 1 shows an environment 1 such as a room, used by a user 2 to interact with a controllable device 4. The device 4 may be a games console operatively connected to, or comprising, a display 6. The display 6 may be a screen, projector or other suitable display. The display 6 may be integral with the device 4, or connected to the device 4 via an interface.
Mounted on the controllable device 4 is, here, a control device 10 constituting part of a control system of the invention. The control device 10 comprises a housing 12 and a mount 14 in the form of a clamp, tripod, grip feet, or the like, by which it is supported on, or affixed to, the device 4 or to the display 6, or onto a suitable support such as a motorised stage, as the case may be. By way of the mount 14, the device 10 may be installed at a location and position convenient for a user, to remain in place for prolonged periods of time, while permitting the device 10 to be reused when the controllable device 4 or display 6 is changed or upgraded. The control device 10 is portable.
The control device 10 comprises a camera module 16 operative to obtain images or video of an area 20 of the environment 1. The control device 10 is connected via an interface 18 to an input port of the controllable device 4. The interface 18 may be constituted by, or comprise, one or more wired channels, such as a universal serial bus (USB) or other suitable wired connection. The interface 18 may be constituted by, or comprise, one or more wireless channels such as for communication with the controllable device 4. Furthermore, the device 10 comprises a user indicator 15, such as a LED configuration or ambilight.
As will be appreciated, by setting up the control device 10 such that its camera module 16 monitors the area 20, the device 10 is able to monitor movements, poses, gestures, head movements, facial movements, head and/or eye movement within the area 20. The user may define the size of the area 20, for instance in a room or exercise area.
As such the control device 10 can be used by a user moving between different locations of the area 20.
The control device 10 comprises a processing system configured to recognise movements, gestures, and poses of the user 2, to be able to discern different movements of a user 2 and to interpret the movements as an input signal to carry out a particular command.
Several suitable types of camera module 16, interface 18, and mounting system for the mount 14 will be known to a skilled person, and are not described in detail herein. It will be appreciated that parameters such as frame rate, data transfer rate and/or image resolution may have to be selected in a manner appropriate to the input signal to be monitored. E.g. a system configured to monitor for arm gestures, head movements, and user movement (e.g., walking within a field of view of the camera module 16) may be able to operate sufficiently at a lower image resolution than a system monitoring eye movements or finger movements. The user signals may be analysed by a machine learning algorithm.
A consideration underlying the development of the invention was that different gestures, poses and user movements may be mapped to a general controller command. By emulating the control signals issued by a human interface controller, the device 10 is able to provide a controller function, controlled by movements, specifically gestures, poses and facial movement, as well as walking movement, of the user 2.
Figure 2 illustrates components of the control device 10. The control device 10 comprises a processor 30 and memory comprising instructions to be carried out by the processor 30, a camera module 16, a gesture recognition module 32, and an optional voice recognition module 33. The camera module 16 constitutes a capture module. The gesture recognition module 32 and the voice recognition module 33 constitute an example of an interpretation module.
The control device 10 comprises a mapping module 34 and a database 35, comprising a gestures-and-poses database 40, an optional sound database 41, a command database 36, and an emulated devices database 38. The device 10 further comprises a data interface comprising a wired communications interface, such as the interface 18 and/or a wireless communications interface. The processor 30 and the interface 18 provide the functionality of a command emulator and output interface.
The gestures-and-poses database 40 may comprise a repertoire of data for the identification of head movements, gestures, poses, and/or walking movements to be recognised by the gesture recognition module 32. Exemplary head movements, gestures and poses may include head tilting, nodding, one arm stretched, two arms stretched, crouching down, jumping up, a shooting position, boxing stance, and others.
It will be appreciated that two or more of the different components illustrated in Figure 2 may be comprised within a single component or common component. A mapping module 34 and the databases may be stored in a single memory module. The memory module may be integral with the device and/or provided as a drive or memory storage device, e.g. as a memory card. One or more databases of the database 35 and the mapping module 34 may be stored in memory of the device 10. In some embodiments, one or more of the databases of the database 35 may be stored outside the device 10, e.g. in a networked storage area and/or on cloud storage. It is envisaged that a consumer device such as the controllable device 4 may comprise pre-installed configuration data to be loaded into the database 35 upon connection of the device 10 to the controllable device 4.
Figure 3 illustrates a menu 50 for presentation to a user, the menu 50 providing an exemplary configuration interface. The menu 50 may be arranged for display on a display device 8, such as a mobile device, tablet or the like. The menu 50 may be displayed on the display 6 of the controllable device 4.
The menu 50 comprises, for presentation to a user, a title area 52, a profile area 54, a command selection area 56, an emulated controller area 58, an input signal area 60, a confirmation button 62 and a cancel button 64. Each area of the menu 50 may be considered a sub-menu. The areas may not necessarily be displayed on one screen.
Some of the areas may be presented after opening a separate window. The menu 50 may comprise additional menu buttons (not shown in Figure 3) for a user to access help, technical information, version data and/or legal information.
The menu 50 may comprise a user select button 66 to assign a mapping configuration to a user. In this manner, the menu 50 may allow multiple user configurations to be saved on the same device, such that a single device 10 can be used to receive input signals from multiple users.
Optionally, and as illustrated in Figure 3, the menu 50 may comprise a delete button 68 to access a delete function to allow a user to remove mapping configurations. The menu 50 allows a user to define a mapping configuration for a particular device and/or for a particular application run on a device.
The title area 52 may display information in relation to the device 10, and information such as text or images indicating to a user that the display device 8 is presenting a menu permitting configuration of the device 10.
The emulated controller area 58 provides a list of controller types to be emulated, such as a first controller type 58a, a second controller type 58b, a third controller type 58c, a fourth controller type 58d and a fifth controller type 58e. The list of controllers may be populated from controllers saved in the emulated devices database 38. As will be appreciated, different controller types 58a to 58e may comprise different input configurations and button layouts, and therefore may allow a different number of input signals to be assigned to the controller type. The input configurations for each controller type may be stored in the command database 36.
In the example illustrated in Figure 3, the third controller type 58c is a generic controller type selected by a user, highlighted by a menu indicator 59, here in the form of a frame. However, it will be appreciated that there are different options for showing the number of available controllers and for indicating a confirmed selection.
The command selection area 56 presents the commands executable by the controller selected in emulated controller area 58. As will be appreciated, selection of a controller type in the emulated controller area 58 causes the command selection area 56 to be updated to match the selected controller highlighted by the menu indicator 59.
The input signal area 60 shows the input signal, such as a type of gesture, posture, movement, or other user signal, and optionally a sound or voice command, associated with the commands in the command selection area 56. To this end, a selection field 61 may be presented to highlight a particular command to be updated. A user may use the selection field 61 to select one of the commands of the command selection area 56, and to assign an input signal to the selected command. The input signals available for association with a command may be retrieved from the gestures-and-poses database and, where present, from the sound database 41. In this manner, a user can assign different input signals to each one of the commands actuatable by the controller illustrated in the emulated controller area 58, or to the controller highlighted by the menu indicator 59.
The profile area 54 provides a number of input fields. A title field 54a allows a user to enter an identifier for a mapping configuration. The profile area 54 may comprise status indicators, such as a status indicator 54b providing an indication of whether a mapping configuration has been named with an identifier, and a status indicator 55 providing an indication of whether all commands of a device have been assigned an input signal.
The different controller types 58a to 58e may be illustrated by symbols and/or descriptive text. Controller types with at least one complete mapping configuration may be marked in one colour, e.g. green. Controller types without at least one complete mapping configuration may be marked in another colour, e.g. grey or red.
While not illustrated in Figure 3, it will be appreciated that any one or more of the profile area 54, the command selection area 56, the emulated controller area 58, and the input signal area 60 may open a submenu to offer a wider range of selections.
Several mapping configurations may be stored in the mapping module 34, and one of the mapping configurations may be selected by a user as an active mapping configuration. A user may select multiple active mapping configurations, for instance for a multi-user configuration.
The control device 10 may be pre-configured with one or more mapping configurations for a range of controller types 58a to 58e, such that the device 10 can be used without a need for configuration via the menu 50 by a user. However, in some variants, the device may comprise a configuration requiring a user to set up a mapping configuration before permitting selection of a controller type.
In use, the device 10 receives image and/or sound data captured by the camera module 16. The gesture recognition module 32 and optional voice recognition module 33 analyse the data from the camera module 16 to identify if a user 2 has made a movement or pose in the area 20 that can be correlated with a movement, pose or gesture stored in the gestures-and-poses database 40 and/or the sound database 41. If an input signal corresponds to an input signal included in the active mapping configuration, as stored in the mapping module 34, the processor 30 issues the command associated with the input signal via the interface 18 to the controllable device 4. As will be appreciated, in this manner the control device 10 allows a user to use gestures, poses and movements to control the controllable device 4 without a need for a conventional controller.
The device 10 allows a user to program a single universal gesture map for use with the controllable device 4. In addition, a user may create different mapping configurations for different applications, or for different sub-routines or levels. To provide an illustrative example, a user may assign active gestures for a laterally scrolling jump-and-run game, such that a forward-moving gesture, such as walking on the spot, is emulated as a "right button", if the "right button" is the main progressing button in a laterally scrolling game. Likewise, a user may assign a bow-drawing gesture for an archery application or gestures indicating the operation of a driving wheel for a racing application. In this manner, different mapping configurations, i.e. different combinations of gesture, pose and/or movement commands, may be used to create generic controller commands, using a single programmed device 10.
The gesture recognition module 32 and the voice recognition module 33 may employ a machine learning algorithm to recognise a pose, gesture, and/or movement. The interpretation may apply a level of granularity depending on the input signals defined in the active mapping configuration. The device 10 may allow a user to train a machine learning model to recognise user-specific inputs that a user may assign to emulated commands via the menu 50.
The device 10 may comprise a training mode in which a user may perform gestures to be interpreted by the device 10. The training mode may be used by a user to fine-tune a particular gesture as an input command. To provide an illustrative example, the device may comprise a gesture "stretched arm" in the gestures-and-poses database. If a user is not able to perform a stretched-arm gesture, for instance due to impairment, the training mode allows a user to train the device with a gesture she or he is able to perform. The training mode is thought to be useful during recovery, physiotherapy, training, etc.

Figure 4 shows a method 70 via which a mapping configuration for a command map may be programmed into the device 10. In step 72, a device configuration menu, such as illustrated in Figure 3, is presented to a user. In step 74, a user selects a controller that is to be emulated by the device 10 in a particular command map. Step 74 may be carried out using the emulated controller area 58 of the menu 50. In step 76, the commands of the controller to be emulated are defined. The commands may be defined upon selection of the controller to be emulated, and may be retrieved from the emulated devices database 38. In steps 78 and 80, input signals are selected and assigned to the commands of the controller to be emulated. The input signals may be retrieved from the gestures-and-poses database 40 and the sound database 41. In an optional step 82, a determination is made whether all commands of the controller to be emulated have been assigned at least one input signal. The result of the determination, i.e. whether the command map is complete or incomplete, may be indicated via the status indicator 55. In an optional step 84, a software application (e.g., a game) is assigned or otherwise associated with the command map defined in the mapping configuration. In step 86, the mapping configuration is stored as a command map. For instance, the command map may be stored in the mapping module 34. The mapping module 34 may comprise a plurality of mapping configurations, e.g. different mapping configurations for different controlled devices 4, and/or for different software applications. In step 88, a user may select one of the mapping configurations as an active mapping configuration.
As will be appreciated, multiple active mapping configurations may be active simultaneously for multiple users.
Figure 5 shows a method 90 of using a control device 10. In step 92, the device 10 records (e.g., films) the observable area. Recorded data are analysed to identify input signals such as user movement, poses, and/or gestures, or optional sound. In step 94, the input signals are matched to commands in the active command map of the mapping configuration. In step 96, the output signal corresponding to the command is identified. To this end, in step 98, a group of output commands may be identified for a single input signal. Likewise, in step 98a, a single output command may be identified corresponding to multiple input signals.
In an optional step 100, the duration of the outputted commands is matched to the duration of the input signals. In an optional step 102, a determination is made if a sequence of input signals corresponds to a sequence of commands. E.g., a simplified input sequence may correspond to a more complex command sequence. In an optional step 104, intervals between input signals are identified and maintained between outputted commands. In step 106, an emulated output signal is provided via the interface, such as interface 18, to the controlled device 4. The emulated output signal is determined by the input signal from the user 2.
Referring to Figure 1, the control device 10 may be configured to monitor a widest-possible area, such as an area 20. However, in some embodiments, the control device 10 is configurable to monitor a smaller area, or to follow a part of a user. For instance, the device 10 may be configurable to follow a full height 22a of a standing user, or a head-and-torso region 22b of a user, or a body part region 22c of a user, such as a head region or limb region.
The device 10 is able to monitor the area 20 for user movement and/or for the presence of a user 2 within the area 20. This enables the device 10 to remain active in the presence of a user 2, even while the user 2 does not carry out a gesture or movement that is to be interpreted as a control command. The active status may be indicated to a user 2 via the user indicator 15, to indicate that the device 10 is perceiving a user 2 within the area 20, whether or not a control command is issued for a controllable device 4.
While the control device has been developed for use as an input device for games used with games consoles, computers and home computers, the control device may also be used in the entertainment sector, booths such as custom exhibition booths, as well as in rehabilitation and/or other healthcare environments. The control device may be used with any controllable device accepting a controller input, including vehicle controllers and drone controllers.
The illustrated embodiment shows a self-contained control device 10 to be connected to a controllable device 4, such as a games console. In a variation of the illustrated embodiment, the control device 10 is constituted by a mobile device, such as a tablet or mobile phone, configured with software (such as an "app") to use a camera arrangement and optionally a microphone arrangement of the mobile device as the camera module 16. A display screen of the mobile device may be used as the display 6. As will be appreciated, the mobile device may not necessarily comprise an integral mount 14.

The control device 10 may be coupled with a motorised stage, such as a rotating stage, comprising a cradle or mount, such as a suitable clamp, to support the control device 10. For instance, the motorised stage may be controllable to rotate or translate the position of the cradle or mount. As will be appreciated, a mobile phone or tablet seated in the cradle or secured to the mount may be rotated or moved according to a control command from the device 10. The output interface may, in that case, be used to issue control commands to an application executed on the mobile device. Alternatively, or in addition, control commands may control the operation of the motorised stage. In this manner, a movement to a side, e.g. left or right, may be used as a user input to create a control command to instruct the motorised stage to follow the user to the left, or right, as the case may be. In this manner, the display 6 may follow a user. To provide an illustrative example, the cradle with the mobile device may be mounted on a stand in the centre of a room, and a user may use an exercise application that requires them to move around the stand, wherein the mobile device rotates on the stand such that the display faces towards the user.
For the recognition of gestures and user movements, the device 10 may employ machine learning and artificial intelligence systems, which may also be trained by a user. The exact method of recognising gestures is not disclosed in detail herein, and a range of different methods will be known to a skilled person.
When used with machine learning, the device 10 may also be programmed to interpret commands only from a pre-defined user, wherein the pre-defined user may be identified based on a command input. For instance, the device 10 may be instructed via a voice command to accept commands from a person identified as the user for a user-specific command map, for instance "User 1 wears a red shirt". Provided the instruction comprises a unique identifier, suitable machine learning algorithms are able to determine automatically which of a plurality of users wears a red shirt, and to proceed to interpret gestures from the identified person (here: "User 1") only. As will be appreciated, in this manner, the control device may be controlled to monitor for inputs from different users simultaneously.
Embodiments of the invention allow a simplified gesture sequence to be mapped to a more complex sequence, e.g. to use a sequence of two or three gestures to create an output sequence of four or five control commands. To provide an illustrative example, the device may be configured by a user to interpret a left-right-left shifting movement as a 360° rotation command sequence for an in-game character, that is issued to an application as a rotation sequence "up-left-down-right", from the control device 10 to the controllable device 4.
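Such sequence-to-macro expansion could be sketched as a lookup from a short recognised gesture tuple to a longer command sequence (the identifiers and table contents are hypothetical):

```python
# Sketch: map a short recognised gesture sequence to a longer emitted
# command sequence ("macro"); the table contents are illustrative only.
SEQUENCE_MACROS: dict[tuple[str, ...], list[str]] = {
    ("shift_left", "shift_right", "shift_left"): ["UP", "LEFT", "DOWN", "RIGHT"],
}

def expand_sequence(recent: tuple[str, ...]) -> list[str]:
    return SEQUENCE_MACROS.get(recent, [])
```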
The device 10 may be designed in the form of a self-contained device. Alternatively, the device 10 may be provided in the form of a software product to be installed on a suitable device, such as a mobile phone or tablet. For instance, a mobile phone with sufficient processing power, video capture capability and a communication interface may be positioned such that it monitors an area for user inputs, and emulates a control command based on a user gesture, movement or pose, to provide the control command to a controllable device.
In a variation of the above embodiments, the device 10 may comprise a configuration to scan or otherwise receive an input from the electronic device that identifies a particular application or game, to allow the device to self-configure by assigning an active command map for a specific application or game.
The illustrated control device 10 is intended to be a portable device that can be configured, or personalised, by a user with command maps. However, it will be appreciated that the invention may be integrated with a games console if desired.
Whilst the principle of the invention has been illustrated using exemplary embodiments, it will be understood that the invention is not limited to exemplary embodiments and that the invention may be embodied by other variants defined within the scope of the appended claims.
Claims (25)
- CLAIMS: 1. A control system for use as an input device for a device-to-be-controlled, the control system comprising a capture module, an interpretation module, a command emulator, and an output interface, wherein the capture module comprises a configuration allowing it to monitor an area to recognise input signals made by a user within said area, wherein the interpretation module comprises a configuration to identify input signals and to determine whether or not identified input signals correspond to a control command, based on a command map linking one or more input signals to one or more control commands, wherein the command emulator is configured to issue the control command via the output interface, such that a device-to-be-controlled may receive the control command via the output interface, wherein the control system further comprises a mapping module for creating one or more command maps, and wherein the control system allows setting one of the command maps as an active command map for use by the interpretation module.
- 2. The system according to claim 1, comprising a configuration allowing it to identify gestures and/or poses as input signals, and to determine whether or not identified gestures correspond to a control command.
- 3. The system according to claim 1 or 2, comprising a configuration allowing it to identify head movements as input signals, and to determine whether or not identified head movements correspond to a control command.
- 4. The system according to any one of the preceding claims, comprising a configuration allowing it to identify body positions and/or movement within the area monitored by the capture module, and to determine whether or not identified body positions and/or movement correspond to a control command.
- 5. The system according to any one of claims 2 to 4, further comprising a configuration allowing it to monitor for sound signals or voice from a user, and to use the sound or voice as an input signal.
- 6. The system according to any one of the preceding claims, comprising a configuration allowing it to determine a duration of one or more input signals, and to create a control command corresponding to the duration of the one or more input signals.
- 7. The system according to any one of the preceding claims, comprising a configuration allowing two input signals to be linked to the same control command.
- 8. The system according to any one of the preceding claims, comprising a configuration allowing one input signal to be linked to multiple control commands, to be issued simultaneously via the output interface.
- 9. The system according to any one of the preceding claims, comprising a configuration allowing it to determine a sequence of input signals, and to issue a sequence of control commands, each control command corresponding to an input signal of the command map, via the output interface.
- 10. The system according to claim 9, comprising a configuration allowing it to determine time intervals between input signals in said sequence of input signals, and to issue control commands corresponding to a sequence with the same time intervals determined between input signals.
- 11. The system according to any one of the preceding claims, wherein the system is configured to identify a best match of an identified input signal to the input signals comprised in the active command map, and to determine a control command based on the best match.
- 12. The system according to any one of the preceding claims, comprising a configuration allowing a user to define an alternative map selection command, wherein recognition of the alternative map selection command replaces the active command map with an inactive map of the one or more command maps.
- 13. The system according to any one of the preceding claims, comprising a configuration allowing a user to identify a first target user within the area monitored by the capture module, and to associate a first target user with a command map.
- 14. The system according to any one of the preceding claims, comprising a configuration allowing a user to identify one or more further target users within the area monitored by the capture module, and to associate each one of the one or more further target users with a command map.
- 15. The system according to claim 13 or 14, comprising a configuration to issue control commands only for identified input signals from a target user.
- 16. The system according to any one of claims 13 to 15, comprising a configuration to ignore input signals not associated with a target user.
- 17. The system according to any one of the preceding claims, configured to use a plurality of active command maps contemporaneously.
- 18. The system according to claim 17, comprising, when configured with a plurality of active command maps, a configuration to prevent use of the same control command by two active command maps.
- 19. The system according to any one of the preceding claims, wherein the output interface comprises a plurality of channels, each channel used for issuing control commands from one active command map.
- 20. The system according to any one of the preceding claims, comprising a setting to adjust a monitored body region of a user, the monitored body region comprising one of a head region, a torso region, a combined head-and-torso region, a lower limb region, and a full body region.
- 21. The system according to any one of the preceding claims, comprised in a housing comprising a mounting location for attachment of a stand or clamp.
- 22. The system according to any one of the preceding claims, comprised in a housing comprising an integral stand.
- 23. The system according to any one of the preceding claims, comprised in a housing comprising an integral clamp.
- 24. The system according to any one of the preceding claims, wherein the output interface comprises a wireless communications protocol.
- 25. The system according to any one of the preceding claims, wherein the output interface comprises a wired communications protocol.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2404590.8A GB2639968A (en) | 2024-03-28 | 2024-03-28 | Control device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2404590.8A GB2639968A (en) | 2024-03-28 | 2024-03-28 | Control device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202404590D0 GB202404590D0 (en) | 2024-05-15 |
| GB2639968A true GB2639968A (en) | 2025-10-08 |
Family
ID=91023504
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2404590.8A Pending GB2639968A (en) | 2024-03-28 | 2024-03-28 | Control device |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2639968A (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080036743A1 (en) * | 1998-01-26 | 2008-02-14 | Apple Computer, Inc. | Gesturing with a multipoint sensing device |
| US20080253613A1 (en) * | 2007-04-11 | 2008-10-16 | Christopher Vernon Jones | System and Method for Cooperative Remote Vehicle Behavior |
| US20140157209A1 (en) * | 2012-12-03 | 2014-06-05 | Google Inc. | System and method for detecting gestures |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202404590D0 (en) | 2024-05-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6155448B2 (en) | Wireless wrist computing and controlling device and method for 3D imaging, mapping, networking and interfacing | |
| TWI716706B (en) | Ai-assisted operating system | |
| US20140188009A1 (en) | Customizable activity training and rehabilitation system | |
| US20160314620A1 (en) | Virtual reality sports training systems and methods | |
| CN111095150A (en) | Robot as personal trainer | |
| US9084933B1 (en) | Method and system for physiologically modulating action role-playing open world video games and simulations which use gesture and body image sensing control input devices | |
| US20140121010A1 (en) | Method and system for video gaming using game-specific input adaptation | |
| US20180181367A1 (en) | Method for providing virtual space, program and apparatus therefor | |
| KR102449439B1 (en) | Apparatus for unmanned aerial vehicle controlling using head mounted display | |
| US12059621B2 (en) | Dynamic game models | |
| EP4311584A2 (en) | Data processing apparatus and method | |
| US20250153035A1 (en) | Custom TouchSync Editor for a Game Controller | |
| EP4140549A1 (en) | Ai onboarding assistant | |
| JP2020510471A (en) | Video game control method and apparatus | |
| CN103785169A (en) | Mixed reality arena | |
| JP7356827B2 (en) | Program, information processing method, and information processing device | |
| GB2574205A (en) | Robot interaction system | |
| GB2639968A (en) | Control device | |
| US20170246534A1 (en) | System and Method for Enhanced Immersion Gaming Room | |
| KR20150097050A (en) | learning system using clap game for child and developmental disorder child | |
| US10242241B1 (en) | Advanced mobile communication device gameplay system | |
| TW202042019A (en) | Auto feed forward/backward augmented reality learning system | |
| WO2018154327A1 (en) | Computer interface system and method | |
| EP4020964B1 (en) | Camera control method and apparatus, and terminal device | |
| US11435972B2 (en) | Immersive multimedia system, immersive interactive method and movable interactive unit |