HK1244071A - Method and apparatus for human-computer interaction
- Publication number: HK1244071A (Application number HK18100646.8A)
- Authority: HK (Hong Kong)
- Prior art keywords: scene, movable control, finger, touch event, response
Description
Technical Field
Various embodiments of the present disclosure relate to interaction between a user and a mobile electronic device, and more particularly, to methods and apparatus for human-machine interaction.
Background
Today, various portable electronic devices have been developed to provide a user-friendly interface that facilitates user operation. Examples of such portable electronic devices include, but are not limited to, smart phones, Mobile Internet Devices (MIDs), tablet computers, Ultra Mobile Personal Computers (UMPCs), Personal Digital Assistants (PDAs), web pads, handheld Personal Computers (PCs), interactive entertainment computers, and gaming terminals. These electronic devices are equipped with touch-sensitive screens (referred to simply as touch screens), which make them more user-friendly and easier to operate.
Mobile touch screen applications are applications developed based on touch technology that run on portable electronic devices. For example, a mobile touch screen game is an electronic game application that a user operates on a portable electronic device over a mobile communication network, including, for example, a character game, a strategy game, an action game, and the like. In mobile touch screen games of the type described above, it is often desirable to control the walking, turning, and other actions of a character in a game scene.
Current mobile touch screen first-person shooter (FPS) and third-person shooter (TPS) games are generally controlled by a dual-joystick operating scheme with fixed shooting key positions. The dual-joystick scheme satisfies, to a certain extent, the requirements of walking, turning, firing, and other controls in FPS or TPS games. For example, sliding the left joystick may cause the character to walk in a horizontal direction, while sliding the right joystick may cause the perspective (virtual camera lens) to rotate, i.e., turn. In this operating mode, when a different operation such as shooting, jumping, or squatting needs to be performed, the user must stop operating one of the two joysticks in order to free a finger to tap the corresponding virtual key, thereby completing operations such as shooting, opening the scope, jumping, or squatting.
Therefore, the dual-joystick scheme described above has the problem that the user cannot perform another operation, such as shooting or opening the scope, while turning. This is because the dual-joystick scheme simply imitates the key layout of a gamepad in a console game; on a touch screen, however, only the user's left and right thumbs are available for operation, while the remaining four fingers of each hand must support the weight of the mobile device in the non-operation area below it. As a result, the user can tap a virtual key such as shooting or scope-opening with a thumb only after lifting it from the virtual key used for walking or turning. Consequently, games such as FPS and TPS games, which require precise operations, cannot properly perform necessary complex operations such as "aiming while running" or "aiming while shooting", making it difficult for the user to experience the game normally.
Furthermore, similar problems exist in other applications where simultaneous view angle rotation and additional control actions are required.
Disclosure of Invention
In view of one or more of the above and other potential problems, various embodiments of the present disclosure provide a method and apparatus for human-computer interaction.
According to a first aspect of the present disclosure, there is provided a method for human-computer interaction, comprising: displaying a scene and a movable control for triggering a predefined action in the scene in a graphical user interface; detecting a touch event on a touch screen; in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen; and triggering the predefined action in the scene in response to the touch event being a single-finger click of the movable control.
According to an exemplary embodiment of the present disclosure, the method further comprises: in response to the touch event being a single-finger click on a region of the touch screen outside of the movable control, moving the movable control to a touch point of the single-finger click on the touch screen.
According to an exemplary embodiment of the present disclosure, the method further comprises: in response to the touch event being a single-finger long press of the movable control, the predefined action is continuously triggered in the scene.
According to an exemplary embodiment of the present disclosure, the method further comprises: in response to the touch event being a single-finger drag on the movable control, the predefined action is continuously triggered in the scene and the perspective of the scene is changed.
According to an exemplary embodiment of the present disclosure, the method further comprises: displaying in the graphical user interface at least one fixed control for triggering a respective action in the scene.
According to an exemplary embodiment of the present disclosure, the movable control is located in a region of the graphical user interface corresponding to a right half-screen of the touch screen.
According to an exemplary embodiment of the present disclosure, the scene is a first-person shooter game scene, and the predefined action is a shooting action or a scope-opening action.
According to a second aspect of the present disclosure, there is provided an apparatus for human-computer interaction, comprising: a display module configured to display a scene and a movable control for triggering a predefined action in the scene in a graphical user interface; a detection module configured to detect a touch event on a touch screen; a perspective conversion module configured to change a perspective of the scene in response to the touch event being a single-finger swipe on the touchscreen; a tracking module configured to cause the movable control to track a trajectory through which the single-finger slide passes on the touchscreen in response to the touch event being a single-finger slide on the touchscreen; and a triggering module configured to trigger the predefined action in the scene in response to the touch event being a single-finger click of the movable control.
According to an example embodiment of the present disclosure, the tracking module is further configured to move the movable control to a tap point of the single-finger click on the touch screen in response to the touch event being a single-finger click on a region of the touch screen outside of the movable control.
According to an exemplary embodiment of the present disclosure, the triggering module is further configured to continuously trigger the predefined action in the scene in response to the touch event being a single-finger long press of the movable control.
According to an exemplary embodiment of the present disclosure, the triggering module is further configured to continuously trigger the predefined action in the scene in response to the touch event being a single-finger drag of the movable control, and the perspective conversion module is further configured to change the perspective of the scene in response to the touch event being a single-finger drag of the movable control.
According to an exemplary embodiment of the disclosure, the display module is further configured to display in the graphical user interface at least one fixed control for triggering a respective action in the scene.
According to an exemplary embodiment of the present disclosure, the movable control is located in a region of the graphical user interface corresponding to a right half-screen of the touch screen.
According to an exemplary embodiment of the present disclosure, the scene is a first-person shooter game scene, and the predefined action is a shooting action or a scope-opening action.
According to a third aspect of the present disclosure, there is provided a method for human-computer interaction, comprising: displaying a first-person shooter game scene and a movable control for triggering a shooting action of a gun without a sighting telescope in the first-person shooter game scene in a graphical user interface; detecting a touch event on a touch screen; in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen; and in response to the touch event being a touch of the movable control, triggering the shooting action in the first-person shooter game scene.
According to a fourth aspect of the present disclosure, there is provided a method for human-computer interaction, comprising: displaying a first-person shooter game scene and a movable control for triggering scope-opening, scope-closing, and shooting actions of a gun with a sighting telescope in the first-person shooter game scene in a graphical user interface; detecting a touch event on a touch screen; in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen; triggering the scope-opening action in the first-person shooter game scene in response to the touch event being a single-finger click of the movable control; triggering the scope-opening action in the first-person shooter game scene and changing a perspective of the first-person shooter game scene in response to the touch event being a single-finger drag on the movable control; and in response to ceasing to touch the movable control, triggering the scope-closing action and the shooting action in the first-person shooter game scene.
According to a fifth aspect of the present disclosure, a computer-readable storage medium is provided. The computer readable storage medium has computer readable program instructions stored thereon for performing the steps of the method described above.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising any one of the above-mentioned apparatuses for human-computer interaction.
In the embodiments of the present disclosure, by providing a movable control for triggering a predefined action on the graphical user interface, the movable control can track the touch position as the user's finger slides on the touch screen, so that the predefined action can be conveniently performed using the movable control while the perspective of the scene is being rotated.
Drawings
These and other objects, features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings, wherein:
FIG. 1 shows a flow diagram of a method for human-computer interaction, according to an embodiment of the present disclosure; and
FIG. 2 shows a block diagram of an apparatus for human-computer interaction, according to an embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In describing example embodiments, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "in response to" means "at least partially in response to". The term "one embodiment" or "the embodiment" means "at least one embodiment".
Embodiments of the present disclosure will be described in detail below. As will be understood from the following description, one of the basic concepts of the present disclosure is as follows: by arranging a movable control for triggering the predefined action on the graphical user interface, the movable control can track the touch position as the user's finger slides on the touch screen, so that the predefined action can be conveniently executed using the movable control while the perspective of the scene is being rotated.
FIG. 1 shows a flow diagram of a method for human-computer interaction, according to an embodiment of the present disclosure. As shown in FIG. 1, the method for human-computer interaction may include: step 101, displaying a scene and a movable control for triggering a predefined action in the scene in a graphical user interface; step 102, detecting a touch event on a touch screen; step 103, in response to the touch event being a single-finger slide on the touch screen, changing the perspective of the scene and causing the movable control to track the trajectory along which the single finger slides on the touch screen; and step 104, in response to the touch event being a single-finger click of the movable control, triggering the predefined action in the scene.
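Purely as an illustration of the control flow in steps 101-104, the following TypeScript sketch dispatches a detected touch event either to a perspective change with control tracking or to the predefined action. The types and names (Point, MovableControl, TouchEvt, rotateCamera, firePredefinedAction) are assumptions introduced for this example and are not part of the disclosed method.

```typescript
// Illustrative sketch only; all names are assumptions, not part of the disclosure.

type Point = { x: number; y: number };

interface MovableControl {
  position: Point;
  contains(p: Point): boolean; // hit test against the control's current bounds
  moveTo(p: Point): void;      // reposition the control on the screen
}

type TouchKind = "swipe" | "click";

interface TouchEvt {
  kind: TouchKind;
  point: Point;   // current touch point
  delta?: Point;  // movement since the last frame (for swipes)
}

// Steps 101-104: dispatch a detected touch event.
function handleTouch(
  evt: TouchEvt,
  control: MovableControl,
  rotateCamera: (delta: Point) => void, // changes the perspective of the scene
  firePredefinedAction: () => void      // triggers the predefined action
): void {
  if (evt.kind === "swipe") {
    // Step 103: change the perspective and let the control track the finger.
    rotateCamera(evt.delta ?? { x: 0, y: 0 });
    control.moveTo(evt.point);
  } else if (evt.kind === "click" && control.contains(evt.point)) {
    // Step 104: a single-finger click on the control triggers the action.
    firePredefinedAction();
  }
}
```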
In one embodiment, the method may further include: in response to the touch event being a single-finger click on a region of the touch screen outside of the movable control, the movable control is moved to the touch point of the single-finger click on the touch screen.
In one embodiment, the method may further include: in response to the touch event being a single-finger long press of the movable control, continuously triggering the predefined action in the scene described above.
In one embodiment, the method may further include: in response to the touch event being a single-finger drag on the movable control, a predefined action is continuously triggered in the scene and the perspective of the scene is changed.
In one embodiment, the method may further include: displaying, in the graphical user interface, at least one fixed control for triggering a corresponding action in the scene.
In one embodiment, the movable control is located in a region of the graphical user interface corresponding to the right half of the touch screen.
In one embodiment, the scene is a first-person shooter game scene, and the predefined action is a shooting action or a scope-opening action.
The principles of the present disclosure will be specifically explained below using a first-person shooter game application as an example.
When a first-person shooter game application is launched on a mobile device, a first-person shooter game scene and a movable control that can respond to and track touches are displayed in the graphical user interface. When the movable control is touched, predefined actions may be triggered in the shooting game scene, such as the shooting action of a firearm without a sighting telescope, or the scope-opening, scope-closing, and shooting actions of a firearm with a sighting telescope. For example, touching the movable control triggers a shooting action when the character in the first-person shooter game scene is currently using a firearm without a sighting telescope, whereas touching the movable control triggers scope-opening, scope-closing, and shooting actions when the character is currently using a firearm with a sighting telescope.
A touch event is detected when it occurs within a responsive area of the touch screen (e.g., an area of the right half of the touch screen that does not contain any controls). After the touch event is detected, the specific type of the user's touch operation is identified. For example, when the user's touch operation is recognized as a single-finger click, the movable control automatically moves to the touch point of the finger, producing an intelligent following effect.
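This "intelligent following" behaviour could, for example, be handled as in the hypothetical sketch below, which reuses the Point and MovableControl types from the previous example; the responsive-area predicate and callback names are assumptions, not the disclosed implementation.

```typescript
// Hypothetical "intelligent following" handler (reuses Point and MovableControl above).
function onSingleFingerClick(
  point: Point,
  control: MovableControl,
  inResponsiveArea: (p: Point) => boolean, // e.g. the right half of the screen
  firePredefinedAction: () => void
): void {
  if (!inResponsiveArea(point)) {
    return; // touches outside the responsive area are ignored
  }
  if (control.contains(point)) {
    firePredefinedAction(); // clicking the control triggers the predefined action
  } else {
    control.moveTo(point);  // clicking elsewhere relocates the control to the finger
  }
}
```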
While the movable control is tracking the touch point, a single-finger sliding operation may be performed. The movable control then continues to track the sliding touch point. At the same time, the virtual camera lens is steered in the graphical user interface to follow the sliding direction, thereby turning the lens (i.e., changing the perspective of the game scene). In this way, the user can aim in the game with a single continuous one-finger operation.
The movable control may be touched directly when a shooting action of the firearm without the sighting telescope is desired. At this point, the coordinates at which the movable control (which may be regarded as a fire button) is currently drawn are determined and compared with the coordinates of the touched position. When the two overlap, a predefined action is generated that causes the weapon to start firing, thereby providing a single-operation firing function for the firearm without the sighting telescope.
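One possible form of this coordinate comparison is sketched below, assuming the control is drawn as an axis-aligned rectangle (the disclosure does not specify the control's shape); the names are illustrative only.

```typescript
// Hypothetical hit test: the control's drawn rectangle vs. the touched position.
interface Rect { x: number; y: number; width: number; height: number }

function touchHitsControl(touch: Point, bounds: Rect): boolean {
  return (
    touch.x >= bounds.x &&
    touch.x <= bounds.x + bounds.width &&
    touch.y >= bounds.y &&
    touch.y <= bounds.y + bounds.height
  );
}

// Example use: fire the unscoped weapon on a direct tap of the control.
// if (touchHitsControl(tapPoint, controlBounds)) { startFiring(); }
```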
When the movable control is touched and slid (i.e., dragged with a single finger), the user's operation is parsed into a slide and a long press. By processing attributes of the movement such as direction, speed, and acceleration, a predefined action of shooting while steering the virtual camera lens can be generated, so that the firearm without the sighting telescope can shoot while turning. Thus, for the firearm without the sighting telescope, aiming and shooting can be performed simultaneously through the movable control, i.e., the virtual button, which greatly simplifies the operation.
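For illustration only, such drag handling might decompose into a sustained-fire component and a camera-steering component as in the sketch below; the per-frame delta and the callback names are assumptions rather than the disclosed implementation.

```typescript
// Hypothetical drag handler: a drag on the control is treated as "long press + slide",
// so firing continues while the camera turns (reuses Point and MovableControl above).
function onControlDrag(
  delta: Point,                         // finger movement in this frame
  control: MovableControl,
  rotateCamera: (delta: Point) => void,
  keepFiring: () => void                // continuously triggers the shoot action
): void {
  keepFiring();        // long-press component: sustained fire
  rotateCamera(delta); // slide component: steer the virtual camera lens
  control.moveTo({
    x: control.position.x + delta.x,    // the control keeps tracking the finger
    y: control.position.y + delta.y,
  });
}
```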
When a sniper weapon with a sighting telescope is used, the state of the weapon used by the user is recorded and the operating mode is switched to a sniper mode. The sighting telescope can then be opened by directly touching the movable control, realizing the scope-opening function. At the same time, switching the operating mode to the sighting-telescope mode prevents the firearm from firing prematurely.
With the sighting telescope open, a sliding operation performed on the touch screen causes the virtual camera lens to rotate along with the slide, realizing the aiming function in the scope-open state.
With the sighting telescope open, releasing the movable control triggers a predefined action of closing the sighting telescope and shooting, realizing the scope-closing and shooting functions. Thus, for the firearm with the sighting telescope, scope opening, aiming, scope closing, and shooting can all be performed through the movable control, i.e., the virtual button, which greatly simplifies the operation.
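The scoped-weapon behaviour described above can be summarized as a small state machine: touching the control opens the scope, dragging aims, and releasing the control closes the scope and fires. The following sketch is a hypothetical rendering; the state names and callbacks are chosen purely for illustration.

```typescript
// Hypothetical state handling for a weapon with a sighting telescope
// (reuses the Point type from the earlier sketch).
type SniperState = "idle" | "scoped";

class ScopedWeaponController {
  private state: SniperState = "idle";

  constructor(
    private openScope: () => void,
    private closeScope: () => void,
    private fire: () => void,
    private rotateCamera: (delta: Point) => void
  ) {}

  onControlTouchDown(): void {
    if (this.state === "idle") {
      this.openScope();         // touching the control opens the scope
      this.state = "scoped";
    }
  }

  onControlDrag(delta: Point): void {
    if (this.state === "scoped") {
      this.rotateCamera(delta); // aim while the scope is open
    }
  }

  onControlTouchUp(): void {
    if (this.state === "scoped") {
      this.closeScope();        // releasing the control closes the scope...
      this.fire();              // ...and fires the shot
      this.state = "idle";
    }
  }
}
```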
Accordingly, the present disclosure also provides a method for human-computer interaction for operating a firearm without a sighting telescope in a first-person shooter game. The method comprises: displaying a first-person shooter game scene and a movable control for triggering a shooting action of a gun without a sighting telescope in the first-person shooter game scene in a graphical user interface; detecting a touch event on a touch screen; in response to the touch event being a single-finger swipe on the touch screen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touch screen; and triggering the shooting action in the first-person shooter game scene in response to the touch event being a touch of the movable control.
In addition, the present disclosure provides another method for human-computer interaction for operating a gun with a sighting telescope in a first-person shooter game. The method comprises: displaying a first-person shooter game scene and a movable control for triggering scope-opening, scope-closing, and shooting actions of a gun with a sighting telescope in the first-person shooter game scene in a graphical user interface; detecting a touch event on a touch screen; in response to the touch event being a single-finger swipe on the touch screen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touch screen; triggering the scope-opening action in the first-person shooter game scene in response to the touch event being a single-finger click of the movable control; triggering the scope-opening action in the first-person shooter game scene and changing the perspective of the first-person shooter game scene in response to the touch event being a single-finger drag of the movable control; and triggering the scope-closing action and the shooting action in the first-person shooter game scene in response to ceasing to touch the movable control.
In the above embodiments of the present disclosure, by providing a movable control for triggering a predefined action on the graphical user interface, the movable control can track the touch position as the user's finger slides on the touch screen, so that the predefined action can be conveniently performed using the movable control while the perspective of the scene is being rotated.
FIG. 2 shows a block diagram of an apparatus for human-computer interaction, according to an embodiment of the present disclosure. As shown in fig. 2, the apparatus for human-computer interaction may include: a display module 201 configured to display a scene and a movable control for triggering a predefined action in the scene in a graphical user interface; a detection module 202 configured to detect a touch event on a touch screen; a perspective conversion module 203 configured to change a perspective of the scene in response to the touch event being a single-finger swipe on the touch screen; a tracking module 204 configured to cause the movable control to track a trajectory through which the single-finger slides on the touch screen in response to the touch event being a single-finger slide on the touch screen; and a triggering module 205 configured to trigger a predefined action in the scene in response to the touch event being a single-finger click of the movable control.
In one embodiment, the tracking module 204 may be further configured to move the movable control to a touch point of a single-finger click on the touch screen in response to the touch event being a single-finger click on a region of the touch screen outside of the movable control.
In one embodiment, the triggering module 205 may be further configured to continuously trigger the predefined action in the scene in response to the touch event being a single-finger long press of the movable control.
In one embodiment, the triggering module 205 is further configured to continuously trigger the predefined action in the scene in response to the touch event being a single-finger drag on the movable control, and the perspective conversion module 203 is further configured to change the perspective of the scene in response to the touch event being a single-finger drag on the movable control.
In one embodiment, the display module 201 may be further configured to display at least one fixed control in the graphical user interface for triggering a corresponding action in the scene.
In one embodiment, the movable control is located in a region of the graphical user interface corresponding to the right half of the touch screen.
In one embodiment, the scene is a first-person shooter game scene, and the predefined action is a shooting action or a scope-opening action.
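Purely as an illustration, the five modules of FIG. 2 could be expressed as the following hypothetical interfaces (reusing the types from the earlier sketches); the disclosure does not prescribe any particular programming interface.

```typescript
// Hypothetical module interfaces for the apparatus of FIG. 2.
interface DisplayModule {
  showScene(scene: unknown, movableControl: MovableControl): void;
}

interface DetectionModule {
  onTouchEvent(handler: (evt: TouchEvt) => void): void;
}

interface PerspectiveConversionModule {
  changePerspective(delta: Point): void;
}

interface TrackingModule {
  track(control: MovableControl, point: Point): void;
}

interface TriggerModule {
  triggerPredefinedAction(): void; // triggers the predefined action in the scene
}
```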
The apparatus for human-computer interaction described in fig. 2 corresponds to the method for human-computer interaction shown in fig. 1. Therefore, the principle described above in connection with fig. 1 can be applied to the apparatus for human-computer interaction in fig. 2, and will not be described in detail herein.
In an embodiment of the present disclosure, a computer-readable storage medium may also be provided. The computer readable storage medium has computer readable program instructions stored thereon for performing the steps of the method described above.
In an embodiment of the present disclosure, an electronic device, such as a smart phone, a Mobile Internet Device (MID), a tablet computer, an Ultra Mobile Personal Computer (UMPC), a Personal Digital Assistant (PDA), a web pad, a handheld Personal Computer (PC), an interactive entertainment computer, a game terminal, and the like, may also be provided. The electronic device may comprise any of the means for human-computer interaction described above.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (18)
1. A method for human-computer interaction, comprising:
displaying a scene and a movable control for triggering a predefined action in the scene in a graphical user interface;
detecting a touch event on a touch screen;
in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen; and
triggering the predefined action in the scene in response to the touch event being a single-finger click of the movable control.
2. The method of claim 1, further comprising:
in response to the touch event being a single-finger click on a region of the touch screen outside of the movable control, moving the movable control to a touch point of the single-finger click on the touch screen.
3. The method of claim 1, further comprising:
in response to the touch event being a single-finger long press of the movable control, the predefined action is continuously triggered in the scene.
4. The method of claim 1, further comprising:
in response to the touch event being a single-finger drag on the movable control, the predefined action is continuously triggered in the scene and the perspective of the scene is changed.
5. The method of claim 1, further comprising:
displaying in the graphical user interface at least one fixed control for triggering a respective action in the scene.
6. The method of claim 1, wherein the movable control is located in an area of the graphical user interface corresponding to a right half-screen of the touch screen.
7. The method of any of claims 1-6, wherein the scene is a first-person shooter game scene, and wherein the predefined action is a shooting action or a scope-opening action.
8. An apparatus for human-computer interaction, comprising:
a display module configured to display a scene and a movable control for triggering a predefined action in the scene in a graphical user interface;
a detection module configured to detect a touch event on a touch screen;
a perspective conversion module configured to change a perspective of the scene in response to the touch event being a single-finger swipe on the touchscreen;
a tracking module configured to cause the movable control to track a trajectory through which the single-finger slide passes on the touchscreen in response to the touch event being a single-finger slide on the touchscreen; and
a trigger module configured to trigger the predefined action in the scene in response to the touch event being a single-finger click of the movable control.
9. The apparatus of claim 8, wherein the tracking module is further configured to move the movable control to a tap point of the single-finger tap on the touchscreen in response to the touch event being a single-finger tap on a region of the touchscreen outside of the movable control.
10. The apparatus of claim 8, wherein the triggering module is further configured to continuously trigger the predefined action in the scene in response to the touch event being a single-finger long press of the movable control.
11. The apparatus of claim 8, wherein,
the trigger module is further configured to, in response to the touch event being a single-finger drag of the movable control, persistently trigger the predefined action in the scene, and
the perspective conversion module is further configured to change a perspective of the scene in response to the touch event being a single-finger drag of the movable control.
12. The apparatus of claim 8, wherein the display module is further configured to display at least one fixed control in the graphical user interface for triggering a respective action in the scene.
13. The apparatus of claim 8, wherein the movable control is located in an area of the graphical user interface corresponding to a right half-screen of the touch screen.
14. The apparatus of any of claims 8-13, wherein the scene is a first-person shooter game scene, and wherein the predefined action is a shooting action or a scope-opening action.
15. A method for human-computer interaction, comprising:
displaying a first-person shooter game scene and a movable control for triggering a shooting action of a gun without a sighting telescope in the first-person shooter game scene in a graphical user interface;
detecting a touch event on a touch screen;
in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen; and
triggering the shooting action in the first-person shooter game scene in response to the touch event being a touch of the movable control.
16. A method for human-computer interaction, comprising:
displaying a first-person shooter game scene and a movable control for triggering scope-opening, scope-closing, and shooting actions of a gun with a sighting telescope in the first-person shooter game scene in a graphical user interface;
detecting a touch event on a touch screen;
in response to the touch event being a single-finger swipe on the touchscreen, changing a perspective of the first-person shooter game scene and causing the movable control to track a trajectory through which the single-finger swipe is made on the touchscreen;
triggering the scope-opening action in the first-person shooter game scene in response to the touch event being a single-finger click of the movable control;
triggering the scope-opening action in the first-person shooter game scene and changing a perspective of the first-person shooter game scene in response to the touch event being a single-finger drag on the movable control; and
triggering the scope-closing action and the shooting action in the first-person shooter game scene in response to ceasing to touch the movable control.
17. A computer readable storage medium having computer readable program instructions stored thereon for performing the steps of the method of any of claims 1-7 and 15-16.
18. A mobile electronic device comprising the apparatus of any of claims 8 to 14.
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1244071A (en) | 2018-07-27 |
| HK1244071A1 (en) | 2018-07-27 |