CN111603770B - Virtual environment picture display method, device, equipment and medium
- Publication number: CN111603770B (application CN202010437875.3A)
- Authority: CN (China)
- Prior art keywords
- virtual environment
- virtual
- virtual object
- wheel disc
- camera
- Prior art date
- Legal status: Active
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/525—Changing parameters of virtual cameras
- A63F13/5258—Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/303—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
Abstract
The application discloses a method, a device, equipment and a medium for displaying a virtual environment picture, and relates to the field of virtual environments. The method comprises the following steps: displaying a first virtual environment picture, wherein the first virtual environment picture is obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center; responsive to an adjustment instruction of the observation center, adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object; and displaying a second virtual environment picture, wherein the second virtual environment picture is obtained by observing the virtual environment by taking a second position relative to the first virtual object as an observation center. The method and the device can dynamically change the observation center of the camera model, thereby meeting the visual field requirement expected by the user.
Description
Technical Field
The embodiment of the application relates to the field of virtual environments, in particular to a method, a device, equipment and a medium for displaying a virtual environment picture.
Background
A battle game is a game in which a plurality of user accounts compete in the same scene. Optionally, the battle game may be a multiplayer online battle arena game (Multiplayer Online Battle Arena, MOBA).
In a typical MOBA game, there is a three-dimensional virtual environment in which a plurality of virtual objects belonging to two hostile camps are active, each camp aiming to occupy the other camp's stronghold. Each user uses a client to control one master virtual object in the three-dimensional virtual environment. The game picture displayed on any client is captured in the three-dimensional virtual environment by the camera model corresponding to that client's master virtual object. In general, the camera model captures the picture of the three-dimensional virtual environment with the master virtual object as the observation center, so the master virtual object is located at the center of the game picture.
The camera model has a limited field of view, which is not necessarily the optimal field of view desired by the user, so the information displayed on the game picture is limited.
Disclosure of Invention
The embodiment of the application provides a display method, a device, equipment and a medium of a virtual environment picture, which can enable a user to manually adjust the view field range observed by a camera model, so as to obtain the optimal view field range expected by the user. The technical scheme is as follows:
According to one aspect of the present application, there is provided a method for displaying a virtual environment screen, the method including:
displaying a first virtual environment picture, wherein the first virtual environment picture is obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
responsive to an adjustment instruction of the observation center, adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object;
and displaying a second virtual environment picture, wherein the second virtual environment picture is obtained by observing the virtual environment by taking a second position relative to the first virtual object as an observation center.
According to another aspect of the present application, there is provided a display apparatus of a virtual environment screen, the apparatus including:
the display module is used for displaying a first virtual environment picture, wherein the first virtual environment picture is obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
an adjustment module for adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to an adjustment instruction of the observation center;
The display module is configured to display a second virtual environment picture, where the second virtual environment picture is a picture obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
According to another aspect of the present application, there is provided a computer apparatus including a processor and a memory, in which at least one instruction, at least one program, a code set, or an instruction set is stored, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the display method of a virtual environment picture as described in the above aspect.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying a virtual environment picture described in the above aspect.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application include at least the following:
By responding to the adjustment instruction of the observation center, the observation center is modified from the first position relative to the first virtual object to the second position, so that a user can customize the observation center of the camera model, the optimal visual field range expected by the user is obtained, and more effective information is displayed in the virtual environment picture as much as possible.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a state synchronization technique provided by another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a frame synchronization technique provided by another exemplary embodiment of the present application;
FIG. 4 is an interface schematic diagram of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 5 is an interface schematic diagram of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of a camera model whose observation center is changed, provided by another exemplary embodiment of the present application;
FIG. 8 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 10 is a schematic illustration of a field-of-view adjustment control for adjusting the anchor point position of a camera model, provided by another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 12 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 13 is a schematic illustration of the adjustable range of the observation center of a camera model provided by another exemplary embodiment of the present application;
FIG. 14 is a schematic illustration of the calculation of the offset value of a rocker in the wheel disc area provided by another exemplary embodiment of the present application;
FIG. 15 is a schematic view of a dead zone region provided by another exemplary embodiment of the present application;
FIG. 16 is a flowchart of a method for displaying a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 17 is a side view of a camera model in a three-dimensional virtual environment provided by another exemplary embodiment of the present application;
FIG. 18 is a block diagram of a display device of a virtual environment picture provided by another exemplary embodiment of the present application;
FIG. 19 is a block diagram of a terminal provided by another exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present application will be briefly described:
Virtual environment: the virtual environment displayed (or provided) by an application when it runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional world, or a purely fictional three-dimensional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. Optionally, the virtual environment is further used for a battle between at least two virtual objects, and there are virtual resources available to the at least two virtual objects in the virtual environment. Optionally, the virtual environment includes a symmetric lower-left corner region and upper-right corner region; the virtual objects belonging to two hostile camps occupy one of the regions respectively, and each takes the target buildings/strongholds/bases/crystals deep in the opposing region as its victory goal.
Virtual object: refers to movable objects in a virtual environment. The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. Alternatively, when the virtual environment is a three-dimensional virtual environment, the virtual objects may be three-dimensional virtual models, each having its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment. Alternatively, the virtual object is a three-dimensional character constructed based on three-dimensional human skeleton technology, which implements different external figures by wearing different skins. In some implementations, the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited by embodiments of the present application.
Multiplayer online tactical competition: in a virtual environment, different virtual teams belonging to at least two hostile camps occupy respective map regions and compete with a certain victory condition as the goal. Such victory conditions include, but are not limited to, at least one of: occupying strongholds or destroying hostile strongholds, killing virtual objects of the hostile camp, ensuring one's own survival in a specified scene and time period, seizing a certain resource, and outscoring the opponent within a specified time. The tactical competition can be conducted in units of rounds, and the map of each round of tactical competition can be the same or different. Each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5.
MOBA game: a game in which several strongholds are provided in a virtual environment, and users in different camps control virtual objects to battle in the virtual environment, occupy strongholds, or destroy the hostile camp's strongholds. For example, a MOBA game may divide users into two hostile camps and disperse the user-controlled virtual objects in the virtual environment to compete with each other, with destroying or occupying all of the enemy's strongholds as the victory condition. A MOBA game is played in rounds, and the duration of one round lasts from the moment the game starts to the moment the victory condition is achieved.
User interface (UI) control: any visual control or element that can be seen on the user interface of an application, such as a picture, an input box, a text box, a button, or a label. Some UI controls respond to user operations; for example, a skill control is used to control the first virtual object to release a skill: the user triggers the skill control to control the first virtual object to release the skill.
FIG. 1 is a block diagram illustrating a computer system according to an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server 120, a second terminal 130.
The first terminal 110 has a client 111 supporting a virtual environment installed and running on it, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client may be any one of a military simulation program, a battle royale shooting game, a virtual reality (Virtual Reality, VR) application, an augmented reality (Augmented Reality, AR) program, a three-dimensional map program, a virtual reality game, an augmented reality game, a first-person shooting game (First-Person Shooting Game, FPS), a third-person shooting game (Third-Person Shooting Game, TPS), a multiplayer online battle arena game (Multiplayer Online Battle Arena, MOBA), and a strategy game (Strategy Game, SLG). In this embodiment, the client is exemplified as a MOBA game. The first terminal 110 is a terminal used by the first user 112. The first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment to perform activities, and the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated character or a cartoon character.
The second terminal 130 has a client 131 supporting a virtual environment installed and running on it, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a military simulation program, a battle royale shooting game, a VR application, an AR program, a three-dimensional map program, a virtual reality game, an augmented reality game, an FPS, a TPS, a MOBA, and an SLG; in this embodiment, the client is exemplified as a MOBA game. The second terminal 130 is a terminal used by the second user 113. The second user 113 uses the second terminal 130 to control a second virtual object located in the virtual environment to perform activities, and the second virtual object may be referred to as the master virtual object of the second user 113. Illustratively, the second virtual object is a second virtual character, such as a simulated character or a cartoon character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication rights. Optionally, the first virtual character and the second virtual character may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may refer generally to one of a plurality of terminals, and the second terminal 130 may refer generally to another of the plurality of terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments there are a plurality of other terminals 140 that can access the server 120. Optionally, there are one or more terminals 140 corresponding to the developer, a development and editing platform for supporting the client of the virtual environment is installed on the terminal 140, the developer can edit and update the client on the terminal 140, and transmit the updated client installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server 120 to implement the update of the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
Server 120 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for clients supporting a three-dimensional virtual environment. Optionally, the server 120 takes on primary computing work and the terminal takes on secondary computing work; alternatively, the server 120 takes on secondary computing work and the terminal takes on primary computing work; alternatively, a distributed computing architecture is used for collaborative computing between the server 120 and the terminals.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a combat service module 124, and a user-oriented input/output interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and to process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store the data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the avatar of a user account, the nickname of a user account, the combat index of a user account, and the service area where a user account is located; the combat service module 124 is configured to provide a plurality of combat rooms for users to battle in, such as 1V1 battles, 3V3 battles, 5V5 battles, and the like; the user-oriented I/O interface 125 is configured to establish communication and exchange data with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network.
The server 120 may employ synchronization techniques to make the picture presentation uniform among multiple clients. Exemplary synchronization techniques employed by the server 120 include: state synchronization techniques or frame synchronization techniques.
State synchronization technique
In an alternative embodiment based on fig. 1, the server 120 employs a state synchronization technique to synchronize with multiple clients. In the state synchronization technique, as shown in fig. 2, combat logic operates in a server 120. When a state change occurs in a certain virtual object in the virtual environment, the server 120 transmits a state synchronization result to all clients, such as clients 1 to 10.
In an illustrative example, the client 1 sends the server 120 a request for the virtual object 1 to release a frost skill. The server 120 determines whether the frost skill is allowed to be released and, if it is, what damage value it deals to another virtual object 2. The server 120 then sends the skill release result to all clients, and all clients update their local data and interface presentation based on the skill release result.
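For concreteness, the following is a minimal Python sketch of this state-synchronization flow; the class, the `can_release`/`compute_damage`/`send` methods, and the message format are illustrative assumptions, not part of the patent.

```python
class StateSyncServer:
    """Authoritative server: combat logic runs here (state synchronization)."""

    def __init__(self, clients):
        self.clients = clients  # connected client sessions (hypothetical objects)

    def handle_skill_request(self, caster, skill, target):
        # The server decides whether the skill may be released at all.
        if not caster.can_release(skill):
            return
        # ...and computes the authoritative damage value.
        damage = skill.compute_damage(caster, target)
        target.hp -= damage
        result = {"caster": caster.id, "skill": skill.id,
                  "target": target.id, "damage": damage}
        # The state change is broadcast to every client, which then updates
        # its local data and interface presentation.
        for client in self.clients:
            client.send(result)
```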
Frame synchronization technique
In an alternative embodiment based on fig. 1, the server 120 employs a frame synchronization technique to synchronize with multiple clients. In the frame synchronization technique, as shown in fig. 3, the combat logic runs in each client. Each client sends a frame synchronization request to the server, and the request carries the client's local data changes. After receiving a frame synchronization request, the server 120 forwards it to all clients. After each client receives the frame synchronization request, it processes the request according to its local combat logic and updates its local data and interface presentation.
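By contrast, a minimal sketch of frame synchronization might look like the following; again, the session objects and method names are assumptions for illustration.

```python
class FrameSyncServer:
    """Relay server: combat logic runs on the clients (frame synchronization)."""

    def __init__(self, clients):
        self.clients = clients

    def handle_frame_request(self, frame_request):
        # The server does not simulate anything; it only forwards the input
        # frame to all clients (including the sender).
        for client in self.clients:
            client.send(frame_request)


class FrameSyncClient:
    def __init__(self, combat_logic):
        self.combat_logic = combat_logic  # local, deterministic combat logic

    def on_frame_request(self, frame_request):
        # Every client applies the same inputs in the same order, so all
        # clients converge on the same local data and interface presentation.
        self.combat_logic.step(frame_request)
```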
In connection with the above description of the virtual environment and of the implementation environment, the method for displaying a virtual environment picture provided in the embodiments of the present application is now described, with the execution body exemplified as a client running on one of the terminals shown in fig. 1; the client is an application supporting the virtual environment.
Referring to fig. 4, during a competition based on the virtual environment, a user interface is displayed on the client. An exemplary user interface includes: a virtual environment picture 22 and a HUD (Head-Up Display) panel 24. The virtual environment picture 22 is a picture obtained by observing the virtual environment from the angle of view corresponding to the virtual object 26. The HUD panel 24 includes a plurality of human-machine interaction controls, such as a movement control, three or four skill release controls, an attack button, and the like.
Illustratively, each virtual object has a one-to-one corresponding camera model in the virtual environment. The virtual object 26 in fig. 4 corresponds to a camera model 28. The observation center (or focus) of the camera model 28 is the virtual object 26; it is the intersection of the camera model 28's central line of sight, along the viewing direction, with the virtual environment. An object located at the observation center appears at the center of the virtual environment picture captured by the camera model 28. As the virtual object 26 moves within the virtual environment, the three-dimensional coordinates of the camera model 28 in the virtual environment (also referred to as its anchor point) move following the virtual object 26. The camera model 28 has a lens height relative to the virtual object 26 and looks down at the virtual object 26 at an oblique angle (e.g., 45 degrees).
The frames captured by the camera model 28 in the virtual environment are the virtual environment frames 22 displayed on the client.
The embodiment of the application provides a scheme for dynamically changing the anchor point of the camera model 28, so that the visual field of the virtual environment picture 22 is dynamically changed, and the method and the device more meet the expectations of users.
In the example shown in fig. 5, the terminal displays a user interface including a virtual environment screen and a human-machine interaction control superimposed on the virtual environment screen. The man-machine interaction control comprises: a field of view adjustment control 32, a direction of movement control 34, a skill release control 36, and an attack control 38, as shown in fig. 5 (a). By default, the virtual environment frame is a frame acquired in the virtual environment by the camera model 28 with the first location where the virtual object 26 is located as the viewing center. When the user wishes to change the field of view, the field of view adjustment control 32 is pressed. The field of view adjustment control 32 changes from a smaller eye display configuration to a larger wheel display configuration, as shown in fig. 5 (b). In the wheel display configuration, the field of view adjustment control 32 includes a wheel region and a rocker located at the center of the wheel region. Wherein the rocker corresponds to an anchor position of the camera model 28 and the wheel area corresponds to an adjustable range of anchor positions of the camera model 28. Illustratively, the adjustable range is a circular range.
Illustratively, as the user drags the rocker to change position in the wheel area, the anchor point of the camera model 28 in the virtual environment will also change. After changing the anchor point position, the viewing center of the camera model 28 will switch from a first position where the virtual object 26 is located to a second position where there is an offset with respect to the virtual object 26. The positional relationship of the second position with respect to the first position corresponds to the positional relationship of the rocker with respect to the center position of the wheel disc area, as shown in fig. 5 (c).
When the user's finger is released, the camera model 28 captures pictures in the virtual environment with the second position, offset from the virtual object 26, as the observation center, as shown in fig. 5 (d). Since the location of the virtual object 26 may change at any time, the second location, which has an offset relative to the virtual object 26, changes along with it.
The operation of the whole process may include the steps of:
Clicking the rocker:
The user clicks the rocker on the view adjustment control. When the user's finger presses down, the anchor point position of the camera model returns to the default position. The default position is the anchor point position at which the user-controlled first virtual object is at the center of the screen.
Dragging the rocker:
1. When the user drags the rocker on the view adjustment control, the anchor point position of the camera model shifts from the default position. After the user's finger is lifted, the anchor point position of the camera model stays at the last offset position.
2. What dragging the rocker changes is the relative position between the camera model's anchor point and the first virtual object; that is, the anchor point position of the camera model is bound to the first virtual object. When the first virtual object moves, the anchor point position of the camera model moves with it.
Fig. 6 is a flowchart illustrating a method for displaying a virtual environment screen according to an exemplary embodiment of the present application. The embodiment is exemplified by the method applied to the client. The method comprises the following steps:
the first virtual object is a virtual object controlled by a client, but the possibility that the first virtual object is controlled by other clients or artificial intelligence modules is not excluded. The client controls the activities of the first virtual object in the virtual environment according to the received user operation (or man-machine operation). Illustratively, the activity of the first virtual object in the virtual environment includes: at least one of walking, running, jumping, climbing, lying down, attacking, releasing skills, picking up props, and sending messages.
The first virtual environment screen is a screen obtained by observing the virtual environment by the camera model with the first position relative to the first virtual object as an observation center. Optionally, the camera model also has a lens height with respect to the first virtual object. The virtual environment picture is a two-dimensional picture displayed on the client after the picture collection of the three-dimensional virtual environment. The shape of the virtual environment screen is illustratively determined according to the shape of the display screen of the terminal or according to the shape of the user interface of the client. Taking the example that the display screen of the terminal is rectangular, the virtual environment screen is also displayed as a rectangular screen.
A camera model bound with the first virtual object is arranged in the virtual environment. The first virtual environment screen is a screen captured by the camera model with a certain observation position in the virtual environment as an observation center. When the first virtual environment picture is acquired, the observation center is a first position relative to the first virtual object. For example, the first location is where the first virtual object is located. Taking the example that the first virtual environment picture is a rectangular picture, the intersection point of the diagonal lines of the rectangle in the first virtual environment picture is the object positioned in the observation center.
In general, the camera model bound to the first virtual object takes the position of the first virtual object as an observation center, and the position of the first virtual object in the virtual environment is the first position. When the virtual environment is a three-dimensional virtual environment, the observation center is the three-dimensional coordinates of the first virtual object in the virtual environment. For example, if the ground in the virtual environment is a horizontal plane, the height coordinate of the observation center is 0, and the observation center may be approximately expressed as a two-dimensional coordinate on the horizontal plane.
the adjustment instruction is an instruction for adjusting the observation center. The adjustment instruction is triggered by the user, but does not exclude the possibility that the adjustment instruction is triggered automatically by artificial intelligence (Artificial Intelligence, AI) in the terminal, or sent by the server.
When the terminal is an electronic device with a touch screen, the adjustment instruction may be an instruction triggered by a touch operation on the touch screen; when the terminal is an electronic device with a gamepad peripheral, the adjustment instruction may be an instruction triggered by a physical key on the gamepad; when the terminal is a VR or AR device, the adjustment instruction may be an instruction triggered by the user's eyes or voice. The triggering manner of the adjustment instruction is not limited in the embodiments of the present application.
The first location is, for example, a location where the first virtual object is located, or the first location is a location where there is a first offset relative to the location where the first virtual object is located.
The second location is, for example, a location at which there is a second offset relative to the location at which the first virtual object is located. Typically, the first and second positions are different.
Since the position of the first virtual object is dynamically changed, the second position relative to the first virtual object is also changed following the change in the position of the first virtual object.
since the observation centers of the first virtual environment screen and the second virtual environment screen are different, the fields of view of the first virtual environment screen and the second virtual environment screen are different.
As schematically shown in fig. 7, in the first virtual environment picture, the center of the picture is the first position 72 relative to the first virtual object; the first position 72 is where the first virtual object is located, so the first virtual object sits at the center of the picture. In the second virtual environment picture, the picture center is the second position 74 relative to the first virtual object; the second position 74 has a second offset toward the upper right relative to the first virtual object, so the first virtual object now sits to the lower left of the picture center, the user's field of view to the upper right of the first virtual object is enlarged, and the field of view to the lower left is reduced.
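To make the geometry concrete, here is a tiny Python sketch with made-up coordinates (none of these numbers come from the patent): the picture center moves to the offset observation center, so the object appears displaced in the opposite direction on screen.

```python
# First position: where the first virtual object stands (made-up numbers).
first_position = (10.0, 20.0)
# Second offset: toward the upper right of the first virtual object.
second_offset = (3.0, 4.0)

# The observation center (= picture center) moves to the second position...
observation_center = (first_position[0] + second_offset[0],
                      first_position[1] + second_offset[1])
# ...so on screen the object shifts the opposite way, to the lower left of
# the picture center, enlarging the upper-right field of view.
object_relative_to_center = (-second_offset[0], -second_offset[1])

print(observation_center, object_relative_to_center)
```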
In summary, in the method provided in this embodiment, by modifying the observation center from the first position relative to the first virtual object to the second position in response to the adjustment instruction of the observation center, the user may customize the observation center of the camera model, thereby obtaining an optimal field of view that satisfies the user's own expectations, and displaying more effective information in the virtual environment image as much as possible.
The adjustment instruction may be triggered by a drag operation by a user on a view adjustment control displayed on the touch display screen.
Fig. 8 is a flowchart illustrating a method for displaying a virtual environment screen according to another exemplary embodiment of the present application. The method may be performed by the client shown in fig. 1. The method comprises the following steps:
the client displays a first user interface, the first user interface comprising: a first virtual environment screen 90 acquired by the camera model, and a view adjustment control 32 superimposed on the first virtual environment screen.
A camera model bound to the first virtual object is arranged in the virtual environment. The first virtual environment picture is captured by the camera model with the position of the first virtual object as the observation center, and the position of the first virtual object in the virtual environment is the first position.
Illustratively, when the virtual environment is a three-dimensional virtual environment, the observation center is a three-dimensional coordinate of the first virtual object in the virtual environment. For example, if the ground in the virtual environment is a horizontal plane, the height coordinate of the observation center is 0, and the observation center may be approximately expressed as a two-dimensional coordinate on the horizontal plane.
Optionally, the view adjustment control 32 includes: a wheel disc area a and a rocker b. The wheel disc area a is a circular area, and the rocker b is a circular button whose area is smaller than that of the wheel disc area a. When no drag operation is received, the rocker b is displayed at the center of the wheel disc area a. Upon receiving a drag operation from the user, the rocker b can change position within the circular area of the wheel disc area a.
The rocker b corresponds to the observation center of the camera model (i.e., the anchor point position of the camera model); the center of the wheel disc area a corresponds to the default observation center of the camera model, for example, the position where the first virtual object is located; and the circular range of the wheel disc area a corresponds to the adjustable range of the observation center of the camera model.
Because the wheel disc area a corresponds to the default observation center of the camera model and its circular range corresponds to the adjustable range of the observation center, after receiving the user's drag operation the client, in response to a drag instruction triggered when the rocker is dragged in the wheel disc area, adjusts the observation center from the first position relative to the first virtual object to the second position relative to the first virtual object according to the position of the dragged rocker in the wheel disc area. Specifically, steps S1 to S6 are as follows, as shown in fig. 9:
S1. In response to a drag instruction triggered when the rocker is dragged in the wheel disc area, calculate the wheel disc lateral offset value and the wheel disc longitudinal offset value of the dragged rocker's position in the wheel disc area relative to the center position of the wheel disc area.
Referring to fig. 10, assume the position of the dragged rocker in the wheel disc area is P2 and the center position of the wheel disc area is P1. The wheel disc lateral offset value is (P2-P1).x, the projected length of (P2-P1) in the horizontal direction, and the wheel disc longitudinal offset value is (P2-P1).z, the projected length of (P2-P1) in the vertical direction.
The wheel disc lateral offset value and the wheel disc longitudinal offset value are two-dimensional offsets on the UI layer, while the camera model is in a three-dimensional virtual environment, so the two-dimensional offsets need to be mapped to three-dimensional offsets.
Referring to fig. 10, an x-axis, a y-axis, and a z-axis are provided in the three-dimensional virtual environment, where the y-axis corresponds to the lens height of the camera model. In this embodiment, the wheel disc lateral offset value corresponds to the offset of the camera model on the x-axis, and the wheel disc longitudinal offset value corresponds to the offset of the camera model on the z-axis; neither affects the position of the camera model on the y-axis. That is, the lens height of the camera model is a fixed value or is adjusted otherwise, independently of the adjustment logic in this embodiment.
S2. Determine a first camera offset value according to the wheel disc lateral offset value, and determine a second camera offset value according to the wheel disc longitudinal offset value.
The first camera offset value is the offset of the camera model on the x-axis, and the second camera offset value is the offset of the camera model on the z-axis. The x-axis and the z-axis are two coordinate axes parallel to the ground plane in the virtual environment, with the x-axis perpendicular to the z-axis.
Optionally, the wheel disc lateral offset value and the first camera offset value are positively correlated, and the wheel disc longitudinal offset value and the second camera offset value are positively correlated. For example, the wheel disc lateral offset value is proportional to the first camera offset value, and the wheel disc longitudinal offset value is proportional to the second camera offset value.
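A minimal sketch of steps S1 and S2 follows, assuming a simple proportional mapping; scale_x and scale_z are illustrative scale factors, not values specified by the patent.

```python
def wheel_to_camera_offsets(p2, p1, scale_x, scale_z):
    """Map the rocker's 2D offset on the UI layer to camera-model offsets."""
    wheel_dx = p2[0] - p1[0]   # S1: wheel disc lateral offset, (P2-P1).x
    wheel_dz = p2[1] - p1[1]   # S1: wheel disc longitudinal offset, (P2-P1).z
    first_camera_offset = wheel_dx * scale_x    # S2: offset on the world x-axis
    second_camera_offset = wheel_dz * scale_z   # S2: offset on the world z-axis
    return first_camera_offset, second_camera_offset
```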
S3. Calculate the first anchor point position of the camera model corresponding to the first virtual object, taking the position of the first virtual object in the virtual environment as the reference.
The first anchor point position moves following the movement of the first virtual object's position.
Optionally, the first anchor point position is the three-dimensional coordinate of the camera model in the virtual environment when the observation center of the camera model is the first virtual object.
S4. Offset the first anchor point position according to the first camera offset value and the second camera offset value to calculate the second anchor point position of the camera model.
Specifically, offset the first anchor point position on the x-axis according to the first camera offset value and on the z-axis according to the second camera offset value, thereby obtaining the second anchor point position of the camera model.
S5. Move the camera model according to the second anchor point position; the observation center of the offset camera model is the second position relative to the first virtual object.
S6. In response to a change in the position of the first virtual object in the virtual environment, perform again the step of offsetting the first anchor point position according to the first camera offset value and the second camera offset value to calculate the second anchor point position of the camera model.
Since the first virtual object can move in the three-dimensional virtual environment, whenever its position in the three-dimensional virtual environment changes, the client re-performs the step of calculating the second position relative to the first virtual object, based on the first virtual object's current position in the virtual environment.
The observation center of the camera model (the second position relative to the first virtual object) therefore keeps changing dynamically, following the position of the first virtual object.
Before the current adjustment ends and the next adjustment (manual or automatic) starts, the observation center of the camera model remains at the second position; that is, the camera model continuously captures pictures with the second position relative to the first virtual object as the observation center.
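A per-frame sketch of steps S3 to S6 is shown below; the Vec3 type and the default_anchor_for helper are illustrative placeholders (the actual follow logic, with lens height and look-down angle, is detailed with fig. 17 later).

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def default_anchor_for(obj_pos: Vec3) -> Vec3:
    # Placeholder follow logic: the real version adds the lens height and
    # look-down angle, as described with fig. 17.
    return Vec3(obj_pos.x, obj_pos.y, obj_pos.z)

def update_camera_anchor(obj_pos: Vec3, camera_dx: float, camera_dz: float) -> Vec3:
    anchor = default_anchor_for(obj_pos)  # S3: first anchor point position
    anchor.x += camera_dx                 # S4: offset on the x-axis
    anchor.z += camera_dz                 # S4: offset on the z-axis; y untouched
    return anchor                         # S5: second anchor point position

# S6: call update_camera_anchor again whenever the first virtual object's
# position changes, so the offset observation center keeps following it.
```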
After the observation center is adjusted to the second position, the camera model observes the virtual environment with the second position relative to the first virtual object as the observation center, obtaining the second virtual environment picture. The client also superimposes the view adjustment control on the second virtual environment picture, thereby displaying a second user interface.
Optionally, if the view adjustment control is opaque, the occluded area of the second virtual environment picture need not be displayed; if the view adjustment control is transparent or semi-transparent, the occluded area of the second virtual environment picture and the view adjustment control are blended into a fused image for display.
Since the observation centers of the first virtual environment screen and the second virtual environment screen are different, the fields of view of the first virtual environment screen and the second virtual environment screen are different.
In summary, according to the method provided by the embodiment, the offset information of the camera model in the three-dimensional environment is calculated by adopting the wheel disc offset information of the rocker on the two-dimensional plane, so that the rocker change of the visual field adjustment control and the three-dimensional offset of the camera model are in positive correlation, and the operation effect of 'what you see is what you get' is achieved.
In an alternative embodiment based on fig. 8, the drag instruction includes at least two sub-instructions ordered in time. A typical drag instruction includes: a touch start sub-instruction, a plurality of touch move sub-instructions, and a touch end sub-instruction, ordered by time, where each sub-instruction carries the user's real-time touch coordinates on the touch screen. The touch start sub-instruction may be considered the first sub-instruction, and the touch end sub-instruction the last sub-instruction. As shown in fig. 11, the method further includes:
In step 804, in response to receiving the first sub-instruction in the drag instruction, the view adjustment control is switched from the default display form to the wheel display form. The display area of the default display form is smaller than that of the wheel display form.
Alternatively, the default display form is a smaller eye button. The smaller eye buttons do not obscure too much of the screen content on the second virtual environment screen.
In step 805, in response to receiving the last sub-instruction in the drag instruction, the view adjustment control is switched from the wheel display form to the default display form.
In summary, according to the method provided by the embodiment, by providing two display modes for the visual field adjustment control, the visual field adjustment control in the default display mode has a smaller display area, so that the influence on the picture content in the virtual environment picture can be reduced.
In an alternative embodiment based on fig. 8, the observation center of the camera model may be adjusted multiple times while the user plays a single game. In order to ensure the operational consistency of each adjustment process, as shown in fig. 12, the method further includes:
In step 803b, in response to receiving the first sub-instruction in the drag instruction, the observation center is reset to the location of the first virtual object in the virtual environment.
When the client receives a drag operation on the view adjustment control, a drag instruction on the view adjustment control is triggered. The drag instruction includes: a touch start sub-instruction, a plurality of touch move sub-instructions, and a touch end sub-instruction, ordered by time, where each sub-instruction carries the user's real-time touch coordinates on the touch screen.
In response to receiving the touch start sub-instruction, the client resets the observation center to the location of the first virtual object in the virtual environment. That is, the client resets the coordinates of the camera model in the virtual environment to the first anchor point location.
In summary, the method provided in this embodiment resets the observation center to the position where the first virtual object is located in the virtual environment when the drag instruction on the visual field adjustment control is received, so that the user can obtain a consistent operation feeling when adjusting the observation center each time, and thus the observation center can be quickly and accurately adjusted to the desired position.
In an alternative embodiment, the adjustment range of the viewing center of the camera model is limited.
As shown in fig. 13, the client sets maximum offset values of the observation center of the camera model in four directions of up, down, left, and right: UP, DOWN, LEFT, RIGHT. UP is the maximum offset value in the upper direction, DOWN is the maximum offset value in the lower direction, LEFT is the maximum offset value in the LEFT direction, and RIGHT is the maximum offset value in the RIGHT direction.
When the view adjustment control is implemented as a wheel disc control, the wheel disc control includes a wheel disc area and a rocker located on the wheel disc area. The position of the rocker on the wheel disc area (its offset relative to the wheel disc center) corresponds to the observation center of the camera model (its offset relative to the first virtual object). The circular range of the wheel disc area corresponds to the maximum adjustable range of the observation center: each position on the wheel disc area maps one-to-one to a position within the maximum adjustable range of the observation center in the virtual environment.
In an alternative embodiment, the anchor point position of the camera model is calculated as follows:
As shown in fig. 14, the angle between the rocker and the horizontal direction is α, the distance between the rocker and the center of the wheel disc area is a, and the radius of the wheel disc area is B. If the rocker is offset toward the upper left, the anchor point position of the camera model is offset from the default anchor point position toward the upper left at the same angle.
Left offset of the camera model = cos α × (a/B) × LEFT;
Upward offset of the camera model = sin α × (a/B) × UP.
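A minimal Python sketch of these two formulas, assuming α is given in radians and a, B, LEFT, and UP are as defined above:

```python
import math

def rocker_to_camera_offset(alpha, a, b, left_max, up_max):
    """alpha: rocker angle from the horizontal; a: rocker distance from the
    wheel disc center; b: wheel disc radius; left_max/up_max: LEFT and UP."""
    ratio = a / b  # fraction of full deflection, in [0, 1]
    left_offset = math.cos(alpha) * ratio * left_max
    up_offset = math.sin(alpha) * ratio * up_max
    return left_offset, up_offset
```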
In an alternative embodiment, three variables are first defined as follows:
CameraOffset: the offset value of the observation center of the camera model relative to the first virtual object. The control module of the camera model reads this value and adds the offset to the logic by which the camera model follows the first virtual object, thereby achieving the offset of the observation center of the camera model.
IsMoved: used to distinguish whether the user's operation on the rocker is a click operation or a drag operation. It is set to false when the user presses the rocker button, and set to true when the user's drag distance on the rocker exceeds the dead zone area.
DeadZone (dead zone): configured by the game planner; a drag-distance threshold in the wheel disc area used to distinguish whether a touch operation on the rocker is a click operation or a drag operation. Illustratively, the edge of the dead zone is shown as the dotted circle 40 in fig. 15.
As shown in fig. 16, the method for displaying a virtual environment screen includes the steps of:
The user presses the rocker of the view adjustment control, triggering a touch start event on the touch screen. The touch start event carries the two-dimensional coordinates of the pressed location on the touch screen.
The client resets the camera model's offset value CameraOffset to zero, clearing the offset value left over from the previous adjustment.
The client sets the value of IsMoved to false.
The user drags the rocker of the view adjustment control, continuously triggering touch move events on the touch screen (at the touch reporting frequency). Each touch move event carries the two-dimensional coordinates of the user's touch location on the touch screen.
The client records the two-dimensional coordinate of the dragged rocker on the touch screen as P2.
P1 is the two-dimensional coordinate of the center point of the wheel disc area on the touch screen.
While the user drags the rocker, the client determines whether the displacement |P2-P1| of the dragged rocker position relative to the center point of the wheel disc area reaches the dead zone threshold, i.e., whether |P2-P1| is greater than or equal to DeadZone.
If the drag distance reaches the dead zone threshold, step 1607 is performed; if the drag distance does not reach the dead zone threshold, IsMoved = false is maintained.
In step 1607, the client sets the value of IsMoved to true.
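The press-versus-drag logic above can be sketched as follows; the class and method names are illustrative assumptions, with touch points taken to be (x, y) tuples.

```python
import math

class RockerState:
    def __init__(self, wheel_center, dead_zone):
        self.p1 = wheel_center           # center point P1 of the wheel disc area
        self.dead_zone = dead_zone       # planner-configured DeadZone threshold
        self.camera_offset = (0.0, 0.0)  # CameraOffset
        self.is_moved = False            # IsMoved

    def on_touch_start(self, _p2):
        # Pressing the rocker clears the previous offset and resets IsMoved.
        self.camera_offset = (0.0, 0.0)
        self.is_moved = False

    def on_touch_move(self, p2):
        # The operation counts as a drag only once |P2 - P1| reaches the
        # dead zone threshold; otherwise it is still treated as a click.
        if math.hypot(p2[0] - self.p1[0], p2[1] - self.p1[1]) >= self.dead_zone:
            self.is_moved = True
```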
In step 1608, the offset value of the camera model is calculated from the vector (P2-P1).
Firstly, the client reads five preset values:
LEFT: maximum offset value in the left direction;
RIGHT: maximum offset value in the right direction;
DOWN: maximum offset value in the lower direction;
UP: maximum offset value in the upper direction;
max_radius: maximum drag distance of the rocker in the wheel area.
Because the user interface (UI) layer is a two-dimensional plane, (P2-P1) is a two-dimensional vector, while the anchor point position of the camera model is in a three-dimensional scene, so the offset vector of the camera model is a three-dimensional vector. Assuming the lens height of the camera model remains unchanged, the y offset component (the coordinate component corresponding to the lens height) of the camera model's anchor point position is fixed at 0, and the lens offset is then calculated in the following steps:
1. Calculate the length of (P2-P1), denoted length.
2. The formula for the offset value cameraOffset.x of the camera model on the x-axis (lateral direction) is as follows:
If (p2-p1).x>0, the camera is offset to the right:
cameraOffset.x=(p2-p1).x/MAX_RADIUS*RIGHT;
otherwise, the camera is offset to the left:
cameraOffset.x=(p2-p1).x/MAX_RADIUS*LEFT.
3. The formula for the offset value cameraOffset.z of the camera model on the z-axis (longitudinal direction) is as follows:
If (p2-p1).y>0, the camera is offset upward:
cameraOffset.z=(p2-p1).y/MAX_RADIUS*UP;
otherwise, the camera is offset downward:
cameraOffset.z=(p2-p1).y/MAX_RADIUS*DOWN.
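A direct Python transcription of the two formulas above, assuming dx and dy are the components of the 2D vector (p2-p1) on the UI layer and the five preset values have been read from configuration:

```python
def compute_camera_offset(dx, dy, max_radius, left, right, up, down):
    # x-axis (lateral): positive dx means an offset to the right.
    offset_x = dx / max_radius * (right if dx > 0 else left)
    # z-axis (longitudinal): positive dy means an offset upward.
    offset_z = dy / max_radius * (up if dy > 0 else down)
    # The y component (lens height) stays 0, as stated above.
    return (offset_x, 0.0, offset_z)
```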
Referring to fig. 17 in combination, the client typically configures height as the height distance between the first virtual object Actor and the camera model, and configures angle as the downward-looking inclination angle of the camera model. The client then calculates the default anchor point position of the camera model from the current position of the first virtual object. Since, by the game's design, the camera does not change relative to the first virtual character on the x-axis, only the y-axis and z-axis need to be computed. The calculation formula for the camera model (following the first virtual object) in the virtual environment is as follows:
cameraPos.x=ActorPos.x;
cameraPos.y=ActorPos.y+height*cos(angle);
cameraPos.z=ActorPos.z-height*sin(angle);
After the default anchor point position of the camera model is calculated, the offset value (cameraOffset.x, cameraOffset.z) calculated above is added to it to obtain the final anchor point position of the camera model, thereby achieving the lens offset effect on the camera model:
cameraPos=cameraPos+cameraOffset。
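Combining the follow formulas with the lens offset, a minimal Python sketch (assuming actor_pos is an (x, y, z) tuple, angle is in radians, and camera_offset is the (x, 0, z) tuple computed from the rocker):

```python
import math

def final_camera_anchor(actor_pos, height, angle, camera_offset):
    # Default anchor position, following the first virtual object.
    cam = (actor_pos[0],                             # cameraPos.x = ActorPos.x
           actor_pos[1] + height * math.cos(angle),  # cameraPos.y
           actor_pos[2] - height * math.sin(angle))  # cameraPos.z
    # cameraPos = cameraPos + cameraOffset
    return tuple(c + o for c, o in zip(cam, camera_offset))
```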
The arrangement of the x-axis, the y-axis and the z-axis in the three-dimensional virtual environment is shown in fig. 4.
Fig. 18 is a block diagram of a display device of a virtual environment screen according to an exemplary embodiment of the present application. The device comprises:
a display module 1820, configured to display a first virtual environment screen, where the first virtual environment screen is a screen obtained by observing the virtual environment with a first position relative to a first virtual object as an observation center;
an adjustment module 1840 for adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to an adjustment instruction of the observation center;
the display module 1820 is configured to display a second virtual environment screen, where the second virtual environment screen is a screen obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
In an optional embodiment of the present application, a field of view adjustment control is displayed on the first virtual environment screen, where the field of view adjustment control includes a rocker and a wheel disc area, and the rocker is located in the wheel disc area;
The adjustment module 1840 is further configured to adjust the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object according to a position of the dragged rocker in the wheel disc area in response to a drag instruction triggered when the rocker is dragged in the wheel disc area.
In an optional embodiment of the present application, the adjustment module 1840 is further configured to calculate, in response to a drag instruction triggered when the rocker is dragged in the wheel disc area, a wheel disc lateral offset value and a wheel disc longitudinal offset value of the position of the rocker in the wheel disc area after dragging relative to the center position of the wheel disc area;
determining a first camera offset value according to the wheel disc lateral offset value, and determining a second camera offset value according to the wheel disc longitudinal offset value;
calculating a first anchor point position of a camera model corresponding to the first virtual object by taking the position of the first virtual object in the virtual environment as a reference;
shifting the first anchor point position according to the first camera shifting value and the second camera shifting value, and calculating to obtain a second anchor point position of the camera model;
And offsetting the camera model according to the second anchor point position, wherein the observation center of the offset camera model is a second position relative to the first virtual object.
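As a usage illustration of this module flow, a drag event could drive the two sketches above roughly as follows; all concrete values are made up for the example, and the helpers camera_offset() and camera_anchor() are the hypothetical functions defined earlier.

```python
import math

# Illustrative values only; reuses camera_offset() and camera_anchor()
# from the sketches above.
p1 = (540.0, 1700.0)   # assumed wheel disc center on the UI, in pixels
p2 = (600.0, 1640.0)   # assumed rocker position after the drag
offset = camera_offset(p1, p2)                       # wheel drag -> camera offset
anchor = camera_anchor(actor_pos=(10.0, 0.0, 4.0),   # first virtual object
                       height=12.0,
                       angle=math.radians(40.0),
                       cam_offset=offset)
# The camera model is then moved to `anchor`; its observation center is now
# the second position relative to the first virtual object.
```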
In an alternative embodiment of the present application, the wheel disc lateral offset distance and the first camera offset value are in positive correlation;
and the longitudinal offset distance of the wheel disc and the offset value of the second camera are in positive correlation.
In an optional embodiment of the present application, the adjustment module 1840 is further configured to, in response to a change in the position of the first virtual object in the virtual environment, execute the calculation again with the position of the first virtual object in the virtual environment as a reference, and obtain the second position relative to the first virtual object according to the offset direction and the offset distance.
In an optional embodiment of the present application, the drag instruction includes at least two sub-instructions ordered in time; the adjustment module 1840 is further configured to reset the observation center to the position of the first virtual object in the virtual environment in response to receiving a first sub-instruction of the drag instruction.
In an optional embodiment of the present application, the drag instruction includes at least two sub-instructions ordered in time; the display module 1820 is further configured to switch the view adjustment control from a default display form to a wheel disc display form in response to receiving a first sub-instruction of the drag instruction, where the wheel disc display form includes the rocker and the wheel disc region;
and the display area of the default display form is smaller than that of the wheel disc display form.
It should be noted that, in the display device for a virtual environment picture provided in the above embodiment, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the display device of the virtual environment picture provided in the above embodiment belongs to the same concept as the embodiments of the display method of the virtual environment picture; for its detailed implementation process, refer to the method embodiments, which are not repeated here.
The application also provides a computer device (terminal or server), which comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to realize the display method of the virtual environment picture provided by each method embodiment. It should be noted that the computer device may be a computer device as provided in fig. 19 below.
Fig. 19 shows a block diagram of a computer device 1900 according to an exemplary embodiment of the present application. The computer device 1900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 1900 may also be referred to as a user device, a portable computer device, a laptop computer device, a desktop computer device, or the like.
Generally, the computer device 1900 includes: a processor 1901 and a memory 1902.
In some embodiments, computer device 1900 may optionally further comprise: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1904, touch display 1905, camera 1906, audio circuitry 1907, positioning assembly 1908, and power supply 1909.
The radio frequency circuit 1904 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1904 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, a metropolitan area network, an intranet, mobile communication networks of various generations (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 1904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 1905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1905 is a touch display, the display 1905 also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 1901 as a control signal for processing. In this case, the display 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1905, disposed on the front panel of the computer device 1900; in other embodiments, there may be at least two displays 1905, each disposed on a different surface of the computer device 1900 or in a folded design; in still other embodiments, the display 1905 may be a flexible display disposed on a curved or folded surface of the computer device 1900. The display 1905 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 1905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1906 is used to capture images or video. Optionally, the camera assembly 1906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the computer device and the rear camera is disposed on the rear surface of the computer device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and Virtual Reality (VR) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1901 for processing, or inputting the electric signals to the radio frequency circuit 1904 for realizing voice communication. For purposes of stereo acquisition or noise reduction, multiple microphones may be provided at different locations of computer device 1900, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 1907 may also include a headphone jack.
The positioning component 1908 is used to locate the current geographic location of the computer device 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 1909 is used to power the various components in the computer device 1900. The power supply 1909 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyroscope sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.
Acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with computer device 1900. For example, the acceleration sensor 1911 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1901 may control the touch display 1905 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by the acceleration sensor 1911. Acceleration sensor 1911 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1912 may detect the body direction and the rotation angle of the computer device 1900, and the gyro sensor 1912 may cooperate with the acceleration sensor 1911 to collect 3D actions of the user on the computer device 1900. The processor 1901 may implement the following functions based on the data collected by the gyro sensor 1912: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1913 may be disposed on a side border of computer device 1900 and/or an underlying layer of touch display 1905. When the pressure sensor 1913 is disposed on a side frame of the computer device 1900, a user's grip signal on the computer device 1900 may be detected, and the processor 1901 may perform left-right hand recognition or shortcut operation based on the grip signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display screen 1905, the processor 1901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1914 is used to collect a user's fingerprint, and the processor 1901 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 identifies the user's identity based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1914 may be disposed on the front, back, or side of the computer device 1900. When a physical key or vendor logo is provided on the computer device 1900, the fingerprint sensor 1914 may be integrated with the physical key or vendor logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch display 1905 based on the ambient light intensity collected by the optical sensor 1915: when the ambient light intensity is high, the display brightness of the touch display 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display 1905 is decreased. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 based on the ambient light intensity collected by the optical sensor 1915.
A proximity sensor 1916, also referred to as a distance sensor, is typically provided on the front panel of the computer device 1900. The proximity sensor 1916 is used to capture the distance between the user and the front of the computer device 1900. In one embodiment, when the proximity sensor 1916 detects a gradual decrease in the distance between the user and the front of the computer device 1900, the processor 1901 controls the touch display 1905 to switch from the bright screen state to the off-screen state; when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually increases, the processor 1901 controls the touch display 1905 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 19 is not limiting and that more or fewer components than shown may be included or that certain components may be combined or that a different arrangement of components may be employed.
The memory further stores one or more programs, the one or more programs including instructions for performing the display method of the virtual environment screen provided by the embodiments of the present application.
The application provides a computer readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to realize the method for displaying the virtual environment picture provided by each method embodiment.
The application also provides a computer program product, when the computer program product runs on a computer, the computer is caused to execute the method for displaying the virtual environment picture provided by each method embodiment.
The foregoing embodiment numbers of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.
Claims (8)
1. A method for displaying a virtual environment picture, the method comprising:
displaying a first virtual environment picture, wherein the first virtual environment picture is obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center, a visual field adjusting control is displayed on the first virtual environment picture, the visual field adjusting control comprises a rocker and a wheel disc area, and the rocker is positioned in the wheel disc area;
Responding to a dragging instruction triggered when the rocker is dragged in the wheel disc area, and calculating a wheel disc transverse offset value and a wheel disc longitudinal offset value of the position of the rocker in the wheel disc area after dragging relative to the central position of the wheel disc area;
determining a first camera offset value according to the wheel disc transverse offset value, and determining a second camera offset value according to the wheel disc longitudinal offset value;
calculating a first anchor point position of a camera model corresponding to the first virtual object by taking the position of the first virtual object in the virtual environment as a reference, wherein the anchor point position is a three-dimensional coordinate of the camera model in the virtual environment;
shifting the first anchor point position according to the first camera shifting value and the second camera shifting value, and calculating to obtain a second anchor point position of the camera model;
shifting the camera model according to the second anchor point position, so that the observation center of the camera model is adjusted from a first position relative to the first virtual object to a second position relative to the first virtual object, and the lens height of the camera model is a fixed value in the shifting process;
And displaying a second virtual environment picture, wherein the second virtual environment picture is obtained by observing the virtual environment by taking a second position relative to the first virtual object as an observation center.
2. The method according to claim 1, wherein
the transverse offset distance of the wheel disc and the offset value of the first camera are in positive correlation;
and the longitudinal offset distance of the wheel disc and the offset value of the second camera are in positive correlation.
3. The method according to claim 1, wherein the method further comprises:
and in response to the change of the position of the first virtual object in the virtual environment, executing the calculation again by taking the position of the first virtual object in the virtual environment as a reference, and obtaining a second position relative to the first virtual object according to the offset direction and the offset distance.
4. A method according to any one of claims 1 to 3, wherein the drag instruction comprises: at least two sub-instructions ordered in time, the method further comprising:
and in response to receiving a first sub-instruction in the dragging instruction, resetting the observation center to the position of the first virtual object in the virtual environment.
5. A method according to any one of claims 1 to 3, wherein the drag instruction comprises: at least two sub-instructions ordered in time, the method further comprising:
in response to receiving a first sub-instruction in the drag instruction, switching the view adjustment control from a default display form to a wheel display form, wherein the wheel display form comprises the rocker and the wheel region;
and the display area of the default display mode is smaller than that of the wheel disc display mode.
6. A display device of a virtual environment picture, the device comprising:
the display module is used for displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center, a visual field adjustment control is displayed on the first virtual environment picture, the visual field adjustment control comprises a rocker and a wheel disc area, and the rocker is positioned in the wheel disc area;
the adjusting module is used for responding to a dragging instruction triggered when the rocker is dragged in the wheel disc area, and calculating a wheel disc transverse offset value and a wheel disc longitudinal offset value of the position of the dragged rocker in the wheel disc area relative to the central position of the wheel disc area; determining a first camera offset value according to the wheel disc transverse offset value, and determining a second camera offset value according to the wheel disc longitudinal offset value; calculating a first anchor point position of a camera model corresponding to the first virtual object by taking the position of the first virtual object in the virtual environment as a reference, wherein the anchor point position is a three-dimensional coordinate of the camera model in the virtual environment; shifting the first anchor point position according to the first camera shifting value and the second camera shifting value, and calculating to obtain a second anchor point position of the camera model; shifting the camera model according to the second anchor point position, so that the observation center of the camera model is adjusted from a first position relative to the first virtual object to a second position relative to the first virtual object, and the lens height of the camera model is a fixed value in the shifting process;
The display module is configured to display a second virtual environment picture, where the second virtual environment picture is a picture obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
7. A computer device, characterized in that it comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which is loaded and executed by the processor to implement the method for displaying a virtual environment picture according to any one of claims 1 to 5.
8. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the method of displaying a virtual environment picture according to any one of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010437875.3A CN111603770B (en) | 2020-05-21 | 2020-05-21 | Virtual environment picture display method, device, equipment and medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111603770A CN111603770A (en) | 2020-09-01 |
| CN111603770B true CN111603770B (en) | 2023-05-05 |
Family
ID=72194464
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010437875.3A Active CN111603770B (en) | 2020-05-21 | 2020-05-21 | Virtual environment picture display method, device, equipment and medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111603770B (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111467802B (en) | 2020-04-09 | 2022-02-22 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for displaying picture of virtual environment |
| CN112118358B (en) * | 2020-09-21 | 2022-06-14 | 腾讯科技(深圳)有限公司 | Shot picture display method, terminal and storage medium |
| CN112245920A (en) * | 2020-11-13 | 2021-01-22 | 腾讯科技(深圳)有限公司 | Virtual scene display method, device, terminal and storage medium |
| CN112619140B (en) * | 2020-12-18 | 2024-04-26 | 网易(杭州)网络有限公司 | Method and device for determining position in game and method and device for adjusting path |
| CN112667220B (en) * | 2021-01-27 | 2023-07-07 | 北京字跳网络技术有限公司 | Animation generation method and device and computer storage medium |
| CN113827969A (en) * | 2021-09-27 | 2021-12-24 | 网易(杭州)网络有限公司 | Interaction method and device for game objects |
| CN114307145B (en) * | 2022-01-04 | 2023-06-27 | 腾讯科技(深圳)有限公司 | Picture display method, device, terminal and storage medium |
| CN117298577A (en) * | 2022-06-23 | 2023-12-29 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and program product in virtual environment |
| CN116020110A (en) * | 2023-02-09 | 2023-04-28 | 网易(杭州)网络有限公司 | Game screen control method and device, computer storage medium, electronic equipment |
| CN116501209B (en) * | 2023-05-09 | 2025-05-27 | 网易(杭州)网络有限公司 | Editing view angle adjusting method and device, electronic equipment and readable storage medium |
| CN119174912A (en) * | 2023-06-21 | 2024-12-24 | 腾讯科技(深圳)有限公司 | Body part orientation editing method, device, equipment and storage medium |
| CN119215415A (en) * | 2023-06-30 | 2024-12-31 | 腾讯科技(深圳)有限公司 | Virtual character posture editing method, device, equipment and storage medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1125609A2 (en) * | 2000-01-21 | 2001-08-22 | Sony Computer Entertainment Inc. | Entertainment apparatus, storage medium and object display method |
| JP2013158456A (en) * | 2012-02-03 | 2013-08-19 | Konami Digital Entertainment Co Ltd | Game device, game system, control method of game device and program |
| CN108786110A (en) * | 2018-05-30 | 2018-11-13 | 腾讯科技(深圳)有限公司 | Gun sight display methods, equipment and storage medium in virtual environment |
| CN110917616A (en) * | 2019-11-28 | 2020-03-27 | 腾讯科技(深圳)有限公司 | Orientation prompting method, device, equipment and storage medium in virtual scene |
| CN111035918A (en) * | 2019-11-20 | 2020-04-21 | 腾讯科技(深圳)有限公司 | Reconnaissance interface display method and device based on virtual environment and readable storage medium |
| JP2020062376A (en) * | 2019-07-18 | 2020-04-23 | 株式会社セガゲームス | Information processor and program |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6223533B1 (en) * | 2016-11-30 | 2017-11-01 | 株式会社コロプラ | Information processing method and program for causing computer to execute information processing method |
| CN107168611B (en) * | 2017-06-16 | 2018-12-28 | 网易(杭州)网络有限公司 | Information processing method, device, electronic equipment and storage medium |
| US10712900B2 (en) * | 2018-06-06 | 2020-07-14 | Sony Interactive Entertainment Inc. | VR comfort zones used to inform an In-VR GUI editor |
| CN108717733B (en) * | 2018-06-07 | 2019-07-02 | 腾讯科技(深圳)有限公司 | View angle switch method, equipment and the storage medium of virtual environment |
| CN110665222B (en) * | 2019-09-29 | 2024-04-19 | 网易(杭州)网络有限公司 | Aiming direction control method and device in game, electronic equipment and storage medium |
| CN110575671B (en) * | 2019-10-08 | 2023-08-11 | 网易(杭州)网络有限公司 | Method and device for controlling viewing angle in game and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111603770B (en) | Virtual environment picture display method, device, equipment and medium | |
| CN111467802B (en) | Method, device, equipment and medium for displaying picture of virtual environment | |
| CN112494955B (en) | Skill releasing method, device, terminal and storage medium for virtual object | |
| CN111589128B (en) | Operation control display method and device based on virtual scene | |
| CN112402949B (en) | Skill releasing method, device, terminal and storage medium for virtual object | |
| CN111589141B (en) | Virtual environment picture display method, device, equipment and medium | |
| CN111672106B (en) | Virtual scene display method and device, computer equipment and storage medium | |
| CN111672102A (en) | Virtual object control method, device, equipment and storage medium in virtual scene | |
| CN112704876B (en) | Method, device and equipment for selecting virtual object interaction mode and storage medium | |
| CN113577765B (en) | User interface display method, device, equipment and storage medium | |
| CN113559495B (en) | Method, device, equipment and storage medium for releasing skill of virtual object | |
| CN111744185B (en) | Virtual object control method, device, computer equipment and storage medium | |
| CN111530075A (en) | Method, device, equipment and medium for displaying picture of virtual environment | |
| CN112691370A (en) | Method, device, equipment and storage medium for displaying voting result in virtual game | |
| CN112169330B (en) | Method, device, equipment and medium for displaying picture of virtual environment | |
| CN113599819A (en) | Prompt message display method, device, equipment and storage medium | |
| CN116920398A (en) | Method, apparatus, device, medium and program product for exploration in virtual worlds | |
| CN112604274A (en) | Virtual object display method, device, terminal and storage medium | |
| HK40028090B (en) | Display method and apparatus of virtual environment screen, device and medium | |
| HK40028090A (en) | Display method and apparatus of virtual environment screen, device and medium | |
| HK40027373B (en) | Method and apparatus for displaying virtual environment screen, device and medium | |
| HK40054545A (en) | Display method of prompt information, device, equipment and storage medium | |
| WO2025236874A1 (en) | Virtual map marking method and apparatus, device, storage medium, and program product | |
| HK40027373A (en) | Method and apparatus for displaying virtual environment screen, device and medium | |
| HK40054045A (en) | Display method, device, equipment and storage medium of user interface |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40028090; Country of ref document: HK |
| | GR01 | Patent grant | |