
CN113058264B - Virtual scene display method, virtual scene processing method, device and equipment - Google Patents

Virtual scene display method, virtual scene processing method, device and equipment

Info

Publication number
CN113058264B
CN113058264B (application number CN202110455848.3A)
Authority
CN
China
Prior art keywords
scene
virtual scene
virtual
progress
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110455848.3A
Other languages
Chinese (zh)
Other versions
CN113058264A (en)
Inventor
徐士立
钟炳武
陆燕慧
马辰龙
付亚彬
马啸虎
胡玉林
郑骎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110455848.3A (CN113058264B)
Publication of CN113058264A
Priority to PCT/CN2022/082080 (WO2022227936A1)
Priority to US17/991,776 (US12427410B2)
Application granted
Publication of CN113058264B
Legal status: Active
Anticipated expiration


Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/52 — Controlling the output signals involving aspects of the displayed game scene
    • A63F 13/30 — Interconnection arrangements between game servers and game devices; between game devices; between game servers
    • A63F 13/35 — Details of game servers
    • A63F 13/352 — Special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
    • A63F 13/355 — Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform a changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • A63F 13/45 — Controlling the progress of the video game
    • A63F 13/48 — Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
    • A63F 13/53 — Involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 — For prompting the player, e.g. by displaying a game menu
    • A63F 13/537 — Using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5375 — For graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F 13/80 — Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 — Strategy games; Role-playing games
    • A63F 13/837 — Shooting of targets
    • A63F 2300/00 — Features of games using an electronically generated display having two or more dimensions
    • A63F 2300/308 — Details of the user interface
    • A63F 2300/807 — Role playing or strategy games
    • A63F 2300/8076 — Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a virtual scene display method, a virtual scene processing method, an apparatus, and a device, and belongs to the field of computer technology. The method comprises: displaying at least one virtual scene entry; in response to a trigger operation on a target virtual scene entry, sending a loading request to a first server; and displaying a scene picture when the scene picture is received. In this technical scheme, by displaying at least one virtual scene entry, the user can view and trigger any virtual scene entry according to preference. Because each virtual scene entry corresponds to a scene progress of the target virtual scene, the first server runs the target virtual scene from the scene progress corresponding to the triggered entry, and the terminal then displays the scene picture of the target virtual scene at that progress. The user thus reaches the desired scene progress without tedious operations, which significantly saves time and improves human-computer interaction efficiency and user stickiness.

Description

Virtual scene display method, virtual scene processing method, device and equipment
Technical Field
The present application relates to the field of computer technology, and in particular to a virtual scene display method, a virtual scene processing method, an apparatus, and a device.
Background
With the rapid growth of the gaming industry, games have become a part of everyday life. Games include different types of virtual scenes, such as dungeon instances in role-playing games and combat maps in shooting games. Because the game content a user experiences differs across virtual scene types, different users prefer different virtual scenes.
However, to experience a preferred virtual scene, a user often needs to reach the expected scene progress through tedious operations, which wastes the user's time, results in low human-computer interaction efficiency, and reduces user stickiness.
Disclosure of Invention
The embodiments of the present application provide a virtual scene display method, a virtual scene processing method, an apparatus, and a device, so that a user can reach an expected scene progress without tedious operations, significantly saving time and improving human-computer interaction efficiency and user stickiness. The technical scheme is as follows:
in one aspect, a method for displaying a virtual scene is provided, the method comprising:
displaying at least one virtual scene entry, where each virtual scene entry corresponds to a scene progress of a target virtual scene, and the scene progress is determined based on historical interaction behavior of a user account in at least one virtual scene;
in response to a trigger operation on a target virtual scene entry, sending a loading request to a first server, where the loading request instructs the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return a scene picture of the target virtual scene; and
displaying the scene picture when the scene picture is received.
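The three client-side steps above can be sketched as a minimal, illustrative flow. All names here (`SceneEntry`, `SceneClient`, `FakeServer`, and so on) are assumptions invented for the sketch and do not appear in the patent; a real terminal would render received frames rather than pass strings around.

```python
from dataclasses import dataclass

@dataclass
class SceneEntry:
    """One displayed virtual scene entry; it maps to a scene progress of a target scene."""
    scene_id: str
    progress_id: str  # scene progress derived from the account's historical interaction behavior
    label: str

class SceneClient:
    def __init__(self, server):
        self.server = server            # stand-in for the "first server"
        self.displayed_picture = None

    def display_entries(self, entries):
        # Step 1: show at least one virtual scene entry to the user.
        return [e.label for e in entries]

    def on_entry_triggered(self, entry: SceneEntry):
        # Step 2: in response to the trigger operation, send a loading request
        # that carries the entry's scene progress.
        picture = self.server.load(entry.scene_id, entry.progress_id)
        # Step 3: display the scene picture once it is received.
        if picture is not None:
            self.displayed_picture = picture
        return self.displayed_picture

class FakeServer:
    """Minimal stand-in: 'runs' the scene at the requested progress and returns a frame."""
    def load(self, scene_id, progress_id):
        return f"frame({scene_id}@{progress_id})"
```

With this sketch, triggering an entry labeled "Boss fight" for progress `p3` of `map_1` produces the frame for exactly that progress, with no intermediate play-through.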
In another aspect, a method for processing a virtual scene is provided, where the method includes:
providing at least one virtual scene entry for a terminal, where each virtual scene entry corresponds to a scene progress of a target virtual scene, and the scene progress is determined based on historical interaction behavior of a user account of the terminal in at least one virtual scene;
receiving a loading request from the terminal, where the loading request instructs running the target virtual scene based on the scene progress and returning a scene picture of the target virtual scene; and
running the target virtual scene based on the scene progress, and sending a scene picture of the target virtual scene to the terminal.
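The server-side method can likewise be sketched. This is a hypothetical stand-in, not the patented implementation: the `entries_by_account` and `scene_data` mappings and the `render(...)` placeholder are illustrative assumptions; a real first server would run the game simulation and stream rendered frames to the terminal.

```python
class SceneServer:
    def __init__(self, entries_by_account, scene_data):
        # account -> list of (scene_id, progress_id) entries to offer the terminal
        self.entries_by_account = entries_by_account
        # (scene_id, progress_id) -> stored scene state for that progress
        self.scene_data = scene_data

    def provide_entries(self, account):
        # Step 1: provide at least one virtual scene entry to the terminal.
        return self.entries_by_account.get(account, [])

    def handle_load_request(self, account, scene_id, progress_id):
        # Steps 2-3: take the scene progress as the running start point,
        # fetch the scene data for that progress, run the scene, and
        # return the resulting scene picture to the terminal.
        state = self.scene_data[(scene_id, progress_id)]
        return {"scene": scene_id, "progress": progress_id,
                "picture": f"render({state})"}
```

The key property the sketch preserves is that the requested progress, not the scene's beginning, is the running start point.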
In some embodiments, the loading request further carries matching information, where the matching information indicates a matching manner of a second virtual object in the target virtual scene, and the second virtual object is any one of: a virtual object in the same camp as the first virtual object, a virtual object in a camp opposing the first virtual object, or a neutral virtual object;
the method further comprises:
loading the second virtual object in the target virtual scene based on the matching information.
In another aspect, there is provided a display apparatus of a virtual scene, the apparatus including:
a first display module, configured to display at least one virtual scene entry, where each virtual scene entry corresponds to a scene progress of a target virtual scene, and the scene progress is determined based on historical interaction behavior of the user account in at least one virtual scene;
a request sending module, configured to send, in response to a trigger operation on a target virtual scene entry, a loading request to a first server, where the loading request instructs the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return a scene picture of the target virtual scene; and
a second display module, configured to display the scene picture when the scene picture is received.
In some embodiments, the scene picture is loaded based on the scene progress and historical scene state data of the user account.
In some embodiments, a first virtual object is displayed in the scene picture, the first virtual object being equipped with a virtual prop corresponding to the historical scene state data.
In some embodiments, a first virtual object is displayed in the scene picture, the first virtual object being in an action state corresponding to the historical scene state data.
In some embodiments, the loading request further carries matching information, where the matching information indicates a matching manner of a second virtual object in the target virtual scene, and the second virtual object is any one of: a virtual object in the same camp as the first virtual object, a virtual object in a camp opposing the first virtual object, or a neutral virtual object;
the apparatus further comprises:
a third display module, configured to display a matching information setting page, the matching information setting page being used to set the matching manner of the second virtual object; and
an information generation module, configured to generate the matching information according to the matching manner set on the matching information setting page.
In some embodiments, a second virtual object matched based on the matching information is displayed in the scene picture.
In some embodiments, the apparatus further comprises:
a fourth display module, configured to display a scene picture of any virtual scene;
the fourth display module being further configured to display, in the scene picture of the virtual scene, guidance information of the virtual scene, where the guidance information prompts whether to share the scene progress of the virtual scene.
In another aspect, a processing apparatus for a virtual scene is provided, the apparatus comprising:
an entry providing module, configured to provide at least one virtual scene entry for a terminal, where each virtual scene entry corresponds to a scene progress of a target virtual scene, and the scene progress is determined based on historical interaction behavior of a user account of the terminal in at least one virtual scene;
a request receiving module, configured to receive a loading request from the terminal, where the loading request instructs running the target virtual scene based on the scene progress and returning a scene picture of the target virtual scene; and
a running module, configured to run the target virtual scene based on the scene progress and send a scene picture of the target virtual scene to the terminal.
In some embodiments, the running module is configured to use the scene progress as a running start point, obtain virtual scene data corresponding to the scene progress, and run the target virtual scene based on the virtual scene data.
In some embodiments, the running module comprises:
a first obtaining unit, configured to use the scene progress as a running start point and obtain virtual scene data corresponding to the scene progress;
a second obtaining unit, configured to obtain historical scene state data of the user account; and
a running unit, configured to run the target virtual scene based on the virtual scene data and load a first virtual object in the target virtual scene based on the historical scene state data.
In some embodiments, the running unit is configured to load, in the target virtual scene, a virtual prop in the historical scene state data for the first virtual object.
In some embodiments, the running unit is configured to load the first virtual object in the target virtual scene in an action state corresponding to the historical scene state data.
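The two embodiments above — equipping the first virtual object with its recorded virtual props and restoring its recorded action state — can be illustrated with a small helper. The dictionary keys (`props`, `action_state`) are invented for this sketch; the patent does not specify a data format for historical scene state data.

```python
def load_first_object(historical_state: dict) -> dict:
    """Instantiate the first virtual object from historical scene state data:
    equip the recorded virtual props and restore the recorded action state,
    falling back to defaults when no history exists."""
    return {
        # e.g. the weapons the object carried when the progress was recorded
        "props": list(historical_state.get("props", [])),
        # e.g. crouching, swimming, driving; "idle" when nothing was recorded
        "action_state": historical_state.get("action_state", "idle"),
    }
```

Loading from history this way means the user resumes with the same equipment and posture as when the scene progress was captured.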
In some embodiments, the apparatus further comprises:
an obtaining module, configured to obtain the historical interaction behavior of the user account in any virtual scene; and
an entry generation module, configured to generate at least one virtual scene entry of the target virtual scene based on at least one scene progress in the target virtual scene that matches the historical interaction behavior.
In some embodiments, the at least one scene progress matching the historical interaction behavior refers to at least one scene progress at which the number of occurrences of the historical interaction behavior meets a behavior condition.
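One hedged reading of this behavior condition is a simple occurrence-count threshold per scene progress: a progress qualifies for an entry when enough interaction behaviors were recorded there. The function below is an illustrative sketch under that assumption; the patent leaves the exact condition open, and the `min_count` parameter is invented here.

```python
from collections import Counter

def generate_entries(behavior_log, min_count=3):
    """behavior_log: iterable of (scene_progress, behavior) events for one account.
    Returns the scene progresses whose number of recorded interaction
    behaviors meets the (assumed) count-threshold behavior condition."""
    counts = Counter(progress for progress, _ in behavior_log)
    return [progress for progress, count in counts.items() if count >= min_count]
```

Each returned progress would then back one virtual scene entry shown to the user.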
In some embodiments, the apparatus further comprises:
an identification module, configured to identify the scene type of a scene picture of any virtual scene of the terminal; and
a determination module, configured to determine the target virtual scene based on the scene type.
In some embodiments, the determination module is configured to determine a target scene type based on the number of occurrences of each scene type, where the target scene type is a scene type whose number of occurrences meets an occurrence condition, and to determine any virtual scene belonging to the target scene type as the target virtual scene.
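A minimal sketch of this selection, assuming the occurrence condition is a count threshold and that scenes are indexed by type — both assumptions, since the patent fixes neither:

```python
from collections import Counter

def determine_target_scene(scene_type_observations, scenes_by_type, min_occurrences=5):
    """scene_type_observations: scene types identified from the terminal's scene pictures.
    scenes_by_type: mapping of scene type -> available virtual scenes of that type.
    Picks a target scene type whose occurrence count meets the (assumed) occurrence
    condition, then returns any virtual scene of that type as the target scene."""
    counts = Counter(scene_type_observations)
    for scene_type, n in counts.most_common():
        if n >= min_occurrences and scenes_by_type.get(scene_type):
            # "Any virtual scene belonging to the target scene type" --
            # here simply the first one listed.
            return scenes_by_type[scene_type][0]
    return None
```

Iterating `most_common()` prefers the type the user has been observed in most often among those meeting the condition.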
In some embodiments, the loading request further carries matching information, where the matching information indicates a matching manner of a second virtual object in the target virtual scene, and the second virtual object is any one of: a virtual object in the same camp as the first virtual object, a virtual object in a camp opposing the first virtual object, or a neutral virtual object;
the apparatus further comprises:
a loading module, configured to load the second virtual object in the target virtual scene based on the matching information.
In another aspect, a computer device is provided, comprising a processor and a memory, the memory storing at least one piece of a computer program that is loaded and executed by the processor to implement the operations performed in the virtual scene display method described in the above aspect, or to implement the operations performed in the virtual scene processing method of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, the storage medium storing at least one piece of a computer program that is loaded and executed by a processor to implement the operations performed in the virtual scene display method described in the above aspect, or to implement the operations performed in the virtual scene processing method of the embodiments of the present application.
In another aspect, a computer program product or a computer program is provided, comprising computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, causing the computer device to perform the operations performed in the virtual scene display method described in the above aspect, or causing the computer device to perform the operations performed in the virtual scene processing method described in the above aspect.
The technical schemes provided by the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, by displaying at least one virtual scene entry, the user can view and trigger any virtual scene entry according to preference. Because each virtual scene entry corresponds to a scene progress of the target virtual scene, the terminal can request the first server to run the target virtual scene from the scene progress corresponding to the triggered entry, and then display the scene picture of the target virtual scene at that progress. The user thus reaches the desired scene progress without tedious operations, which significantly saves time and improves human-computer interaction efficiency and user stickiness.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided according to an embodiment of the present application;
Fig. 2 is a flowchart of a virtual scene display method provided according to an embodiment of the present application;
Fig. 3 is a schematic diagram of displaying a virtual scene entry provided according to an embodiment of the present application;
Fig. 4 is a flowchart of a virtual scene processing method provided according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an interaction flow provided according to an embodiment of the present application;
Fig. 6 is a flowchart of generating virtual scene entries provided according to an embodiment of the present application;
Fig. 7 is a schematic diagram of another interaction flow provided according to an embodiment of the present application;
Fig. 8 is a diagram of a system architecture provided according to an embodiment of the present application;
Fig. 9 is a block diagram of a virtual scene display apparatus provided according to an embodiment of the present application;
Fig. 10 is a block diagram of another virtual scene display apparatus provided according to an embodiment of the present application;
Fig. 11 is a block diagram of a virtual scene processing apparatus provided according to an embodiment of the present application;
Fig. 12 is a block diagram of another virtual scene processing apparatus provided according to an embodiment of the present application;
Fig. 13 is a block diagram of a terminal provided according to an embodiment of the present application;
Fig. 14 is a block diagram of a server provided according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in the present application are used to distinguish identical or similar items whose functions are substantially the same. It should be understood that there is no logical or chronological dependency among "first," "second," and "nth," and that they limit neither quantity nor execution order. It should further be understood that although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms.
These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the various examples. The first element and the second element may both be elements and, in some cases, may be separate and distinct elements.
Herein, "at least one" means one or more; for example, at least one element may be any integer number of elements greater than or equal to one, such as one element, two elements, or three elements. "At least two" means two or more; for example, at least two elements may be any integer number of elements greater than or equal to two, such as two elements or three elements.
Some techniques according to embodiments of the present application are explained below.
The scheme provided by the embodiments of the present application relates to the field of cloud technology.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied under the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently, and cloud computing technology will become an important support. Background services of technical network systems require large amounts of computing and storage resources, as in video websites, image websites, and many portal websites. With the rapid development and application of the internet industry, each item may in the future carry its own identification mark that must be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong backing system support, which can only be achieved through cloud computing.
Cloud gaming, also called gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud gaming scenario, the game does not run on the player's game terminal but on a cloud server; the cloud server renders the game scene into a video and audio stream and transmits it to the player's terminal over the network. The player's terminal does not need strong graphics and data processing capabilities; it only needs basic streaming media playback capability and the ability to capture the player's input instructions and send them to the cloud server.
Virtual scene: a virtual scene that an application displays (or provides) when running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated, semi-fictional virtual scene, or a purely fictional virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimensionality of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and sea; the land may include environmental elements such as deserts and cities; and the user may control a virtual object to move in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, etc., such as a character, animal, plant, oil drum, wall, or stone displayed in a virtual scene. The virtual object may be an avatar in the virtual scene that represents the user. A virtual scene may include multiple virtual objects, each having its own shape and volume and occupying part of the space in the virtual scene.
Transparent transmission, i.e. pass-through, refers to the fact that in communication, no matter what the traffic content is transmitted, it is only responsible for transmitting the content of the transmission from the source address to the destination address without any change to the traffic data content.
A single-player PVE (Player Versus Environment) scenario refers to a scenario in which a user does not need to interact with other real game players, but only fights a number of NPCs (Non-Player Characters) and BOSS characters within the game. The training range in an FPS (First-Person Shooter) game is the most typical single-player PVE scenario.
A multi-player PVE scenario refers to a scenario in which a user needs to interact and cooperate with other players to jointly fight the NPCs and BOSSes in the game. Multi-player team training (or team man-machine combat) in an FPS game is a typical multi-player PVE scenario.
A 1VS1 scenario refers to a scenario in which the user needs to play a 1VS1 challenge against another player. A 1VS1 competition on a specific map in an FPS game is a typical 1VS1 scenario.
A multi-player PVP (Player Versus Player) scenario refers to a scenario in which a user needs to team up with other players and fight against another team of players. Survival play in FPS games is a typical multi-player PVP scenario.
A mixed scenario combines scene types: a user may need to fight both the environment (PVE) and other players (PVP), possibly against multiple opponents; in general, a mixed scenario is a combination of different types of virtual scenes.
The following describes an implementation environment according to the present application.
The virtual scene display method and the virtual scene processing method provided by the embodiments of the present application can be executed by a computer device. Optionally, the computer device is a terminal or a server. In the following, the implementation environment provided by an embodiment of the present application is first introduced by taking the computer device as a terminal that executes the display method of the virtual scene as an example. Fig. 1 is a schematic diagram of an implementation environment provided according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102.
The terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
Alternatively, the terminal 101 is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like, but is not limited thereto. The terminal 101 installs and runs a first client for displaying virtual scene entries and scene pictures. The first client may be used to launch any one of a first-person shooter game (FPS), a third-person shooter game, a multiplayer online battle arena game (Multiplayer Online Battle Arena, MOBA), a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. Illustratively, the terminal 101 is a terminal used by a user, a user account of the user is logged in to the first client installed and run by the terminal 101, and the user uses the terminal 101 to operate, through the first client, a virtual object located in a virtual scene to perform activities including, but not limited to, at least one of body posture adjustment, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or a cartoon character.
In this implementation environment, the first client is exemplified as a cloud game client for starting at least one game.
For example, the terminal starts the first client and displays at least one virtual scene entry, for example, entries of at least one game copy or battle map, where the game copies or battle maps may belong to the same game or to different games. The user can trigger any virtual scene entry; the terminal determines the virtual scene entry triggered by the user as the target virtual scene entry and sends a loading request to a first server, and the first server runs the target virtual scene based on the scene progress corresponding to the target virtual scene entry and returns the scene picture of the target virtual scene, where the first server is the cloud game server corresponding to the first client. The terminal receives and displays the scene picture based on the first client. Because the scene picture is obtained by running the scene from the saved scene progress, the user can play through the cloud game client without cumbersome operations; the time the user would otherwise spend logging in to the game, controlling the virtual object to travel to the copy entrance, waiting for the copy to load, and advancing the copy progress to the desired point is saved, and the man-machine interaction efficiency is improved.
In other embodiments, an implementation environment provided by an embodiment of the present application is described by taking the computer device that executes the processing method of the virtual scene as the server 102 as an example. Optionally, the server 102 is a stand-alone physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms.
In this implementation environment, taking the server 102 as an example of a cloud game server, the cloud game server is a first server for providing a background service for cloud game clients installed and operated by the terminal 101.
For example, the first server provides the terminal with the at least one virtual scene entry. Then, after receiving the loading request sent by the terminal, the first server runs the target virtual scene, that is, runs the game copy or the battle map, based on the scene progress corresponding to the target virtual scene entry. The first server then returns the scene picture of the target virtual scene obtained by running to the first client, and the first client displays the scene picture. By providing the virtual scene entries to the terminal, the first server can run the virtual scene based on the loading request sent by the terminal and return the scene picture, so that the terminal can quickly display the scene picture of the corresponding virtual scene based on the virtual scene entry selected by the user.
Those skilled in the art will recognize that the number of terminals may be greater or lesser. Such as the above-mentioned terminals may be only one, or the above-mentioned terminals may be several tens or hundreds, or more. The embodiment of the application does not limit the number of terminals and the equipment type.
Fig. 2 is a flowchart of a method for displaying a virtual scene according to an embodiment of the present application. As shown in fig. 2, the embodiment is described by taking the computer device as a terminal, with the display method of the virtual scene executed by the terminal, as an example. The display method of the virtual scene comprises the following steps:
201. the terminal displays at least one virtual scene entry corresponding to any scene progress of the target virtual scene, the scene progress being determined based on historical interaction behavior of the user account in the at least one virtual scene.
In the embodiment of the application, a first client is installed and operated on the terminal, and the first client is a cloud game client. The terminal displays at least one virtual scene entry based on the first client, wherein the at least one virtual scene entry is provided by a first server corresponding to the first client, and the first server is a cloud game server and is used for providing background service for the first client. If the number of the at least one virtual scene entry is greater than 1, the at least one virtual scene entry corresponds to different scene progress of the same virtual scene in the same game, or the at least one virtual scene entry corresponds to scene progress of different virtual scenes in different games.
For example, referring to fig. 3, fig. 3 is a schematic diagram showing virtual scene entries according to an embodiment of the present application. As shown in fig. 3, the cloud game client displays 4 virtual scene entries, namely virtual scene entry A, virtual scene entry B, virtual scene entry C, and virtual scene entry D, where the 4 virtual scene entries correspond to scene progress of different virtual scenes in different games. Optionally, when the user hovers the mouse over any virtual scene entry, the terminal displays progress information corresponding to that entry. For example, the progress information corresponding to virtual scene entry A indicates that the entry corresponds to virtual scene A, that the scene content is a desert shooting range, and that the virtual character is equipped with a virtual submachine gun and a 4x scope for shooting practice. Optionally, in addition to the 4 virtual scene entries, the cloud game client may display game entries of other cloud games, and the user can play a game in the conventional way by triggering those game entries.
In the embodiment of the application, the scene progress is used for indicating the change degree of the virtual scene. The user account logged in by the terminal can cause the change of the virtual scene by controlling the first virtual object to execute the interactive behavior in the virtual scene, thereby influencing the scene progress of the virtual scene, such as the position of the first virtual object in the virtual scene, the virtual prop of the first virtual object equipment, the damage of the first virtual object to the building in the virtual scene, and the like. It should be noted that, in general, the initial progress of each running of the virtual scene is the same, such as the same mountain, river, building, the same scenario, the same NPC, etc., and the process of playing the game by the user is the process of changing the virtual scene.
For example, taking an FPS game as an example, a user account expects to equip a virtual submachine gun in a desert scene and practice shooting with a 4x scope at a position 400 meters away from the target. Each time the user account plays the game, it controls the first virtual object to search the desert scene for the virtual submachine gun and the 4x scope, and then moves to the position 400 meters from the target to start practicing. By triggering the virtual scene entry corresponding to that scene progress, the terminal can directly display the first virtual object already equipped with the virtual submachine gun and the 4x scope and located 400 meters from the target.
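The scene progress described above can be pictured as a small snapshot that is stored per user account and restored on demand. The following is a minimal Python sketch of that idea; all names (`SceneProgress`, `restore`, the prop identifiers) are illustrative assumptions and not part of the embodiment:

```python
from dataclasses import dataclass, field

@dataclass
class SceneProgress:
    """Hypothetical snapshot of how far a virtual scene has changed for one account."""
    scene_id: str
    position: tuple          # (x, y, z) of the first virtual object
    equipped_props: list     # virtual props the first virtual object carries
    destroyed_objects: set = field(default_factory=set)

def restore(progress: SceneProgress) -> dict:
    """Build the initial run state of a scene directly from a saved progress,
    instead of starting from the scene's default (fresh) state."""
    return {
        "scene": progress.scene_id,
        "spawn_at": progress.position,
        "inventory": list(progress.equipped_props),
        "removed": sorted(progress.destroyed_objects),
    }

# The desert-range example from the text: submachine gun + scope, 400 m from the target.
desert_practice = SceneProgress(
    scene_id="desert_range",
    position=(400.0, 0.0, 0.0),
    equipped_props=["virtual_submachine_gun", "4x_scope"],
)
state = restore(desert_practice)
```

Triggering the entry then amounts to running the scene from `state` rather than from the default spawn, which is what spares the user the repeated search-and-walk routine.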
202. And responding to the triggering operation of the target virtual scene entry, and sending a loading request to the first server by the terminal, wherein the loading request is used for indicating the first server to operate the target virtual scene based on the scene progress corresponding to the target virtual scene entry and returning to the scene picture of the target virtual scene.
In the embodiment of the application, a user can trigger any virtual scene entry, the terminal determines the triggered virtual scene entry as a target virtual scene entry, and then a loading request is sent to a first server, so that the first server operates the target virtual scene based on the scene progress corresponding to the target virtual scene entry after receiving the loading request, thereby obtaining a scene picture corresponding to the scene progress, and the first server returns the scene picture obtained by operation to the terminal.
For example, taking the target virtual scene as a desert scene in an FPS game as an example, the terminal displays a virtual scene entry corresponding to the desert scene; in the scene progress corresponding to that entry, the first virtual object (i.e., the virtual object corresponding to the user account) is equipped with a virtual sniper gun, a virtual submachine gun, and a plurality of virtual torches, and is located on a mountain peak in the desert scene. The loading request instructs the first server to load the desert scene according to that scene progress. In other words, the virtual scene entry of the desert scene corresponds to a game in progress (a residual game state) rather than a completely new game.
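The loading request itself can be very small, since the terminal only needs to name the triggered entry and the first server resolves the entry to its stored scene progress. A minimal sketch, with hypothetical field names not taken from the embodiment:

```python
import json

def build_load_request(user_account: str, entry_id: str) -> str:
    """Illustrative loading request sent by the terminal when a virtual scene
    entry is triggered; the server maps entry_id to the saved scene progress."""
    return json.dumps({
        "type": "load_scene",
        "user_account": user_account,
        "entry_id": entry_id,  # identifies the target virtual scene entry
    })

request = build_load_request("player_001", "desert_scene_peak")
decoded = json.loads(request)
```

Keeping the progress on the server side (addressed only by `entry_id`) means the thin client never has to hold or replay game state itself, which matches the cloud gaming model described earlier.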
203. And if the terminal receives the scene picture, displaying the scene picture.
In the embodiment of the application, if the terminal receives the scene picture returned by the server, the scene picture is displayed based on the first client, so that the user can view the scene picture based on the first client. Optionally, after detecting the control operation of the user account on the virtual object, the terminal sends an operation instruction corresponding to the control operation to the first server, the first server updates the scene picture according to the operation instruction, and the terminal receives and displays the updated scene picture based on the first client.
According to the scheme provided by the embodiment of the application, by displaying at least one virtual scene entry, a user can view and trigger any virtual scene entry according to preference. Because the virtual scene entry corresponds to a particular scene progress of the target virtual scene, the terminal can request the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry, so that the terminal can display the scene picture of the target virtual scene at that scene progress. The user can thus reach the scene progress without cumbersome operations, which markedly saves time and improves man-machine interaction efficiency and user stickiness.
Fig. 4 is a flowchart of another processing method of a virtual scene according to an embodiment of the present application. As shown in fig. 4, the embodiment is described by taking the computer device as a server, with the processing method of the virtual scene executed by the server, as an example. The processing method of the virtual scene comprises the following steps:
401. the server provides at least one virtual scene entry for the terminal, wherein the virtual scene entry corresponds to any scene progress of the target virtual scene, and the scene progress is determined based on historical interaction behaviors of a user account of the terminal in the at least one virtual scene.
In the embodiment of the present application, the server is a first server, and the first server is a cloud game server corresponding to a first client installed and operated on a terminal, that is, a cloud game client. The first server can collect historical interaction behavior of the user account in at least one virtual scene under the condition of authorization of the user account of the terminal, then determine scene progress of the at least one virtual scene based on the collected historical interaction behavior, then generate at least one virtual scene entry, provide the at least one virtual scene entry to the terminal, and display the at least one virtual scene entry by the terminal. The content related to the scene progress is referred to step 201, and will not be described herein.
It should be noted that the virtual scene entry corresponds to a scene progress of the target virtual scene, and may also correspond to a scene progress of a professional e-sports player; that is, by triggering the scene entry, the user account can enter an unfinished e-sports match and control the virtual object of any professional player to continue the game.
402. The server receives a loading request of the terminal, wherein the loading request is used for indicating the operation of the target virtual scene based on the scene progress and returning to a scene picture of the target virtual scene.
In the embodiment of the application, the loading request is sent by the terminal when the terminal detects that the scene entry of any virtual scene is triggered. Optionally, the first server is provided with at least one client, one client being used for running at least one virtual scene. The first server determines a second client for running the target virtual scene based on the load request.
403. And the server runs the target virtual scene based on the scene progress and sends a scene picture of the target virtual scene to the terminal.
In the embodiment of the application, the first server runs the target virtual scene based on the scene progress of the target virtual scene, so that the scene picture of the target virtual scene obtained by running corresponds to the scene progress, and then returns the scene picture to the terminal, so that the terminal displays the scene picture to the user based on the first client. Optionally, the first server may further receive an operation instruction sent by the terminal, and then update the scene picture according to the received operation instruction based on the second client, and return the updated scene picture to the terminal.
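On the server side, steps 402 and 403 reduce to: resolve the entry named in the loading request to its stored scene progress, run the scene from that progress, and return the rendered picture. The sketch below stubs out the actual rendering (a real cloud game server would produce an audio/video stream); the store and the function names are hypothetical:

```python
# Hypothetical per-entry progress store kept by the first server.
SCENE_PROGRESS_STORE = {
    "desert_scene_peak": {
        "scene": "desert",
        "spawn": "mountain_peak",
        "inventory": ["virtual_sniper_gun", "virtual_submachine_gun"],
    },
}

def handle_load_request(entry_id: str) -> dict:
    """Run the target virtual scene from the scene progress bound to entry_id
    and return a descriptor of the scene picture for the terminal."""
    progress = SCENE_PROGRESS_STORE[entry_id]
    # Rendering is stubbed: stand-in string instead of an encoded video frame.
    return {
        "scene": progress["scene"],
        "frame": f"rendered:{progress['scene']}@{progress['spawn']}",
    }

frame = handle_load_request("desert_scene_peak")
```

The same handler shape extends naturally to step 403's follow-up loop, where operation instructions from the terminal update the running scene and updated pictures are streamed back.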
According to the method provided by the embodiment of the application, by providing the virtual scene entries to the terminal, the first server can run the virtual scene based on the loading request sent by the terminal and return the scene picture, so that the terminal can quickly display the scene picture of the corresponding virtual scene based on the virtual scene entry selected by the user. Because the scene picture is obtained by running the scene from the saved scene progress, the time the user would otherwise spend logging in to the game, controlling the virtual object to travel to the copy entrance, waiting for the copy to load, and advancing the copy progress to the desired point is saved, and the man-machine interaction efficiency is improved.
Fig. 5 is a flowchart of interaction provided according to an embodiment of the present application, and as shown in fig. 5, in an embodiment of the present application, interaction between a terminal and a first server is illustrated as an example. The method comprises the following steps:
501. The first server provides at least one virtual scene entry for the terminal, the virtual scene entry corresponding to any scene progress of the target virtual scene, the scene progress being determined based on historical interaction behavior of a user account of the terminal in the at least one virtual scene.
In the embodiment of the present application, this step is referred to as step 401, and is not described herein.
In some embodiments, the at least one virtual scene entry is generated by a first server, and accordingly, the first server obtains a historical interaction behavior of the user account in any virtual scene, and then generates the at least one virtual scene entry of the target virtual scene based on at least one scene progress in the target virtual scene that matches the historical interaction behavior. By matching the historical interaction behavior with the scene progress in the target virtual scene, the scene progress of the user account preference can be determined in the target virtual scene based on the historical interaction behavior of the user account.
For example, taking an FPS game as an example, if a user account is in the same position in a desert scene multiple times, the same virtual firearm is equipped, and the same accessory is used for shooting, the first server generates a corresponding virtual scene entry based on scene progress matched with the behavior in the desert scene. Based on the virtual scene entry, the virtual scene entry is used for directly starting game play at the scene progress.
In some embodiments, the at least one scene progress matching the historical interaction behavior refers to at least one scene progress for which the number of occurrences of the historical interaction behavior satisfies the behavior condition. The behavior condition may be that the number of occurrences reaches a first count threshold.
For example, the first server collects the most recent 20 matches of the user account in the desert scene. By analyzing the match data, it determines that in more than 4 of those matches the user account controlled the first virtual object at the same position in the desert scene, equipped the same virtual firearm, and used the same attachment for shooting; the first server then determines that scene progress as one scene progress matching the historical interaction behavior.
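The behavior condition above is just an occurrence count over a sliding window of matches. A minimal sketch, where the "fingerprint" tuple (position, firearm, attachment) is an illustrative choice of what to count:

```python
from collections import Counter

def matching_progress(match_fingerprints: list, threshold: int) -> list:
    """Return the scene-progress fingerprints whose occurrence count across the
    collected matches satisfies the behavior condition (count > threshold)."""
    counts = Counter(match_fingerprints)
    return [fp for fp, n in counts.items() if n > threshold]

# 20 recent matches: 5 repeat the same position/firearm/attachment fingerprint,
# the other 15 are all different from one another.
fingerprint = ("dune_ridge", "virtual_submachine_gun", "4x_scope")
matches = [fingerprint] * 5 + [
    (f"route_{i}", "virtual_rifle", "iron_sights") for i in range(15)
]
qualified = matching_progress(matches, threshold=4)
```

Only fingerprints repeated often enough survive the filter, so one-off behavior never becomes a recommended entry.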
In some embodiments, the first server determines the target virtual scene by identifying a scene picture of the virtual scene displayed by the terminal. Correspondingly, the first server identifies the scene type of the scene picture of any virtual scene of the terminal, and then determines the target virtual scene based on the scene type. Among them, the scene types include, but are not limited to, single person PVE scenes, multi-person PVE scenes, 1VS1 scenes, multi-person PVP scenes, and mixed scenes. Optionally, when the user account starts the first client based on the terminal next time, the first server provides at least one virtual scene entry corresponding to the target virtual scene to the terminal.
For example, taking a scene picture of a virtual scene displayed by a terminal as an example of a training field in an FPS game, after the first server recognizes the scene picture displayed by the terminal, it determines that the scene type is a single PVE scene, and then determines the single PVE scene as a target virtual scene.
In some embodiments, the first server determines a target scene type based on the number of occurrences of each scene type, the target scene type being a scene type whose number of occurrences meets the occurrence condition, and the first server determines any virtual scene belonging to the target scene type as the target virtual scene. The occurrence condition is that the number of occurrences reaches a second count threshold. Optionally, when the user account frequently enters a certain virtual scene of a given scene type, the first server provides the terminal with at least one virtual scene entry of a target virtual scene that belongs to the same scene type as that virtual scene.
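The occurrence condition for scene types works the same way at a coarser granularity: tally which scene types the account keeps entering and keep those at or above the second count threshold. A small sketch with hypothetical type labels:

```python
from collections import Counter

def target_scene_types(visited_types: list, second_count_threshold: int) -> set:
    """Scene types whose number of occurrences meets the occurrence condition."""
    counts = Counter(visited_types)
    return {t for t, n in counts.items() if n >= second_count_threshold}

# Recent sessions: mostly single-player PVE, occasionally other types.
visits = ["single_pve"] * 6 + ["multi_pvp"] * 2 + ["1v1"]
targets = target_scene_types(visits, second_count_threshold=5)
```

Any virtual scene belonging to a type in `targets` then qualifies as a candidate target virtual scene for entry generation.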
In some embodiments, the first server is further capable of obtaining operation information of a plurality of virtual scenes operated by the terminal, where the operation information is used to indicate historical interaction behavior of the user account in each virtual scene. And then, the first server analyzes the preference of the user account according to the running information to obtain preference information of the user account for the multiple virtual scenes, wherein the preference information is used for indicating the preference degree of the target user account for each virtual scene and different scene progress of each virtual scene. By collecting the historical interaction behaviors of the user account in each virtual scene, the preference degree of the user on each virtual scene and the scene progress in each scene can be determined, and then the virtual scene entrance can be recommended to the user based on the preference degree.
For example, for a virtual scene where a user account enters, the cloud game server acquires and records interaction behaviors of the user account in the virtual scene, such as explored areas, equipped virtual props, scene positions where the user account remains longer, and the like, and the cloud game server determines the preference degree of a target user account on the virtual scene according to the game data.
Optionally, the first server can also directly obtain the preference degree of the user account on each virtual scene and the different scene progress of each virtual scene from the second server. The second server is a game server corresponding to a second client, and the second client is a game client corresponding to a virtual scene.
For example, the game client sends the game data of the user account to the game server, the game server extracts the interaction behavior of the user account from the game data, determines the preference degree of the user account for the virtual scene, feeds back to the game client, and sends the game client to the first server.
In some embodiments, the first server generates at least one virtual scene entry based on the preference information of the user account for each virtual scene. Correspondingly, the first server determines at least one target virtual scene, that is, a virtual scene of interest to the user, according to the preference information. Then, for any target virtual scene, the first server obtains a plurality of scene segmentation points corresponding to the target virtual scene, selects at least one scene segmentation point from them according to the historical interaction behavior of the user account in the target virtual scene, and segments the target virtual scene to obtain at least one virtual scene entry, where different virtual scene entries correspond to different scene progress of the target virtual scene. For any scene segmentation point of a target virtual scene, the first server obtains the scene data of the target virtual scene at that segmentation point, that is, the data of the target virtual scene, which is used to load the scene content when the target virtual scene is run. Optionally, the first server further obtains the scene state data of the target virtual scene at the segmentation point; the scene state data represents the state of the first virtual object corresponding to the user account at that segmentation point and is used to load the first virtual object in the target virtual scene.
Optionally, when the first virtual object corresponding to the user account has entered the target virtual scene once, the scene state data is the instantaneous state data of the first virtual object obtained by the first server at the scene segmentation point; when the first virtual object has entered the target virtual scene several times, the scene state data is the average state data of the first virtual object at the scene segmentation point, obtained by the first server over those entries.
For example, taking the target virtual scene as a training range in an FPS game as an example, the first server obtains 2 scene segmentation points of the training range: at the first, the first virtual object is using a virtual submachine gun; at the second, the first virtual object is using a virtual sniper gun and aiming at the target with an 8x scope. The first server segments the training scene according to the historical interaction behavior of the user account in it, namely the repeated use of the virtual submachine gun and the virtual sniper gun, to obtain two virtual scene entries. The scene progress of the training range corresponding to one virtual scene entry is that the first virtual object corresponding to the user account is equipped with a virtual submachine gun and uses a 4x scope at a position 400 meters from the target; the scene progress corresponding to the other virtual scene entry is that the first virtual object is equipped with a virtual sniper gun and uses an 8x scope at a position 1500 meters from the target.
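When the account has entered the scene several times, the scene state data at a segmentation point can be an average over those entries, as noted above. A minimal sketch of that averaging, with hypothetical state fields:

```python
def average_state(snapshots: list) -> dict:
    """Average the state data of the first virtual object at one scene
    segmentation point, taken over several entries into the target scene."""
    n = len(snapshots)
    return {
        "distance_to_target": sum(s["distance_to_target"] for s in snapshots) / n,
        "health": sum(s["health"] for s in snapshots) / n,
    }

# Two entries into the training range, observed at the same segmentation point.
snapshots = [
    {"distance_to_target": 380.0, "health": 100.0},
    {"distance_to_target": 420.0, "health": 90.0},
]
avg = average_state(snapshots)
```

Averaging smooths out one-off variation, so the generated entry restores a state close to the account's habitual one (here, roughly the 400-meter practice position).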
In some embodiments, the virtual scene may further include a plurality of sub-scenes, such as sub-copies in the copy, and the first server may be configured to segment sub-copies of interest to the user account according to the historical interaction behavior of the user account in the copy, so that the user account may directly enter the sub-copies. Correspondingly, for any virtual scene comprising sub-copies, the first server determines at least one target sub-scene preferred by the user account according to the historical interaction behavior of the user account in the virtual scene, and for any target sub-scene, the first server generates a virtual scene entry corresponding to the target sub-scene.
For example, a virtual scene is a scene that includes 8 rooms, each room corresponding to a sub-scene, through which 8 rooms the virtual object needs to pass in turn. The first server determines three rooms interested by a user according to the historical interaction behavior of the user account in the 8 rooms, so that virtual scene inlets corresponding to the 3 rooms are generated, and the user can directly enter any one room of the 3 rooms without sequentially passing through each room.
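Selecting the sub-scenes of interest can be as simple as ranking rooms by some measure of the account's historical engagement. The sketch below uses dwell time as that measure, which is one plausible signal, not one mandated by the embodiment:

```python
def preferred_rooms(dwell_seconds: dict, top_n: int = 3) -> list:
    """Pick the sub-scenes (rooms) the account engaged with most, so a direct
    virtual scene entry can be generated for each of them."""
    ranked = sorted(dwell_seconds, key=dwell_seconds.get, reverse=True)
    return ranked[:top_n]

# Seconds spent in each of the 8 rooms across past sessions (illustrative).
dwell = {"room_1": 30, "room_2": 240, "room_3": 15, "room_4": 310,
         "room_5": 12, "room_6": 190, "room_7": 8, "room_8": 20}
entries = preferred_rooms(dwell)
```

The three top-ranked rooms each get their own entry, letting the user skip the rooms that are, for them, only transit.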
In some embodiments, the user account can share the scene progress of a virtual scene with other user accounts. Correspondingly, when the terminal displays the scene picture of any virtual scene, it displays guiding information of that virtual scene in the scene picture, where the guiding information prompts the user whether to share the scene progress of the virtual scene. Optionally, the terminal displays the guiding information only when the user account enters the virtual scene for the first time, and does not display it again on subsequent entries. By displaying the guiding information, a user can choose to share the game progress they are accustomed to with other users, which improves communication among users, makes users more willing to play the game, and improves user stickiness.
In some embodiments, the target user account can also log in to the game client directly, and the game server corresponding to the game client generates the at least one virtual scene entry. Alternatively, the target user account logs in to the cloud game client and interacts with the game client through the cloud game server, with the cloud game client and the cloud game server acting as a pass-through for the data.
It should be noted that, to make the process of generating at least one virtual scene entry described in the above embodiments easier to understand, refer to fig. 6, which is a schematic flow chart of generating a virtual scene entry according to an embodiment of the present application. As shown in fig. 6, the method comprises the following steps:
601. A user account enters a virtual scene.
602. If the user account enters the current virtual scene for the first time, the terminal displays guiding information to ask the user whether to share.
603. After the user finishes setting whether to share, the terminal displays the scene picture of the virtual scene.
604. The first server requests a virtual scene recognition policy from the game server.
605. The game server returns the virtual scene recognition policy to the first server.
606. The first server identifies the scene type of the current virtual scene, records the interaction behavior of the user account in the virtual scene, and uploads it to the game server.
607. The game server determines the preference degree of the user account for the current virtual scene according to the data uploaded by the first server, and feeds the result back to the first server.
608. Steps 606 and 607 are repeated until the user account leaves the virtual scene.
502. The terminal displays at least one virtual scene entry.
In the embodiment of the present application, this step is similar to step 201 and is not described herein again.
In some embodiments, the terminal displays the at least one virtual scene entry in a list based on the first client, each virtual scene entry being a text link in the list.
In some embodiments, the terminal displays at least one virtual scene portal in the form of a picture link based on the first client. Optionally, the picture link further displays introduction information of the virtual scene corresponding to each virtual scene entry, where the introduction information includes information such as a name of the virtual scene, a scene progress, and a virtual character status.
503. In response to a triggering operation on a target virtual scene entry, the terminal sends a loading request to the first server, where the loading request instructs the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return the scene picture of the target virtual scene.
In the embodiment of the present application, this step is similar to step 202 and is not described herein again.
In some embodiments, the target virtual scene needs to match at least one of teammates or opponents before it can run, and the loading request further carries matching information. The matching information indicates a matching manner of a second virtual object in the target virtual scene, where the second virtual object is any one of a virtual object belonging to the same camp as the first virtual object, a virtual object belonging to the opposing camp, and a neutral virtual object, and the first virtual object is the virtual object corresponding to the user account. Correspondingly, before sending the loading request to the first server, the terminal also displays a matching information setting page for setting the matching manner of the second virtual object. The terminal then generates the matching information according to the matching manner set on that page. The matching manner can be set through a check box, a radio button, a drop-down menu, and the like, which is not limited in this application.
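A loading request carrying matching information could be modeled as follows. The field names and the wire format are illustrative assumptions; this application does not fix a particular request structure.

```python
# Hypothetical sketch of the loading request described above: it carries
# the triggered entry, the corresponding scene progress, and matching
# information indicating which second virtual objects to match.
from dataclasses import dataclass

@dataclass
class LoadRequest:
    scene_entry_id: str            # the triggered target virtual scene entry
    scene_progress: str            # scene progress corresponding to the entry
    match_teammates: bool = False  # matching information: whether to match
    match_opponents: bool = False  # same-camp / opposing-camp objects

    def matching_info(self):
        """Derive the matching manner from the check-box style settings."""
        modes = []
        if self.match_teammates:
            modes.append("teammates")
        if self.match_opponents:
            modes.append("opponents")
        return modes

req = LoadRequest("entry-42", "checkpoint-3", match_teammates=True)
```

The empty `matching_info()` case corresponds to the prompt described below, where the terminal asks the user to make a selection before sending the request.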
For example, the terminal displays a matching information setting page based on a cloud game client, and the page displays a check box for matching teammates and a check box for matching opponents. If the user checks only the teammate check box, the terminal, based on the first client, detects the set matching manner as matching teammates only. If the user checks only the opponent check box, the terminal detects the set matching manner as matching opponents only. If the user checks both check boxes, the terminal detects the set matching manner as matching both teammates and opponents. Of course, if the user checks neither check box, the terminal displays matching prompt information based on the first client, prompting the user to make a selection.
504. The first server receives a loading request of the terminal.
In the embodiment of the present application, this step is similar to step 402 and is not described herein again.
In some embodiments, the first server can run, in advance, the virtual scenes corresponding to the at least one virtual scene entry provided to the terminal, so that after receiving the loading request, the first server can directly return the scene picture of the target virtual scene to the terminal. This saves the time the user would otherwise spend waiting for the first server to start the target virtual scene, and improves human-computer interaction efficiency.
505. The first server runs the target virtual scene based on the scene progress and sends a scene picture of the target virtual scene to the terminal.
In the embodiment of the present application, this step is similar to step 403 and is not described herein again.
In some embodiments, the first server takes the scene progress as the running starting point, acquires the virtual scene data corresponding to the scene progress, and then runs the target virtual scene based on the virtual scene data. The virtual scene data corresponding to the scene progress relates only to elements of the virtual scene itself, such as buildings, vehicles, NPCs, and plot progression.
In some embodiments, the first server takes the scene progress as the running starting point and acquires the virtual scene data corresponding to the scene progress. It then acquires the historical scene state data of the user account and runs the target virtual scene based on the virtual scene data, loading the first virtual object in the scene based on the historical scene state data. The historical scene state data indicates the state of the first virtual object of the user account at a historical moment, such as its health value, standing or squatting pose, and position in the virtual scene.
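The two-stage loading just described (scene data keyed by progress, then the first virtual object restored from historical state) can be sketched as follows. All names, the default-state values, and the dictionary-based stores are assumptions for illustration only.

```python
# Illustrative sketch: the first server uses the scene progress as the
# running starting point, loads the virtual scene data for that progress,
# then restores the first virtual object from historical scene state data.

def run_target_scene(scene_progress, scene_data_store, state_store, account):
    """Assemble the running scene: scene data comes from the progress,
    the first virtual object's state from the account's history (with
    assumed defaults when no history exists)."""
    scene_data = scene_data_store[scene_progress]   # buildings, NPCs, plot...
    state = state_store.get(account, {"health": 100, "pose": "standing",
                                      "position": (0, 0)})
    return {"scene": scene_data, "first_virtual_object": state}

scene_store = {"checkpoint-3": {"buildings": ["tower"], "npcs": 2}}
states = {"user-1": {"health": 55, "pose": "squatting", "position": (10, 4)}}
world = run_target_scene("checkpoint-3", scene_store, states, "user-1")
```

Note how the scene data and the per-account state are kept separate, mirroring the split between virtual scene data and historical scene state data described above.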
In some embodiments, the first server loads the first virtual object in the target virtual scene with the virtual props in the historical scene state data, such as virtual firearms and virtual vehicles.
In some embodiments, the first server loads the first virtual object in the target virtual scene with the corresponding action state in the historical scene state data, such as the first virtual object being located at the top of a mountain or on a plain, or its health value not being at the maximum.
For example, taking the target virtual scene as a training field in an FPS game, a user expects virtual props to be preloaded in the training field — for instance, a frequently used virtual submachine gun fitted with a 4x scope for mid-range target practice. When the first server runs the target virtual scene, it takes the scene progress as the running starting point and acquires the virtual scene data corresponding to that progress, namely the virtual submachine gun, the 4x scope, the first virtual object positioned at mid-range from the target, and so on. This achieves the effect of preloading the virtual props, so the user can start training directly from the scene progress at which the props are already loaded.
In some embodiments, the loading request further carries matching information indicating the matching manner of a second virtual object in the target virtual scene. The second virtual object is any one of a virtual object belonging to the same camp as the first virtual object, a virtual object belonging to the opposing camp, and a neutral virtual object, where the first virtual object is the virtual object corresponding to the user account. Correspondingly, the first server can also load the second virtual object in the target virtual scene based on the matching information.
For example, taking the target virtual scene as a multi-player PVE scene, the first server needs both to load virtual props for the user and to match suitable teammates. The first server then matches teammates according to the matching information carried in the loading request, achieving relatively good teammate matching, improving training effect or game experience, and improving user stickiness.
For another example, taking the target virtual scene as a 1VS1 scene, the first server needs both to load virtual props for the user and to match a suitable opponent, and does so according to the matching information carried in the loading request. Of course, the user can also invite an opponent directly, in which case the matching information includes the account identification of the invited opponent.
For another example, taking the target virtual scene as a multi-player PVP scene, the first server needs both to load virtual props for the user and to match suitable teammates and opponents, and does so according to the matching information carried in the loading request. Of course, the user can also invite teammates and opponents directly, in which case the matching information includes the account identifications of the invited teammates and opponents.
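The three examples above (multi-player PVE, 1VS1, multi-player PVP) differ only in which kinds of second virtual objects need matching, which can be summarized in a small lookup. The mode names are assumptions for this sketch.

```python
# Hypothetical sketch: which second virtual objects the first server needs
# to match, per scene mode, for the three examples described above.

def second_objects_to_match(scene_mode):
    """Return the kinds of second virtual objects that must be matched
    before the target virtual scene of the given mode can run."""
    return {
        "pve": {"teammates": True,  "opponents": False},  # multi-player PVE
        "1v1": {"teammates": False, "opponents": True},   # 1VS1 scene
        "pvp": {"teammates": True,  "opponents": True},   # multi-player PVP
    }[scene_mode]
```

Invited teammates or opponents, as noted above, would simply replace the automatically matched ones for the corresponding entry.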
506. If the terminal receives the scene picture, the terminal displays the scene picture.
In the embodiment of the present application, this step is similar to step 203 and is not described herein again.
In some embodiments, the scene picture is obtained based on the scene progress and the historical scene state data of the user account; see step 505, which is not described herein again.
In some embodiments, a first virtual object is displayed in the scene picture, equipped with the virtual props corresponding to the historical scene state data; see step 505, which is not described herein again.
In some embodiments, a first virtual object is displayed in the scene picture, in the action state corresponding to the historical scene state data; see step 505, which is not described herein again.
In some embodiments, a second virtual object matched based on the matching information is displayed in the scene picture; see step 505, which is not described herein again.
It should be noted that, to make the flow described in the above steps easier to understand, an example is described with reference to fig. 7, in which the first client installed and run on the terminal is a cloud game client, the first server is a cloud game server, the second client is a game client, and the second server is a game server. Fig. 7 is a schematic diagram of another interaction flow provided according to an embodiment of the present application. As shown in fig. 7, the process includes the following steps. 701. The cloud game client displays at least one virtual scene entry. 702. The cloud game client sends the target virtual scene entry selected by the user to the cloud game server. 703. The cloud game server sends the target virtual scene entry selected by the user to the corresponding game client. 704. If the game client determines that the target virtual scene corresponding to the target virtual scene entry needs to match a second virtual object, step 705 is performed; otherwise, step 709 is performed. 705. The cloud game client displays guiding information that guides the user to select a matching manner for the second virtual object; the guiding information in this step is transparently relayed through the cloud game client and the cloud game server. 706. The cloud game client sends the matching manner selected by the user to the game client; the matching manner in this step is likewise transparently relayed through the cloud game client and the cloud game server. 707. The game client sends the target virtual scene entry and the matching manner to the game server. 708. The game server feeds back the environment information of the target virtual scene corresponding to the target virtual scene entry and the second virtual object to the game client. 709. The game client runs the target virtual scene on the cloud game server based on the environment information. 710. The cloud game server renders and computes the scene picture of the target virtual scene in real time to obtain the scene picture. 711. The cloud game client displays the scene picture.
In the embodiment of the application, virtual scene entries are provided for the terminal, so that a user can view and trigger any virtual scene entry according to preference. Because each virtual scene entry corresponds to a scene progress of the target virtual scene, the terminal can request the first server to run the target virtual scene based on the scene progress corresponding to the triggered entry, and thus quickly display the scene picture of the corresponding virtual scene. The user can reach the scene progress without complicated operations, which significantly saves time and improves human-computer interaction efficiency and user stickiness.
Fig. 8 is a diagram of a system architecture according to an embodiment of the present application. As shown in fig. 8, the system architecture includes a cloud game client, a cloud game server, a game client, and a game server. The cloud game client is the first client installed and run on the terminal, the cloud game server is the first server, the game client is the second client deployed on the cloud game server, and the game server is the second server.
The cloud game client is used for displaying at least one virtual scene entry, the virtual scene entry corresponds to any scene progress of a target virtual scene, and the scene progress is determined based on historical interaction behaviors of a user account in the at least one virtual scene;
The cloud game client is further used for responding to triggering operation of the target virtual scene entrance and sending a loading request to the cloud game server;
The cloud game server is used for receiving a loading request of the cloud game client, where the loading request instructs the cloud game server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return the scene picture of the target virtual scene;
the cloud game server is also used for running the target virtual scene based on the scene progress and sending a scene picture of the target virtual scene to the cloud game client;
the game server is used for providing background service for the game client;
the cloud game client is also used for displaying the scene picture if the scene picture is received.
The cloud game client comprises a user interaction service, which allows the user to choose a game mode: either the traditional game mode or the mode designed in this scheme, i.e., directly entering a recommended game scene, such as a game scene the user prefers or a game scene shared by other users.
The cloud game server comprises a first scene rendering service and a user matching service. The first scene rendering service causes the game client to send a loading request according to the virtual scene entry selected by the user at the cloud game client, loads the corresponding virtual scene, and performs the necessary rendering and real-time computation for virtual scenes that cannot be loaded directly. The user matching service assists players in inviting friends or matching automatically in virtual scenes that involve player teaming or player-versus-player confrontation.
The game client comprises a second scene rendering service, a data reporting service, and a scene recognition service. The second scene rendering service fetches the relevant virtual scene data from the game server and loads the corresponding virtual scene in cooperation with the first scene rendering service of the cloud game server. The data reporting service reports the running information of the current virtual scene while the game is running, so that the user can next time experience the virtual scene at a different scene progress. The scene recognition service pulls a scene recognition policy from the game server and recognizes the running information of the currently running virtual scene based on that policy.
The game server comprises a scene recognition policy service, a scene data storage service, a scene data loading service, and a scene data analysis service. The scene recognition policy service maintains recognition and classification policies for the various virtual scenes and delivers them to the game client for virtual scene recognition. The scene data storage service records the state data of the user account according to the running information reported by the game client, so that when the user re-enters a virtual scene it is the same as when the user left it. The scene data loading service receives requests from the second scene rendering service and sends the state data of the user account to the game client, so as to render the virtual object corresponding to the user account in the virtual scene. The scene data analysis service analyzes the user account's preference degree for each virtual scene according to the running information stored by the scene data storage service, combined with the interaction behavior of the user account in the virtual scene, and thereby recommends virtual scene entries for the user account.
Fig. 9 is a block diagram of a display device for a virtual scene according to an embodiment of the present application. The device is used for executing the steps in the virtual scene display method, and referring to fig. 9, the device comprises a first display module 901, a request sending module 902 and a second display module 903.
The first display module 901 is configured to display at least one virtual scene entry, where the virtual scene entry corresponds to any scene progress of the target virtual scene, and the scene progress is determined based on a historical interaction behavior of the user account in the at least one virtual scene;
A request sending module 902, configured to send a loading request to a first server in response to a trigger operation on a target virtual scene entry, where the loading request instructs the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return the scene picture of the target virtual scene;
the second display module 903 is configured to display the scene if the scene is received.
In some embodiments, the scene picture is loaded based on the scene progress and the historical scene state data of the user account.
In some embodiments, a first virtual object is displayed in the scene screen, the first virtual object being equipped with a virtual prop corresponding to the historical scene state data.
In some embodiments, a first virtual object is displayed in the scene, the first virtual object being in an action state corresponding to the historical scene state data.
In some embodiments, the loading request further carries matching information, where the matching information indicates a matching manner of a second virtual object in the target virtual scene, and the second virtual object is any one of a virtual object belonging to the same camp as the first virtual object, a virtual object belonging to the opposing camp, and a neutral virtual object;
referring to fig. 10, fig. 10 is a block diagram of another display device for virtual scenes according to an embodiment of the present application, where the device further includes:
a third display module 904, configured to display a matching information setting page, where the matching information setting page is used to set a matching manner of the second virtual object;
The information generating module 905 is configured to generate the matching information according to the matching manner set in the matching information setting page.
In some embodiments, a second virtual object that is matched based on the matching information is displayed in the scene picture.
In some embodiments, referring to fig. 10, the apparatus further comprises:
a fourth display module 906, configured to display a scene frame of any virtual scene;
the fourth display module 906 is further configured to display, in a scene frame of the any virtual scene, guiding information of the any virtual scene, where the guiding information is used to prompt whether to share a scene progress of the any virtual scene.
According to the scheme provided by the embodiment of the application, by displaying at least one virtual scene entry, a user can view and trigger any virtual scene entry according to preference. Because the virtual scene entry corresponds to a scene progress of the target virtual scene, the terminal can request the first server to run the target virtual scene based on that scene progress, and thus display the scene picture of the target virtual scene at that progress. The user can reach the scene progress without complicated operations, which significantly saves time and improves human-computer interaction efficiency and user stickiness.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
It should be noted that, when the display device for a virtual scene provided in the above embodiment displays a virtual scene, the division into the above functional modules is merely illustrative. In practical application, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the display device of the virtual scene provided in the above embodiment and the method embodiment of displaying the virtual scene belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 11 is a block diagram of another processing apparatus for virtual scenes according to an embodiment of the present application. The device is used for executing the steps in the processing method of the virtual scene, and referring to fig. 11, the device comprises an entry providing module 1101, a request receiving module 1102 and a running module 1103.
An entry providing module 1101, configured to provide at least one virtual scene entry for a terminal, where the virtual scene entry corresponds to any scene progress of a target virtual scene, and the scene progress is determined based on a historical interaction behavior of a user account of the terminal in the at least one virtual scene;
A request receiving module 1102, configured to receive a loading request of the terminal, where the loading request instructs running the target virtual scene based on the scene progress and returning the scene picture of the target virtual scene;
and the operation module 1103 is configured to operate the target virtual scene based on the scene progress, and send a scene picture of the target virtual scene to the terminal.
In some embodiments, the operation module 1103 is configured to take the scene progress as an operation start point, obtain virtual scene data corresponding to the scene progress, and operate the target virtual scene based on the virtual scene data.
In some embodiments, referring to fig. 12, fig. 12 is a block diagram of a processing apparatus for a virtual scene provided according to an embodiment of the present application, where the running module 1103 includes:
a first obtaining unit 11031, configured to obtain virtual scene data corresponding to the scene progress with the scene progress as an operation start point;
A second obtaining unit 11032, configured to obtain historical scene status data of the user account;
An execution unit 11033 is configured to execute the target virtual scene based on the virtual scene data, where the first virtual object is loaded based on the historical scene status data.
In some embodiments, the execution unit 11033 is configured to load the first virtual object with the virtual prop in the historical scene state data in the target virtual scene.
In some embodiments, the execution unit 11033 is configured to load the first virtual object with a corresponding action state in the historical scene state data in the target virtual scene.
In some embodiments, referring to fig. 12, the apparatus further comprises:
the acquiring module 1104 is configured to acquire a historical interaction behavior of the user account in any virtual scene;
The entry generation module 1105 is configured to generate at least one virtual scene entry of the target virtual scene based on at least one scene progress in the target virtual scene that matches the historical interaction behavior.
In some embodiments, the at least one scene progress matching the historical interaction behavior refers to at least one scene progress for which the number of times the historical interaction behavior occurs meets the behavior condition.
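The behavior condition just described can be sketched as a simple frequency filter. The threshold value and the dictionary keyed by scene progress are assumptions for illustration; the application does not fix a specific condition.

```python
# Illustrative sketch of the behavior condition: keep only the scene
# progresses whose historical interaction count reaches a threshold.

BEHAVIOR_THRESHOLD = 3  # assumed value, not specified in this application

def matching_progresses(interaction_counts, threshold=BEHAVIOR_THRESHOLD):
    """interaction_counts maps scene progress -> number of times the
    historical interaction behavior occurred at that progress. Returns
    the progresses that meet the behavior condition."""
    return [p for p, n in interaction_counts.items() if n >= threshold]

counts = {"checkpoint-1": 5, "checkpoint-2": 1, "checkpoint-3": 3}
entries = matching_progresses(counts)  # progresses that get a scene entry
```

Each returned progress would then be turned into one virtual scene entry of the target virtual scene by the entry generation module.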
In some embodiments, referring to fig. 12, the apparatus further comprises:
An identifying module 1106, configured to identify a scene type of a scene frame of any virtual scene of the terminal;
A determining module 1107 is configured to determine the target virtual scene based on the scene type.
In some embodiments, the determining module 1107 is configured to determine a target scene type based on the occurrence number of the scene type, where the target scene type is a scene type whose occurrence number meets the occurrence condition, and determine any virtual scene belonging to the target scene type as the target virtual scene.
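The occurrence condition used by the determining module can be sketched in the same spirit. The minimum-occurrence value is an assumption for the sketch; the application only requires that the count "meets the occurrence condition".

```python
# Hypothetical sketch of the determining module: count how often each
# scene type appears among the identified scene pictures, and keep the
# types whose occurrence count meets an assumed occurrence condition.
from collections import Counter

def target_scene_types(observed_types, min_occurrences=2):
    """observed_types is the sequence of scene types identified from the
    terminal's scene pictures; return the target scene types."""
    counts = Counter(observed_types)
    return {t for t, n in counts.items() if n >= min_occurrences}

observed = ["practice", "battle_royale", "practice", "practice", "pvp"]
targets = target_scene_types(observed)  # types meeting the condition
```

Any virtual scene belonging to a returned scene type would then be determined as a target virtual scene, as described above.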
In some embodiments, the loading request further carries matching information, where the matching information indicates a matching manner of a second virtual object in the target virtual scene, and the second virtual object is any one of a virtual object belonging to the same camp as the first virtual object, a virtual object belonging to the opposing camp, and a neutral virtual object;
referring to fig. 12, the apparatus further includes:
the loading module 1108 is configured to load the second virtual object in the target virtual scene based on the matching information.
According to the scheme provided by the embodiment of the application, virtual scene entries are provided for the terminal, so that the first server can run the virtual scene and return the scene picture based on the loading request sent by the terminal, and the terminal can quickly display the scene picture of the corresponding virtual scene based on the virtual scene entry selected by the user. Because the scene picture is obtained by running from the scene progress, the time the user would otherwise spend logging into the game, controlling the virtual object to travel to the dungeon entrance, waiting for the dungeon to load, and advancing the dungeon progress to the desired point is saved, which improves human-computer interaction efficiency.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
It should be noted that, when the processing device for a virtual scene provided in the above embodiment processes a virtual scene, the division into the above functional modules is merely illustrative. In practical application, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the processing device of the virtual scene provided in the above embodiment and the processing method embodiment of the virtual scene belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
In the embodiment of the present application, the computer device can be configured as a terminal or a server. When the computer device is configured as a terminal, the technical solution provided by the embodiment of the present application may be implemented with the terminal as the execution body; when the computer device is configured as a server, it may be implemented with the server as the execution body. The technical solution may also be implemented through interaction between the terminal and the server, which is not limited by the embodiment of the present application.
Fig. 13 is a block diagram of a terminal provided according to an embodiment of the present application, for the case where the computer device is configured as a terminal. The terminal 1300 may be a portable mobile terminal such as a smart phone, a tablet, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook, or a desktop computer. Terminal 1300 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1300 includes a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. Processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, processor 1301 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 1301 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one computer program for execution by processor 1301 to implement the method of displaying a virtual scene provided by an embodiment of the method in the present application.
In some embodiments, terminal 1300 may also optionally include a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, and a power supply 1309.
Peripheral interface 1303 may be used to connect at least one I/O (Input/Output)-related peripheral device to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of processor 1301, memory 1302, and peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may further include NFC (Near Field Communication)-related circuits, which is not limited by the present application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display, it also has the ability to capture touch signals on or above its surface. A touch signal may be input to processor 1301 as a control signal for processing. In this case, the display screen 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, disposed on the front panel of the terminal 1300; in other embodiments, there may be at least two display screens 1305, disposed on different surfaces of the terminal 1300 or in a folded design; in still other embodiments, the display screen 1305 may be a flexible display disposed on a curved or folded surface of the terminal 1300. The display screen 1305 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 1305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves of a user and the environment, convert the sound waves into electrical signals, and input them to processor 1301 for processing, or input them to the radio frequency circuit 1304 for voice communication. For stereo acquisition or noise reduction purposes, a plurality of microphones may be provided at different portions of the terminal 1300. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1307 may also include a headphone jack.
A power supply 1309 is used to power the various components in the terminal 1300. The power supply 1309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to, an acceleration sensor 1311, a gyroscope sensor 1312, a pressure sensor 1313, an optical sensor 1315, and a proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
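The landscape/portrait decision described above can be sketched as a comparison of the gravity components reported by the accelerometer. The following is an illustrative sketch only, not the patented implementation; the function name and the dominant-axis rule are assumptions:

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from gravity components (m/s^2) measured
    along the device's x (short) and y (long) axes.

    Gravity dominates the y axis when the device is held upright,
    and the x axis when the device is held sideways.
    """
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

# Device upright: gravity mostly along the y axis
print(choose_orientation(0.5, 9.7))   # → portrait
# Device on its side: gravity mostly along the x axis
print(choose_orientation(9.7, 0.5))   # → landscape
```

A real implementation would also debounce the decision (e.g. require the dominant axis to persist for some time) so the UI does not flip on transient motion.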
The gyro sensor 1312 may detect a body direction and a rotation angle of the terminal 1300, and the gyro sensor 1312 may collect a 3D motion of the user on the terminal 1300 in cooperation with the acceleration sensor 1311. Based on the data collected by gyro sensor 1312, processor 1301 can realize functions such as motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at photographing, game control, and inertial navigation.
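Combining the gyroscope's angular rate with the accelerometer's gravity reading, as described above, is commonly done with a complementary filter. The sketch below illustrates the idea for a single tilt axis; the function name, the blend coefficient 0.98, and the axis convention are assumptions for illustration, not details from the patent:

```python
import math

def complementary_filter(angle: float, gyro_rate: float,
                         ax: float, az: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Estimate a tilt angle (radians) by blending the integrated gyro
    rate (smooth, but drifts over time) with the accelerometer's gravity
    direction (noisy, but drift-free)."""
    gyro_angle = angle + gyro_rate * dt   # short-term: integrate rotation
    accel_angle = math.atan2(ax, az)      # long-term: gravity direction
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

With a stationary, tilted device (zero gyro rate), repeated updates converge toward the tilt angle implied by gravity, while during fast motion the gyro term dominates.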
Pressure sensor 1313 may be disposed on a side frame of terminal 1300 and/or below display screen 1305. When the pressure sensor 1313 is disposed on a side frame of the terminal 1300, it can detect a user's grip signal on the terminal 1300, and processor 1301 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed below the display screen 1305, processor 1301 controls the operability controls on the UI according to the user's pressure operation on the display screen 1305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1315. Specifically, the display brightness of the display screen 1305 is turned up when the ambient light intensity is high, and the display brightness of the display screen 1305 is turned down when the ambient light intensity is low. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1315.
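The brightness rule just described (brighter surroundings, brighter screen) can be sketched as a clamped linear mapping from ambient light to display brightness. This is a hedged sketch; the function name, the 1000-lux range, and the brightness bounds are assumed values, not taken from the patent:

```python
def display_brightness(lux: float, min_b: float = 0.1, max_b: float = 1.0,
                       lux_range: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [min_b, max_b]: higher ambient light yields a brighter screen."""
    level = min(max(lux / lux_range, 0.0), 1.0)  # clamp to [0, 1]
    return min_b + level * (max_b - min_b)
```

The same clamped level could equally drive the camera's exposure parameters, mirroring the dynamic shooting-parameter adjustment mentioned above.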
A proximity sensor 1316, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1300. The proximity sensor 1316 is used to collect the distance between the user and the front of the terminal 1300. In one embodiment, the processor 1301 controls the display screen 1305 to switch from the on-screen state to the off-screen state when the proximity sensor 1316 detects that the distance between the user and the front surface of the terminal 1300 becomes gradually smaller, and the processor 1301 controls the display screen 1305 to switch from the off-screen state to the on-screen state when the proximity sensor 1316 detects that the distance between the user and the front surface of the terminal 1300 becomes gradually larger.
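The on-screen/off-screen switching described in this paragraph amounts to a small state machine driven by whether the measured distance is shrinking or growing. The sketch below is illustrative only; the function name, the centimetre units, and the 5.0 threshold are assumptions:

```python
def next_screen_state(state: str, prev_dist: float, dist: float,
                      threshold: float = 5.0) -> str:
    """Turn the screen off when the user moves toward the front panel
    (e.g. raising the phone to the ear) and back on when the user
    moves away again. Distances are in centimetres."""
    if state == "on" and dist < prev_dist and dist < threshold:
        return "off"   # distance gradually becoming smaller
    if state == "off" and dist > prev_dist and dist >= threshold:
        return "on"    # distance gradually becoming larger
    return state
```

Requiring both a direction of change and a threshold crossing keeps the screen from flickering when the distance hovers near the boundary.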
Those skilled in the art will appreciate that the structure shown in fig. 13 is not limiting of terminal 1300 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
When the computer device is configured as a server, fig. 14 is a block diagram of a server according to an embodiment of the present application. The server 1400 may vary considerably depending on configuration or performance, and may include one or more processors (Central Processing Units, CPUs) 1401 and one or more memories 1402, where at least one computer program is stored in the memories 1402 and is loaded and executed by the processors 1401 to implement the method for processing a virtual scene provided by each of the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to implement the functions of the device, which are not described herein.
The embodiments of the present application also provide a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor of a computer device to implement the operations performed by the computer device in the method for displaying a virtual scene or in the method for processing a virtual scene in the embodiments of the present application. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application also provide a computer program product comprising computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, causing the computer device to perform the method for displaying a virtual scene or the method for processing a virtual scene provided in the various alternative implementations described above.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the application; the protection scope of the application is defined by the appended claims.

Claims (20)

1. A method for displaying a virtual scene, the method comprising:
displaying at least one virtual scene entry, wherein the virtual scene entry corresponds to any scene progress of a target virtual scene, the scene progress is determined based on historical interaction behavior of a user account in at least one virtual scene, the virtual scene entry is generated based on preference information of the user account, and the preference information is used to indicate the user account's degree of preference for each virtual scene and for different scene progresses of each virtual scene;
in response to a trigger operation on a target virtual scene entry, sending a loading request to a first server, wherein the loading request is used to instruct the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return a scene picture of the target virtual scene;
and if the scene picture is received, displaying the scene picture.
2. The method of claim 1, wherein the scene picture is loaded based on the scene progress and historical scene state data of the user account.
3. The method of claim 2, wherein a first virtual object is displayed in the scene picture, the first virtual object being equipped with virtual props corresponding to the historical scene state data.
4. The method of claim 2, wherein a first virtual object is displayed in the scene picture, the first virtual object being in an action state corresponding to the historical scene state data.
5. The method of claim 1, wherein the loading request further carries matching information, the matching information being used to indicate a matching manner of a second virtual object in the target virtual scene, the second virtual object being any one of a virtual object belonging to the same camp as the first virtual object, a virtual object belonging to a camp opposing the first virtual object, and a neutral virtual object;
before the sending of the loading request to the first server, the method further comprises:
displaying a matching information setting page, wherein the matching information setting page is used to set the matching manner of the second virtual object;
and generating the matching information according to the matching manner set on the matching information setting page.
6. The method of claim 5, wherein a second virtual object matched based on the matching information is displayed in the scene picture.
7. The method of claim 1, wherein prior to the displaying the at least one virtual scene entry, the method further comprises:
displaying a scene picture of any virtual scene;
and displaying guidance information of the virtual scene in the scene picture, wherein the guidance information is used to prompt whether to share the scene progress of the virtual scene.
8. A method for processing a virtual scene, the method comprising:
providing at least one virtual scene entry to a terminal, wherein the virtual scene entry corresponds to any scene progress of a target virtual scene, the scene progress is determined based on historical interaction behavior of a user account of the terminal in at least one virtual scene, the virtual scene entry is generated based on preference information of the user account, and the preference information is used to indicate the user account's degree of preference for each virtual scene and for different scene progresses of each virtual scene;
receiving a loading request from the terminal, wherein the loading request is used to instruct running the target virtual scene based on the scene progress and returning a scene picture of the target virtual scene;
and running the target virtual scene based on the scene progress, and sending a scene picture of the target virtual scene to the terminal.
9. The method of claim 8, wherein the running the target virtual scene based on the scene progress comprises:
taking the scene progress as an operation starting point, and acquiring virtual scene data corresponding to the scene progress;
and running the target virtual scene based on the virtual scene data.
10. The method of claim 8, wherein the running the target virtual scene based on the scene progress comprises:
taking the scene progress as an operation starting point, and acquiring virtual scene data corresponding to the scene progress;
acquiring historical scene state data of the user account;
And running the target virtual scene based on the virtual scene data, and loading a first virtual object in the target virtual scene based on the historical scene state data.
11. The method of claim 10, wherein loading a first virtual object in the target virtual scene based on the historical scene state data comprises:
and loading, for the first virtual object in the target virtual scene, the virtual props in the historical scene state data.
12. The method of claim 10, wherein loading a first virtual object in the target virtual scene based on the historical scene state data comprises:
and loading the first virtual object in the target virtual scene into the action state corresponding to the historical scene state data.
13. The method of claim 8, wherein prior to providing the terminal with the at least one virtual scene entry, the method further comprises:
Acquiring historical interaction behaviors of the user account in any virtual scene;
and generating at least one virtual scene entry of the target virtual scene based on at least one scene progress matched with the historical interaction behavior in the target virtual scene.
14. The method of claim 13, wherein the at least one scene progress matching the historical interaction behavior refers to:
at least one scene progress for which the number of occurrences of the historical interaction behavior meets a behavior condition.
15. The method of claim 8, wherein the method further comprises:
Identifying the scene type of a scene picture of any virtual scene of the terminal;
And determining the target virtual scene based on the scene type.
16. The method of claim 15, wherein the determining the target virtual scene based on the scene type comprises:
determining a target scene type based on the number of occurrences of each scene type, wherein the target scene type is a scene type whose number of occurrences meets an occurrence condition;
and determining any virtual scene belonging to the target scene type as the target virtual scene.
17. A display device for a virtual scene, the device comprising:
a first display module, configured to display at least one virtual scene entry, wherein the virtual scene entry corresponds to any scene progress of a target virtual scene, the scene progress is determined based on historical interaction behavior of a user account in at least one virtual scene, the virtual scene entry is generated based on preference information of the user account, and the preference information is used to indicate the user account's degree of preference for each virtual scene and for different scene progresses of each virtual scene;
a request sending module, configured to send a loading request to a first server in response to a trigger operation on a target virtual scene entry, wherein the loading request is used to instruct the first server to run the target virtual scene based on the scene progress corresponding to the target virtual scene entry and to return a scene picture of the target virtual scene;
and a second display module, configured to display the scene picture if the scene picture is received.
18. A processing apparatus for a virtual scene, the apparatus comprising:
an entry providing module, configured to provide at least one virtual scene entry to a terminal, wherein the virtual scene entry corresponds to any scene progress of a target virtual scene, the scene progress is determined based on historical interaction behavior of a user account of the terminal in at least one virtual scene, the virtual scene entry is generated based on preference information of the user account, and the preference information is used to indicate the user account's degree of preference for each virtual scene and for different scene progresses of each virtual scene;
a request receiving module, configured to receive a loading request from the terminal, wherein the loading request is used to instruct running the target virtual scene based on the scene progress and returning a scene picture of the target virtual scene;
and a running module, configured to run the target virtual scene based on the scene progress and send a scene picture of the target virtual scene to the terminal.
19. A computer device, comprising a processor and a memory, the memory being configured to store at least one computer program, the at least one computer program being loaded and executed by the processor to perform the method for displaying a virtual scene according to any one of claims 1 to 7 or the method for processing a virtual scene according to any one of claims 8 to 16.
20. A computer-readable storage medium, storing at least one computer program, the at least one computer program being executed by a processor to perform the method for displaying a virtual scene according to any one of claims 1 to 7 or the method for processing a virtual scene according to any one of claims 8 to 16.
CN202110455848.3A 2021-04-26 2021-04-26 Virtual scene display method, virtual scene processing method, device and equipment Active CN113058264B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110455848.3A CN113058264B (en) 2021-04-26 2021-04-26 Virtual scene display method, virtual scene processing method, device and equipment
PCT/CN2022/082080 WO2022227936A1 (en) 2021-04-26 2022-03-21 Virtual scene display method and apparatus, virtual scene processing method and apparatus, and device
US17/991,776 US12427410B2 (en) 2021-04-26 2022-11-21 Method and apparatus for displaying virtual scene, method and apparatus for processing virtual scene, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110455848.3A CN113058264B (en) 2021-04-26 2021-04-26 Virtual scene display method, virtual scene processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN113058264A CN113058264A (en) 2021-07-02
CN113058264B true CN113058264B (en) 2025-09-12

Family

ID=76567742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110455848.3A Active CN113058264B (en) 2021-04-26 2021-04-26 Virtual scene display method, virtual scene processing method, device and equipment

Country Status (3)

Country Link
US (1) US12427410B2 (en)
CN (1) CN113058264B (en)
WO (1) WO2022227936A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113058264B (en) 2021-04-26 2025-09-12 腾讯科技(深圳)有限公司 Virtual scene display method, virtual scene processing method, device and equipment
CN114629682B (en) * 2022-02-09 2023-06-09 烽台科技(北京)有限公司 Industrial control network target range allocation method, device, terminal and storage medium
CN115037562B (en) * 2022-08-11 2022-11-15 北京网藤科技有限公司 Industrial control network target range construction method and system for safety verification
CN115860816B (en) * 2022-12-09 2024-07-23 南京领行科技股份有限公司 Order preference method, device, equipment and medium
CN117132743A (en) * 2023-08-29 2023-11-28 支付宝(杭州)信息技术有限公司 Virtual image processing method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111917768A (en) * 2020-07-30 2020-11-10 腾讯科技(深圳)有限公司 Virtual scene processing method and device and computer readable storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120142429A1 (en) * 2010-12-03 2012-06-07 Muller Marcus S Collaborative electronic game play employing player classification and aggregation
WO2014058385A1 (en) * 2012-10-08 2014-04-17 Weike (S) Pte Ltd A gaming machine and a method of restoring a game
US9993729B2 (en) * 2015-08-19 2018-06-12 Sony Interactive Entertainment America Llc User save data management in cloud gaming
US11278807B2 (en) * 2016-06-13 2022-03-22 Sony Interactive Entertainment LLC Game play companion application
CN107497116B (en) * 2016-09-04 2020-10-16 广东小天才科技有限公司 A game progress update control method and device, and user terminal
CN109395385B (en) * 2018-09-13 2021-05-25 深圳市腾讯信息技术有限公司 Method and device for configuring virtual scene, storage medium, and electronic device
US11612813B2 (en) * 2019-09-30 2023-03-28 Dolby Laboratories Licensing Corporation Automatic multimedia production for performance of an online activity
CN111265860B (en) * 2020-01-07 2023-08-04 广州虎牙科技有限公司 Game archiving processing method and device, terminal equipment and readable storage medium
CN111773737A (en) * 2020-06-17 2020-10-16 咪咕互动娱乐有限公司 Cloud game generation method and device, server and storage medium
CN112569596B (en) * 2020-12-11 2022-11-22 腾讯科技(深圳)有限公司 Video picture display method and device, computer equipment and storage medium
CN113058264B (en) * 2021-04-26 2025-09-12 腾讯科技(深圳)有限公司 Virtual scene display method, virtual scene processing method, device and equipment


Also Published As

Publication number Publication date
WO2022227936A1 (en) 2022-11-03
CN113058264A (en) 2021-07-02
US20230082060A1 (en) 2023-03-16
US12427410B2 (en) 2025-09-30


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code; ref country code: HK; ref legal event code: DE; ref document number: 40048687; country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant