
CN120302072A - An interactive method and related device in a virtual scene - Google Patents


Info

Publication number
CN120302072A
CN120302072A (application number CN202410046787.9A)
Authority
CN
China
Prior art keywords
virtual scene
picture
terminal
target element
direct
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410046787.9A
Other languages
Chinese (zh)
Inventor
陈俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410046787.9A priority Critical patent/CN120302072A/en
Publication of CN120302072A publication Critical patent/CN120302072A/en
Pending legal-status Critical Current

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract


The present application provides an interaction method and related apparatus in a virtual scene, applicable to the field of live streaming. The method includes: in response to a viewing operation by a second object on a first object, displaying a first live broadcast picture, where the first live broadcast picture corresponds to the first object's perspective in the virtual scene, and the virtual scene includes multiple interactive elements; and in response to a change operation by the second object on a target element based on the first live broadcast picture, changing the target element in the virtual scene, where the target element belongs to the multiple interactive elements. The method provided by the embodiments of the present application enriches interaction between the host and the audience and improves the audience's gaming experience.

Description

Interaction method in virtual scene and related device
Technical Field
The present application relates to the field of internet technology, and in particular to an interaction method in a virtual scene and a related apparatus.
Background
With the development of internet technology, live streaming has become a new mode of information communication, and game live streaming in particular is attracting attention. In live game streaming, the host attracts the attention and favor of the audience by displaying game skills and strategies. Live game streams typically rely on interaction and communication to enhance audience engagement and stickiness. At present, most live game interaction depends on viewers communicating with the host through bullet comments, so viewers find it difficult to participate in the game in an immersive manner. For exploration and deduction games in particular, viewers who are not direct participants in the game can hardly obtain a good gaming experience.
Disclosure of Invention
The embodiments of the present application provide an interaction method in a virtual scene and a related apparatus, which are used to enhance interaction between the audience and the host and to improve the audience's live viewing experience.
The first aspect of the present application provides an interaction method in a virtual scene, including:
in response to a viewing operation by a second object on a first object, displaying a first live broadcast picture, where the first live broadcast picture corresponds to the perspective of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements;
and in response to a change operation by the second object on a target element based on the first live broadcast picture, changing the target element in the virtual scene, where the target element belongs to the plurality of interactive elements.
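The claim language above describes two steps (display the anchor's picture to the viewer; apply the viewer's change to a scene element) without prescribing any data structures. A rough, hypothetical Python sketch, with all class and function names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualScene:
    """Hypothetical scene model: interactive elements keyed by id, each mapped
    to its current form (e.g. "folded" / "unfolded")."""
    elements: dict = field(default_factory=dict)

    def change_element(self, element_id: str, new_form: str) -> None:
        # Only elements in the scene's interactive set may be changed.
        if element_id not in self.elements:
            raise KeyError(f"{element_id} is not an interactive element")
        self.elements[element_id] = new_form

def on_viewing_operation(scene: VirtualScene) -> dict:
    """Step 1: build the first live broadcast picture (the first object's view)."""
    return {"perspective": "first_object", "elements": dict(scene.elements)}

def on_change_operation(scene: VirtualScene, target: str, new_form: str) -> None:
    """Step 2: apply the second object's change operation to the target element."""
    scene.change_element(target, new_form)
```

For instance, `on_change_operation(scene, "note", "unfolded")` would alter the `note` element for every terminal that subsequently renders the scene.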
In one possible implementation, after changing the target element in the virtual scene in response to the change operation by the second object on the target element based on the first live broadcast picture, the method further includes:
updating the first live broadcast picture according to the change of the target element in the virtual scene.
In one possible implementation, before displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
generating the first live broadcast picture in response to an execution operation by the first object on the virtual scene.
In one possible implementation, before generating the first live broadcast picture in response to the execution operation by the first object on the virtual scene, the method further includes:
displaying a plurality of selectable scenes to the first object in response to a scene selection operation by the first object, where the virtual scene belongs to the selectable scenes.
In one possible implementation, generating the first live broadcast picture in response to the execution operation by the first object on the virtual scene includes:
in response to an execution request from the first object for the virtual scene, determining whether the number of anchor objects executing the virtual scene meets a preset requirement, where the first object belongs to the anchor objects;
and if the preset requirement is met, generating the first live broadcast picture.
In one possible implementation, before displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
in response to a selection operation on the virtual scene sent by the second object, sending a plurality of anchor objects executing the virtual scene to the second object, where the first object belongs to the anchor objects.
In one possible implementation, after displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
issuing a virtual ballot to the second object, where the virtual ballot is used by the second object to vote on a candidate object;
in response to a voting operation by the second object on the candidate object, determining whether the candidate object is a preset object;
and if the candidate object is the preset object, issuing a virtual reward to the second object.
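The voting flow above (issue a ballot, check the vote against the preset object, reward a correct vote) can be sketched roughly as follows; the function names and reward semantics are assumptions added for illustration, not part of the claims:

```python
def issue_ballot(second_object: str, candidates: list) -> dict:
    # Hypothetical virtual ballot: records the voter and the candidate objects.
    return {"voter": second_object, "candidates": list(candidates)}

def on_vote(ballot: dict, choice: str, preset_object: str) -> bool:
    """Return True (i.e. issue a virtual reward) iff the voted candidate
    is the preset object."""
    if choice not in ballot["candidates"]:
        raise ValueError("vote must target a listed candidate")
    return choice == preset_object
```

A correct vote (`choice == preset_object`) yields `True`, signalling the platform to issue the virtual reward to the voter.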
In one possible implementation, the change operation is specifically a volume change operation;
changing the target element in the virtual scene in response to the change operation by the second object on the target element based on the first live broadcast picture includes:
in response to the change operation on the target element sent by the second object, obtaining a first parameter, where the first parameter is a volume parameter of the target element;
modifying the first parameter based on the change operation to obtain a second parameter;
and changing the target element according to the second parameter.
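A minimal sketch of the first-parameter/second-parameter volume change described above; treating the change operation as a scale factor, and the clamping bounds, are assumptions added for illustration:

```python
def change_volume(first_parameter: float, scale: float) -> float:
    """Modify the target element's volume parameter (first parameter) by the
    requested scale factor, returning the second parameter used to re-render it.

    The [0.1, 10.0] bounds are a hypothetical safeguard so that a viewer's
    operation can neither shrink an element away nor blow it up past the scene.
    """
    second_parameter = first_parameter * scale
    return min(max(second_parameter, 0.1), 10.0)
```

For example, a viewer's "zoom in" operation with `scale=2.0` doubles the element's volume parameter, subject to the assumed bounds.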
In one possible implementation, after displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
adding an interference element in the first live broadcast picture in response to an interference operation by a third object on the first object.
In one possible implementation, after displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
adding an interference element in a second live broadcast picture in response to an interference operation by the second object on a fourth object, where the second live broadcast picture corresponds to the perspective of the fourth object in the virtual scene.
In one possible implementation, after displaying the first live broadcast picture in response to the viewing operation by the second object on the first object, the method further includes:
in response to a view-angle adjustment operation by the second object on the first live broadcast picture, obtaining a third parameter, where the third parameter is a view-angle parameter of the first live broadcast picture;
changing the third parameter based on the view-angle adjustment operation to obtain a fourth parameter;
and changing the view angle of the first live broadcast picture according to the fourth parameter.
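The third-parameter/fourth-parameter view-angle flow above could look roughly like this; representing the view-angle parameter as a (yaw, pitch) pair in degrees is an assumption for illustration only:

```python
def adjust_view_angle(third_parameter: tuple, delta: tuple) -> tuple:
    """Apply a view-angle adjustment operation: the third parameter
    (yaw, pitch in degrees) plus the operation's delta yields the
    fourth parameter used to re-render the live broadcast picture."""
    yaw, pitch = third_parameter
    d_yaw, d_pitch = delta
    new_yaw = (yaw + d_yaw) % 360.0                      # yaw wraps around
    new_pitch = max(-89.0, min(89.0, pitch + d_pitch))   # pitch is clamped
    return (new_yaw, new_pitch)
```

Wrapping yaw and clamping pitch mirror common first-person camera conventions; a real client would feed the fourth parameter back into its renderer.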
A second aspect of the present application provides an interactive apparatus in a virtual scene, including:
a live broadcast module, configured to display a first live broadcast picture in response to a viewing operation by a second object on a first object, where the first live broadcast picture corresponds to the perspective of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements;
and an element change module, configured to change a target element in the virtual scene in response to a change operation by the second object on the target element based on the first live broadcast picture, where the target element belongs to the plurality of interactive elements.
In one possible implementation, the apparatus further includes:
a picture update module, configured to update the first live broadcast picture according to the change of the target element in the virtual scene.
In one possible implementation, the apparatus further includes:
a picture generation module, configured to generate the first live broadcast picture in response to an execution operation by the first object on the virtual scene.
In one possible implementation, the apparatus further includes:
a scene selection module, configured to display a plurality of selectable scenes to the first object in response to a scene selection operation by the first object, where the virtual scene belongs to the selectable scenes.
In one possible implementation, the picture generation module is specifically configured to: in response to an execution request from the first object for the virtual scene, determine whether the number of anchor objects executing the virtual scene meets a preset requirement, where the first object belongs to the anchor objects; and if the preset requirement is met, generate the first live broadcast picture.
In one possible implementation, the apparatus further includes:
an anchor selection module, configured to send, in response to a selection operation on the virtual scene sent by the second object, a plurality of anchor objects executing the virtual scene to the second object, where the first object belongs to the anchor objects.
In one possible implementation, the apparatus further includes:
a voting module, configured to: issue a virtual ballot to the second object, where the virtual ballot is used by the second object to vote on a candidate object; determine, in response to a voting operation by the second object on the candidate object, whether the candidate object is a preset object; and issue a virtual reward to the second object if the candidate object is the preset object.
In one possible implementation, the change operation is specifically a volume change operation;
the element change module is specifically configured to: obtain a first parameter in response to the change operation on the target element sent by the second object, where the first parameter is a volume parameter of the target element; modify the first parameter based on the change operation to obtain a second parameter; and change the target element according to the second parameter.
In one possible implementation, the apparatus further includes:
an interference module, configured to add an interference element in the first live broadcast picture in response to an interference operation by a third object on the first object.
In one possible implementation, the interference module is further configured to add an interference element in a second live broadcast picture in response to an interference operation by the second object on a fourth object, where the second live broadcast picture corresponds to the perspective of the fourth object in the virtual scene.
In one possible implementation, the apparatus further includes:
a view-angle adjustment module, configured to: obtain a third parameter in response to a view-angle adjustment operation by the second object on the first live broadcast picture, where the third parameter is a view-angle parameter of the first live broadcast picture; change the third parameter based on the view-angle adjustment operation to obtain a fourth parameter; and change the view angle of the first live broadcast picture according to the fourth parameter.
A third aspect of the present application provides a computer apparatus comprising:
a memory, a transceiver, a processor, and a bus system;
where the memory is configured to store a program;
the processor is configured to execute the program in the memory, including performing the methods of the above aspects;
and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate.
A fourth aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods of the above aspects.
A fifth aspect of the application provides a computer program product or computer program comprising computer instructions stored on a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the above aspects.
From the above technical solutions, the embodiment of the present application has the following advantages:
The present application provides an interaction method in a virtual scene and a related apparatus. An anchor object is defined as a first object, and a viewer object is defined as a second object. When the first object is live streaming, a first live broadcast picture is displayed in response to a viewing operation by the second object on the first object; the first live broadcast picture corresponds to the perspective of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements. In response to a change operation by the second object on a target element based on the first live broadcast picture, the target element in the virtual scene is changed, where the target element belongs to the plurality of interactive elements. In this way, viewer objects can change interactive elements in the live broadcast picture during a live game stream, so that viewers can participate in the game to a certain extent, experience the game together with the game host, and assist the host in clue collection and prop searching. The method provided by the embodiments of the present application enriches interaction between the host and the audience and improves the audience's gaming experience.
Drawings
FIG. 1a is an application environment diagram of an interaction method in a virtual scene according to an embodiment of the present application;
FIG. 1b is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application;
FIG. 2 is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application;
FIGS. 3a and 3b are schematic views of a game interface;
FIG. 4 is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application;
FIG. 5 is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application;
FIG. 6 is an interaction signaling diagram of an interaction method in a virtual scene according to an embodiment of the present application;
FIGS. 7a and 7b are interaction signaling diagrams of an interaction method in a virtual scene according to an embodiment of the present application;
FIG. 7c is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application;
FIGS. 8a to 8h are schematic views of a game interface of an interaction method in a virtual scene applied to a scenario game;
FIG. 9 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present application;
FIG. 10 is a diagram of one embodiment of a server in an embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an interactive apparatus in a virtual scene according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a server structure according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide an interaction method in a virtual scene and a related apparatus, by which viewer objects can change interactive elements in the live broadcast picture while an anchor object live streams a target game, so that viewers can participate in the game to a certain extent, experience the game together with the game host, and assist the host in clue collection and prop searching.
The terms "first," "second," "third," "fourth," and the like in the description, the claims, and the above drawings, if any, are used to distinguish between similar objects and are not necessarily for describing a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented, for example, in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "includes," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the rapid development of internet technology, live streaming has gradually become a brand-new mode of information communication. Among the many areas of live streaming, game live streaming has attracted attention due to its unique appeal and tremendous market demand. Live game streaming not only provides a platform for players to share game skills, strategies, and experiences, but has also become a community for interacting with like-minded friends.
In live game streaming, host players often attract the attention and favor of a large audience by exhibiting their own game skills and strategies. They not only provide viewers with exciting competitive performances, but also, by interacting with viewers, enable them to learn more about the game and take part in it. This mode of interaction not only enhances audience engagement and stickiness, but also brings more traffic to the host.
However, the current live game market has some problems. First, most live game interaction relies on viewers communicating with the host through bullet comments. Although this enhances the viewers' sense of participation to some extent, it remains difficult for viewers to participate in the game in a truly immersive way. For some exploration and deduction games, it is often difficult to have a good gaming experience if the viewer is merely a spectator.
Exploration and deduction games are a genre that combines exploration with puzzle solving. Such games typically provide an open world or environment in which players can freely explore, find clues, solve problems, and unravel puzzles. Two types of exploration and deduction games are described below:
1) Clue collection games, in which a player takes on a role such as a detective or lawyer and reveals the truth of an event by collecting evidence, investigating clues, and making reasonable deductions. 2) Social deduction games, which are typically played by three or more players who play different roles according to the scenario and identify the player acting as the "murderer", or complete the mission, by collecting evidence, investigating clues, and communicating.
It can be understood that both types of exploration and deduction games include evidence-collection stages. If viewers watching the live game could take part in these stages immersively, they could help the host find clues, which would greatly improve the viewers' gaming experience.
On this basis, the present application provides an interaction method in a virtual scene, where the virtual scene may specifically be a game scene, for improving the viewers' live viewing experience.
For ease of understanding, refer to FIG. 1a, which is an application environment diagram of the interaction method in a virtual scene in an embodiment of the present application. As shown in FIG. 1a, the interaction method can be applied to a live interaction system. The live interaction system includes a server and terminal devices. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs), big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, or smart watch. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application.
The terminal devices include a first terminal and a second terminal. The first terminal is the anchor terminal: the anchor object runs a target application on the first terminal, where the target application may be a game application containing a virtual scene, and the virtual scene includes a plurality of interactive elements. The first terminal uploads the anchor object's picture in the virtual scene to the server for live streaming. The second terminal is a viewer terminal: the viewer object can select an anchor object to watch, and the server synchronizes the anchor object's live broadcast picture to the second terminal, so that the viewer object can watch the anchor object's picture in the virtual scene on the second terminal. The viewer object can also change interactive elements in the live broadcast picture on the second terminal, putting the corresponding interactive element into a form different from its original form. The change is synchronized to the server, so the anchor object can see the change of the interactive element in the virtual scene, thereby enhancing interaction between the viewer object and the anchor object.
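The synchronization loop described above (viewer change → server → both terminals' pictures) might be sketched as follows; the class names and the in-process relay are hypothetical simplifications of what would in practice be networked components:

```python
class Server:
    """Hypothetical relay: holds the authoritative scene state and mirrors
    element changes to every subscribed terminal."""
    def __init__(self, elements: dict):
        self.elements = dict(elements)
        self.subscribers = []            # terminals receiving picture updates

    def subscribe(self, terminal) -> None:
        self.subscribers.append(terminal)

    def apply_change(self, element_id: str, new_form: str) -> None:
        self.elements[element_id] = new_form
        for terminal in self.subscribers:
            terminal.on_update(element_id, new_form)

class Terminal:
    """Hypothetical anchor/viewer terminal holding a local view of changes."""
    def __init__(self, name: str):
        self.name = name
        self.view = {}                   # local copy of changed elements

    def on_update(self, element_id: str, new_form: str) -> None:
        self.view[element_id] = new_form
```

In this toy model, a change the viewer terminal submits via `server.apply_change(...)` reaches the anchor terminal's view as well, which is the property the paragraph above relies on.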
The interaction method in a virtual scene of the present application is described below from the perspective of a terminal device, where the terminal device may run a live streaming client, including an anchor client corresponding to the anchor and a viewer client corresponding to the viewer. Refer to FIG. 1b, a flowchart of the interaction method in a virtual scene according to an embodiment of the present application, including:
101. In response to a viewing operation by a second object on a first object, displaying a first live broadcast picture, where the first live broadcast picture corresponds to the perspective of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements.
102. In response to a change operation by the second object on a target element based on the first live broadcast picture, changing the target element in the virtual scene, where the target element belongs to the plurality of interactive elements.
In the embodiments of the present application, the first object may represent a game host or game player, and the second object may represent a viewer object watching the live stream. The virtual scene corresponds to the content the first object is streaming: for example, when the first object live streams a game, the scene of the first object in the game is the virtual scene, and the first object's perspective in the virtual scene, that is, the first object's game picture, is the first live broadcast picture.
The first live broadcast picture may be presented to viewer objects when the first object streams on the corresponding platform or client. Thus, when the second object browses live content on the corresponding platform or client and selects the first object, that is, performs a viewing operation on the first object, the first live broadcast picture is displayed to the second object.
The virtual scene includes a plurality of interactive elements, which may appear as props in the virtual scene, and the first object can interact with these props.
When the second object views the first live broadcast picture, it can perform a change operation on a designated element in the virtual scene based on that picture, where the designated element is one of the interactive props. For example, the second object sees a first element and a second element in the virtual scene through the first live broadcast picture; it can select the first element and change it, for example, zoom it in, zoom it out, or replace it with another form. The operation is synchronized to the virtual scene, so the first object can also perceive the corresponding change of the first element.
Compared with the prior art in which viewers can interact with the host only through bullet comments, the interaction method provided by the embodiments of the present application enriches interaction between the host and the audience and improves the viewers' gaming experience. In this mode of participation, the viewer is no longer merely a spectator of the game, but one of its participants. Viewers can have a substantive impact on the progress of the game through interaction with the game host, increasing the game's interest and interactivity. At the same time, viewer participation can help game players better understand and master the game rules and find clues and props, thereby better advancing the game. This mode of participation provides viewers with a deeper gaming experience, allowing them to feel the challenges and fun of the game, while also providing more possibilities and opportunities for interaction between the host and the audience, strengthening the connection between them. This enhanced interactivity can attract more viewers and increase the game's visibility and popularity.
For ease of understanding, the interaction method in a virtual scene provided by the embodiments of the present application is described below from the perspectives of the first terminal (the anchor side) corresponding to the first object (the anchor object), the second terminal (the viewer side) corresponding to the second object (the viewer object), and the server.
The interaction method in a virtual scene of the present application is first described from the perspective of the first terminal. Refer to FIG. 2, a flowchart of the interaction method in a virtual scene according to an embodiment of the present application, including:
201. In response to an execution operation by the first object on the virtual scene, displaying a first live broadcast picture corresponding to the virtual scene, where the virtual scene includes a plurality of interactive elements.
It will be appreciated that the first object may represent a game host or game player, and the first terminal is the terminal device corresponding to that player. The first object selecting a virtual scene to execute on the first terminal is equivalent to the game host selecting a target game and entering its loading flow. After the target game is loaded, the first terminal displays the game picture corresponding to the virtual scene to the first object; this game picture is the first object's first live broadcast picture.
The target game may be an exploration and deduction game, or may contain an exploration and deduction game mode, whose virtual scene includes a plurality of interactive elements. The interactive elements may appear as props in the virtual scene, and interacting with designated interactive elements can help the first object (i.e., the game player) complete the corresponding game tasks. Interaction includes examining an interactive element, collecting it, and operating it. Operating an interactive element means that the element has a first state and a second state, and the game player can transition it from the first state to the second state through the correct operation. For example, the interactive element may be a combination lock with a locked state and an unlocked state; it is initially locked, and the game player unlocks it by entering the correct code.
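The first-state/second-state behavior of the combination lock example could be sketched as below; the class name and code value are invented for illustration:

```python
class CombinationLock:
    """Hypothetical interactive element with a first state (locked) and a
    second state (unlocked); the correct operation transitions between them."""
    def __init__(self, code: str):
        self._code = code
        self.state = "locked"            # initial (first) state

    def enter_code(self, attempt: str) -> str:
        if attempt == self._code:
            self.state = "unlocked"      # transition to the second state
        return self.state
```

A wrong code leaves the lock in the first state; only the correct operation moves it to the second state, matching the description above.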
It will be appreciated that the interactive elements may include a specific element that the first object (i.e., the game player) needs to find. For example:
As shown in FIG. 3a, in a clue collection game, the first live broadcast picture displayed on the first terminal includes a plurality of interactive elements, where the designated element 301 is the element the first object (i.e., the game player) needs to find, and the other elements are interference elements.
202, In response to a change request for a target element sent by the second terminal, change the target element in the first live broadcast picture, where the target element belongs to the plurality of interactive elements and the change request is generated based on a change operation performed on the target element by the second object.
It will be appreciated that the second object may be used to represent a game spectator, and the second terminal is the terminal device corresponding to that spectator. The second object (i.e., the viewer) may select the first object (i.e., the game player) for viewing on the second terminal, at which point the first live broadcast picture is displayed on the second terminal. Through the second terminal, the second object may select a target element in the first live broadcast picture and execute a change operation, where the target element belongs to the interactive elements. The change operation is used to change the target element, for example, to enlarge it, reduce it, change its form, and the like. Based on the change operation of the second object, the second terminal generates a corresponding change request, which indicates a change instruction for the target element in the virtual scene. The first terminal then updates the first live broadcast picture, and the target element in the updated picture is changed accordingly. Such changes may include changing attributes of the target element such as its position, shape, or color, or performing some particular action or effect. Through this process, the second object (the viewer) can see the corresponding change of the target element while viewing the first live broadcast picture of the first object (the game player) on the second terminal, thereby obtaining a richer and livelier viewing experience.
For example, as shown in fig. 3a, the same game picture is displayed on the second terminal. The second object (i.e., the spectator) finds that the first object (i.e., the game player) is unable to find the designated element 301 in the game picture, and therefore chooses to enlarge the element 301. The operation of the second object on the element 301 is then synchronized to the first terminal, so that the first live broadcast picture on the first terminal is updated to the picture shown in fig. 3b, which contains the enlarged element 302. The element 302 is more easily found by the first object, so the operation of the second object can assist the first object in completing the game task.
It can be understood that the second object may perform various operations on the plurality of interactive elements in the first game picture. For example, the second object may shrink the other interactive elements to highlight the designated element 301; the second object may instead shrink the designated element 301 to interfere with the first object, thereby raising the game difficulty for the first object and enhancing the live program effect. Similarly, the second object may change the form of an interactive element to increase interactivity with the first object, improving the game experience of the first object or the sense of participation of the second object.
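To make the flow concrete, the following sketch (field names and values are assumptions, not defined by the application) shows how a change request generated by the second terminal might be applied to the interactive elements of the virtual scene:

```python
# Hypothetical scene state: the spectator asks to enlarge the designated element
# and shrink an interference element to highlight the designated one.
scene_elements = {
    "element_301": {"scale": 1.0},           # the designated element
    "element_interference": {"scale": 1.0},  # a distracting element
}

def apply_change(elements: dict, request: dict) -> None:
    """Apply a single change request to its target interactive element."""
    target = elements[request["target"]]
    if request["operation"] == "scale":
        target["scale"] *= request["factor"]

apply_change(scene_elements, {"target": "element_301", "operation": "scale", "factor": 2.0})
apply_change(scene_elements, {"target": "element_interference", "operation": "scale", "factor": 0.5})
```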
The interaction method in the virtual scene of the present application will now be described from the perspective of the server. Referring to fig. 4, fig. 4 is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application, including:
401, Acquire, from the first terminal, a first live broadcast picture of the first object executing the virtual scene.
It will be appreciated that the server may be a live broadcast server or a game server. The server interacts with the first terminal and the second terminal in a wired or wireless manner, and interaction between the first terminal and the second terminal is performed through the server.
The first terminal is the host terminal, and the first object can live broadcast through the first terminal. When the live broadcast content of the first object is executing the target game, the scene in the target game is a virtual scene containing a plurality of interactive elements, and the picture of the first object in the virtual scene is the first live broadcast picture, which the server acquires from the first terminal. The first object looks for designated interactive elements in the virtual scene through the first live broadcast picture, which helps it complete the corresponding game task.
402, In response to a viewing request for the first object sent by the second terminal, synchronize the first live broadcast picture to the second terminal.
It is understood that the second terminal is an audience terminal, and the audience object corresponding to the audience terminal is the second object. When the second object (i.e., the spectator) wants to watch the game picture of the first object (i.e., the game player) in the virtual scene, the second terminal generates a corresponding viewing request in response to the viewing operation of the second object and transmits the viewing request to the server. After receiving the viewing request, the server responds by synchronizing the first live broadcast picture on the first terminal to the second terminal, so that the second object (the audience) can watch, on the second terminal, the game live broadcast of the first object (the game player) on the first terminal.
It should be noted that this synchronization may be real-time or slightly delayed, depending on server and network conditions, among other factors.
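A minimal relay sketch, under the assumption that the server simply forwards each picture frame to every subscribed viewer terminal (the queue-per-viewer design and the frame format are illustrative only):

```python
import queue

viewer_queues = {}  # one frame queue per subscribed viewer terminal

def subscribe(viewer_id: str) -> None:
    viewer_queues[viewer_id] = queue.Queue()

def relay_frame(frame: dict) -> None:
    # Forward the anchor's latest picture frame to all subscribed viewers;
    # actual delivery may be real-time or slightly delayed depending on the network.
    for q in viewer_queues.values():
        q.put(frame)

subscribe("second_terminal")
relay_frame({"tick": 1, "picture": "<first live broadcast picture data>"})
```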
403, In response to a change request for the target element sent by the second terminal, change the target element in the virtual scene, where the target element belongs to the interactive elements and the change request is generated based on a change operation performed on the target element by the second object.
It can be understood that in this embodiment the second object can not only watch the first live broadcast picture, but can also perform operations such as enlarging, reducing, and changing the form of the interactive elements in the virtual scene corresponding to the first live broadcast picture.
The second terminal generates a corresponding change request based on the change operation of the second object and sends it to the server. After receiving the change request, the server updates or changes the target element in the virtual scene accordingly, so as to reflect the change operation performed on it by the second object.
It can be understood that after the target element in the virtual scene is changed, the first live broadcast picture is updated based on that change, so that the target element becomes easier, or harder, for the first object to notice.
Compared with the prior art, in which the audience can interact with the host only in the form of a barrage, the interaction method provided by this embodiment of the application enriches the interaction between the host and the audience and also improves the game experience of the audience.
The interaction method in the virtual scene of the present application will now be described from the perspective of the second terminal. Referring to fig. 5, fig. 5 is a flowchart of an interaction method in a virtual scene according to an embodiment of the present application, including:
501, In response to a viewing operation of the second object on the first object, send a viewing request to the server, where the viewing request is used for the server to return the first live broadcast picture, the first live broadcast picture corresponds to the view angle of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements.
It will be appreciated that the second terminal is an audience terminal, the second object is the audience object corresponding to the audience terminal, and the second object may watch live broadcasts through the second terminal. The second object may select one host from a plurality of hosts, each of which has a corresponding live broadcast picture. When the second object selects the first object, the second terminal responds to the viewing operation of the second object on the first object by sending a viewing request to the server, so that the server synchronizes the corresponding first live broadcast picture to the second terminal and displays it to the second object. The first object executes the target game on the first terminal; that is, the first terminal is the host terminal, the first object is the corresponding game host or game player, the first object is in the virtual scene corresponding to the target game, and the view angle of the first object in the virtual scene corresponds to the first live broadcast picture. The virtual scene includes a plurality of interactive elements, and interacting with designated interactive elements helps the first object complete the corresponding game task.
502, In response to a change operation of the second object on the target element, send a change request to the server, where the change request is used for the server to change the target element in the virtual scene, and the target element belongs to the interactive elements.
It can be understood that, through the second terminal, the second object may select a target element in the virtual scene based on the first live broadcast picture and perform a change operation on it, where the target element belongs to the interactive elements. The change operation on the target element includes enlarging it, reducing it, changing its form, and the like. The second terminal responds to the change operation of the second object on the target element by generating a change request and sending it to the server, where the change request is used for the server to change the target element in the virtual scene accordingly. For a specific description of the change operation, reference may be made to the related content of step 202 in the embodiment corresponding to fig. 2, which is not repeated here.
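On the second-terminal side, the mapping from a spectator gesture to a change request could look like the following sketch (the gesture names and the request schema are assumptions for illustration):

```python
def build_change_request(element_id: str, gesture: str) -> dict:
    """Translate a spectator gesture on a target element into a change request."""
    gesture_to_op = {
        "pinch_out": {"operation": "scale", "factor": 2.0},   # enlarge the element
        "pinch_in":  {"operation": "scale", "factor": 0.5},   # reduce the element
        "long_press": {"operation": "change_form"},           # change the element's form
    }
    request = {"target": element_id}
    request.update(gesture_to_op[gesture])
    return request

# The resulting request would then be sent to the server by the second terminal.
req = build_change_request("element_301", "pinch_out")
```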
Compared with the prior art in which the audience can interact with the host only in the form of a barrage, the interaction method provided by the embodiment of the application enriches the interactivity between the host and the audience and also improves the game experience of the audience.
It can be understood that the above describes the interaction method in the virtual scene provided by the embodiments of the present application from three different perspectives. For a clearer description of the method, refer to fig. 6, where fig. 6 is an interaction signaling diagram of the interaction method in a virtual scene provided by an embodiment of the present application, involving the first terminal, the second terminal, and the server.
601, The first terminal, in response to the execution operation of the first object on the virtual scene, displays the first live broadcast picture corresponding to the virtual scene.
It will be appreciated that the first object may be used to represent a game host or a game player, and that the first terminal is the terminal device corresponding to that game player. First, the first object (the game player or host) selects a virtual scene on the first terminal; the first terminal then presents the corresponding first live broadcast picture to the first object in response to the execution operation of the first object on the virtual scene (i.e., the operation of selecting the virtual scene).
It is understood that the virtual scene may be the game scene of an exploration-and-deduction game, or there may be an exploration-and-deduction play mode in the game corresponding to the game scene. The virtual scene includes a plurality of interactive elements, and the player needs to interact with these elements while playing, which helps complete the game.
602, The server acquires, from the first terminal, the first live broadcast picture of the first object executing the virtual scene.
It will be appreciated that the server may be a live broadcast server or a game server. When the live broadcast content of the first object is executing the target game in the virtual scene, the server acquires the first live broadcast picture corresponding to the virtual scene sent by the first terminal. Specifically, what is acquired from the first terminal may be picture parameters of the virtual scene, from which the first live broadcast picture can be generated.
603, The second terminal generates a viewing request in response to a viewing operation of the second object on the first object.
It will be appreciated that the second terminal is an audience terminal, the second object is the audience object corresponding to the audience terminal, and the second object may watch live broadcasts through the second terminal. When the second object (i.e., the viewer) wants to watch the live broadcast picture of the first object (i.e., the game player), the second terminal generates a corresponding viewing request in response to the viewing operation of the second object.
604, The second terminal sends the viewing request to the server.
It will be appreciated that, since the first live broadcast picture is the picture of the corresponding virtual scene viewed by the first object on the first terminal, the second terminal needs to send a viewing request to the server in order to acquire it, and the server then synchronizes the first live broadcast picture to the second terminal.
605, The server synchronizes the first live broadcast picture to the second terminal.
It will be appreciated that, upon receiving the viewing request, the server responds by synchronizing the first live broadcast picture on the first terminal to the second terminal, so that the second object (the viewer) can watch, on the second terminal, the game live broadcast of the first object (the game player) on the first terminal.
606, The second terminal generates a change request in response to a change operation of the second object on the target element.
It can be understood that, through the second terminal, the second object may select a target element in the virtual scene based on the first live broadcast picture and perform a change operation on it, where the target element belongs to the interactive elements. The second terminal responds to the change operation of the second object on the target element by generating a change request, which is used for the server to change the target element in the virtual scene accordingly.
607, The second terminal sends the change request to the server.
It can be understood that the interaction between the first terminal and the second terminal is completed through the server; therefore, after the second terminal generates the change request, it sends the change request to the server, and the server changes the target element in the virtual scene based on the change request.
608, The server changes the target element in the virtual scene in response to the change request.
It can be understood that, after receiving the change request, the server updates or changes the target element in the virtual scene accordingly, so as to reflect the change operation performed on it by the second object.
609, The server sends the change status of the target element to the first terminal.
It can be understood that, after correspondingly updating the target element in the virtual scene, the server sends the change status to the first terminal, so that the first terminal changes the target element in the first live broadcast picture, making the target element easier, or harder, for the first object to notice.
610, The first terminal changes the target element in the first live broadcast picture.
It can be understood that, after receiving the change status of the target element, the first terminal performs the corresponding change operation on the target element in the first live broadcast picture.
Through this process, the audience can participate in the game to a certain extent, experience the game together with the game host, and assist the game host in collecting clues and finding props. Compared with the prior art, in which the audience can only interact with the host in the form of a barrage, this enriches the interactivity between the host and the audience and improves the game experience of the audience.
In an alternative embodiment of the interaction method in a virtual scene corresponding to the embodiment of fig. 6, please refer to fig. 7a and fig. 7b, which are interaction signaling diagrams of the interaction method in a virtual scene provided by an embodiment of the present application, including:
701, The first terminal transmits a selection request to the server in response to a scene selection operation of the first object.
It will be appreciated that the scene selection operation may indicate that the first object wants to play a game but has not yet selected a particular game item. For example, the first object logs into a game lobby, which refers to a platform or space where a player may select various games to experience, each game corresponding to a game scene; the first object then enters a scene selection interface presenting a set of selectable scenes, where the player may select a target virtual scene to experience. The first terminal sends a selection request to the server, indicating that the first terminal needs to acquire the list of selectable scenes from the server.
702, The server sends a plurality of selectable scenes to the first terminal in response to the selection request.
It will be appreciated that these selectable scenes may be pre-stored by the game developer in a database of the server, or may be created by other players and shared through the server. Through this process, the first object can see a plurality of selectable scenes on the first terminal and select one of them to play.
703, The first terminal transmits an execution request to the server in response to the execution operation of the first object on the virtual scene.
It will be appreciated that when the first object (e.g., a player) selects a virtual scene to execute on the first terminal, the first terminal sends an execution request to the server. The execution request may contain information about the virtual scene, such as its name, version, and the player's level, as well as instructions or parameters for the execution operation.
704, The server, in response to the execution request, sends scene parameters corresponding to the virtual scene to the first terminal, where the virtual scene includes a plurality of interactive elements.
It can be understood that, after receiving the execution request, the server may send, according to the information in the request, the scene parameters corresponding to the virtual scene, for example an execution file or related resources, to the first terminal. After the execution file or related resources of the virtual scene are sent to the first terminal, the first terminal starts to execute the virtual scene according to the instructions or parameters in the request.
In one possible implementation, step 704 specifically includes:
7041, The server, in response to the execution request of the first terminal for the virtual scene, judges whether the number of anchor objects executing the virtual scene meets a preset requirement, where the first object belongs to the anchor objects;
7042, If the preset requirement is met, the server sends the scene parameters corresponding to the virtual scene to the first terminal.
It can be understood that, when the first terminal sends an execution request for the virtual scene, the server responds to the request and determines whether the number of anchor objects executing the virtual scene meets the preset requirement. The preset requirement here may mean that there must be enough anchor objects to start the game or to run a particular session of the game. If the preset requirement is met, the server sends the scene parameters corresponding to the virtual scene to the first terminal. These scene parameters may include the rules, map, character settings, props, and so on of the corresponding game, so as to define its play mode and environment. When the first terminal receives the scene parameters, it renders and executes the virtual scene according to them.
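Steps 7041-7042 can be sketched as follows (the minimum-anchor threshold and all identifiers are hypothetical):

```python
MIN_ANCHORS = 2  # hypothetical preset requirement on the number of anchor objects

def handle_execution_request(scene_id: str, anchors_by_scene: dict, scene_params: dict):
    """Return the scene parameters only if enough anchor objects run this scene
    (step 7041's check); otherwise reject the request."""
    if len(anchors_by_scene.get(scene_id, [])) >= MIN_ANCHORS:
        return scene_params[scene_id]  # 7042: rules, map, character settings, props...
    return None

params = handle_execution_request(
    "scene_escape_room",
    {"scene_escape_room": ["first_object", "other_anchor"]},
    {"scene_escape_room": {"map": "mansion", "props": ["lock", "key"]}},
)
```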
Through this process, the host object may experience the game with other players.
705, The first terminal generates the first live broadcast picture corresponding to the virtual scene based on the scene parameters.
It will be appreciated that when the first terminal receives these game parameters, it renders and executes the virtual scene in accordance with them. Because the first live broadcast picture corresponds to the virtual scene, the first live broadcast picture also contains the plurality of interactive elements, and the player needs to interact with the interactive elements in the picture while playing the target game, which helps complete the corresponding game task.
706, The server acquires the first live broadcast picture from the first terminal.
It will be appreciated that the server may be a live broadcast server or a game server. When the first object live broadcasts on the first terminal, the server acquires the first live broadcast picture from the first terminal.
707, The second terminal generates a selection request in response to a selection operation on the virtual scene performed by the second object.
It will be appreciated that the second object (the viewer) may specifically select, on the second terminal, the virtual scene of the game it wishes to watch. When the second object selects the virtual scene of the target game, the second terminal generates a selection request, which is used to instruct the server to return the list of anchor objects playing that virtual scene, so that the audience can select one anchor object from the list to watch.
708, The server receives and responds to the selection request for the virtual scene sent by the second terminal, and sends to the second terminal a plurality of anchor objects executing the virtual scene.
It will be appreciated that, upon receipt of the selection request, the server responds by sending to the second terminal a plurality of anchor objects executing the virtual scene. The first object is one of these anchor objects. The second object may select one of the plurality of anchor objects to watch live.
709, The second terminal generates a viewing request in response to the viewing operation of the second object on the first object, where the first object belongs to the anchor objects.
It will be appreciated that the second object may watch live broadcasts through the second terminal. When the anchor object that the second object (i.e., the viewer) wants to watch is the first object (i.e., the game player), the second terminal generates, in response to the viewing operation of the second object, a corresponding viewing request for instructing the server to return the first live broadcast picture.
710, The second terminal sends the viewing request to the server.
It will be appreciated that, since the first live broadcast picture is generated based on the operation of the first object on the first terminal, what the second object actually wants to view is the first live broadcast picture; a viewing request is therefore sent to the server, and the server synchronizes the first live broadcast picture to the second terminal.
711, The server synchronizes the first live broadcast picture to the second terminal.
It will be appreciated that, upon receiving the viewing request, the server responds by synchronizing the first live broadcast picture on the first terminal to the second terminal, so that the second object (the viewer) can watch, on the second terminal, the game live broadcast of the first object (the game player) on the first terminal.
712, The second terminal generates a change request in response to a change operation of the second object on the target element.
It can be understood that, through the second terminal, the second object may select a target element in the first live broadcast picture and perform a change operation on it, where the target element belongs to the interactive elements. The second terminal responds to the change operation of the second object on the target element by generating a change request, which is used to change the target element in the virtual scene accordingly.
713, The second terminal sends the change request to the server.
It can be understood that the interaction between the first terminal and the second terminal is completed through the server; therefore, after the second terminal generates the change request, it sends the change request to the server, and the server, based on the change request, instructs the first terminal to change the target element in the first live broadcast picture.
714, The server changes the target element in the virtual scene in response to the change request.
It can be understood that, after receiving the change request, the server updates or changes the target element in the virtual scene accordingly, so as to reflect the change operation performed on it by the second object.
In one possible implementation, the change operation is specifically a volume change operation, and step 714 specifically includes:
7141, The server, in response to the change request for the target element sent by the second terminal, obtains a first parameter, where the first parameter is a volume parameter of the target element;
7142, The server changes the first parameter based on the change operation to obtain a second parameter;
7143, The server changes the target element in the virtual scene according to the second parameter.
It will be appreciated that when the second terminal sends a change request for the target element, the server responds to the request and obtains the first parameter. The first parameter is a volume parameter of the target element, which may represent its size, length, width, and so on; specifically, it may be the original volume parameter of the target element, typically generated when the virtual scene was created. The server then changes the first parameter based on the volume change operation to obtain the second parameter. The second parameter is a new volume parameter reflecting the change operation performed by the second object on the volume of the target element. Finally, the server changes the target element in the virtual scene based on the second parameter.
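The three sub-steps above can be sketched directly (the scene representation and a multiplicative change operation are assumptions for illustration):

```python
def change_volume(scene: dict, element_id: str, factor: float) -> float:
    """Steps 7141-7143 sketched: read the first (original) volume parameter,
    derive the second parameter from the change operation, write it back."""
    first_param = scene[element_id]["volume"]    # 7141: obtain the original volume
    second_param = first_param * factor          # 7142: apply the change operation
    scene[element_id]["volume"] = second_param   # 7143: change the element in the scene
    return second_param

scene = {"element_301": {"volume": 8.0}}
change_volume(scene, "element_301", 2.0)  # the spectator enlarges the element
```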
Through this process, the second object can perform a volume change operation on the target element on the second terminal, and the change is reflected in the first live broadcast picture, so that the corresponding change effect can be seen on the first terminal.
715, The server sends the change status of the target element to the first terminal.
It can be understood that, after correspondingly updating the target element in the virtual scene, the server sends the change status to the first terminal, so that the first terminal changes the target element in the first live broadcast picture, making the target element easier, or harder, for the first object to notice.
716, The first terminal changes the target element in the first live broadcast picture.
It can be understood that, after receiving the change status of the target element, the first terminal performs the corresponding change operation on the target element in the first live broadcast picture.
Through this process, the audience can participate in the game to a certain extent, experience the game together with the game host, and assist the game host in collecting clues and finding props. Compared with the prior art, in which the audience can only interact with the host in the form of a barrage, this enriches the interactivity between the host and the audience and improves the game experience of the audience.
717, The server, in response to an interference request for the first object sent by a third terminal, generates a first interference instruction, where the first interference instruction is used to instruct the first terminal to add an interference element to the first live broadcast picture.
It will be appreciated that the third terminal corresponds to a third object, which may be a game player or a spectator in a different game camp from the first object. When the third object sends an interference request for the first object through the third terminal, the server responds to the request and generates an interference instruction, which instructs the first terminal to add an interference element to the first live broadcast picture.
The specific manner of adding may vary. In one possible case, the interference element affects the senses of the first object to achieve the interference effect, for example by adding occluding objects, noise, or special effects to the live broadcast picture. Specifically, occluders such as drifting clouds, swaying trees, or dynamic billboards may be added to certain areas; special effects such as sparks or smoke may be added to obstruct the line of sight of the first object; or the first terminal may play noise elements, such as the roar of an aircraft or the din of a crowd, to distract the first object. These interference elements may be dynamic or static, depending on the design and rules of the game.
In another possible case, the interference element affects the operation of the first object to achieve the interference effect, such as stopping or slowing the motion of the first object, changing its direction of movement, or increasing the difficulty of acquiring the target element. Specifically, a brief pause may be applied to render the first object inoperable, for example using control-type interference elements such as freezing or binding; a deceleration effect may slow the movement speed of the first object; a dizziness effect may invert the operation directions of the first object, for example turning a forward operation into a backward one or a leftward operation into a rightward one; or the first object may be directly controlled, for example forced to move in a designated direction (such as away from the target element).
Furthermore, in the above cases, an audience object in the same camp as the first object may also cancel the interference elements. For example, noise elements in the game may be reduced or eliminated by applying a corresponding noise reduction tool to the first object, and sensory interference, such as occluders and special effects in the picture, may be removed by an erasing operation. For interference elements affecting operation, a purge operation may clear the negative effect on the first object; an exchange operation may turn a negative effect into a positive one (for example, turning a deceleration effect into an acceleration effect, or movement away from the target element into movement toward it); and a bounce operation may bounce the interference element back into the game scene of an anchor object in the camp of the third object.
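The purge, exchange, and bounce operations described above can be sketched as follows (the effect names and the positive-counterpart mapping are assumptions, not from the application):

```python
# Hypothetical mapping from a negative effect to its positive counterpart.
POSITIVE_COUNTERPART = {"deceleration": "acceleration", "move_away": "move_toward"}

def add_interference(effects: list, effect: str) -> None:
    effects.append(effect)  # e.g. "deceleration", "freeze", "smoke"

def purge(effects: list) -> None:
    effects.clear()  # clear all negative effects on the first object

def exchange(effects: list) -> list:
    # Swap each negative effect for its positive counterpart where one exists.
    return [POSITIVE_COUNTERPART.get(e, e) for e in effects]

def bounce(effects: list, enemy_effects: list) -> None:
    # Bounce the interference elements back into the opposing camp's scene.
    enemy_effects.extend(effects)
    effects.clear()

first_object_effects = []
add_interference(first_object_effects, "deceleration")
swapped = exchange(first_object_effects)  # the negative effect becomes positive
```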
It can be understood that in practical application, the above-mentioned interference elements and the corresponding operations for eliminating the interference elements can be specifically implemented as an interactive prop in a live client.
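As a concrete illustration of the interference elements and clearing operations described above, the following is a minimal sketch assuming a simple in-memory player state; all names (Effect, PlayerState, apply_effect, purge, exchange) and the specific numeric values are illustrative assumptions, not details of the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class Effect:
    name: str          # e.g. "freeze", "slow", "dizzy" (illustrative)
    duration: float    # seconds the effect lasts
    negative: bool = True

@dataclass
class PlayerState:
    speed: float = 1.0
    controls_inverted: bool = False
    effects: list = field(default_factory=list)

def apply_effect(player: PlayerState, effect: Effect) -> None:
    """Apply an interference effect to the first object (the anchor)."""
    player.effects.append(effect)
    if effect.name == "slow":
        player.speed *= 0.5
    elif effect.name == "freeze":
        player.speed = 0.0
    elif effect.name == "dizzy":
        player.controls_inverted = True

def purge(player: PlayerState) -> None:
    """Clear all negative effects (the 'purge' operation above)."""
    for effect in list(player.effects):
        if effect.negative:
            player.effects.remove(effect)
    player.speed = 1.0
    player.controls_inverted = False

def exchange(player: PlayerState) -> None:
    """Turn a deceleration into an acceleration (the 'exchange' operation)."""
    for effect in player.effects:
        if effect.name == "slow":
            effect.name = "haste"
            effect.negative = False
            player.speed = 2.0
```

In an interactive prop implementation, each gift or prop in the live client would map onto one of these operations.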
718, The server sends a first interference instruction to the first terminal.
It can be understood that, after receiving the first interference instruction, the first terminal adds the corresponding interference element in the first live broadcast picture in response to the instruction, so as to achieve the effect of the third object interfering with the first object.
Corresponding to steps 717 and 718, the method further includes:
719, the second terminal generates an interference request in response to an interference operation of the second object on a fourth object, where the interference request is used to instruct the fourth terminal corresponding to the fourth object to add an interference element in a second live broadcast picture, and the second live broadcast picture corresponds to the viewing angle of the fourth object in the virtual scene.
It may be understood that the second object and the fourth object may belong to different game camps, so the second object may also perform an interference operation on the fourth object. In that case the second terminal generates a corresponding interference request instructing the fourth terminal corresponding to the fourth object to add an interference element in the second live broadcast picture, where the second live broadcast picture is the picture of the fourth object executing the virtual scene through the fourth terminal. It can be appreciated that, in step 719, the interference operation performed by the second object on the fourth object is similar to the interference operation performed by the third object on the first object in steps 717 to 718: the interference request generated by the second terminal is first sent to the server, the server then generates a corresponding second interference instruction based on the request and sends it to the fourth terminal, and the fourth terminal adds the interference element in the second live broadcast picture based on the second interference instruction.
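The relay described in steps 717 to 719 (viewer terminal, then server, then the targeted anchor's terminal) can be sketched as follows; the message shapes, field names, and the list standing in for a network channel are assumptions for illustration only.

```python
def relay_interference(request: dict, terminals: dict) -> dict:
    """Translate a viewer's interference request into an interference
    instruction and route it to the terminal of the targeted anchor object."""
    instruction = {
        "type": "interference_instruction",
        "element": request["element"],       # e.g. "freeze", "smoke_screen"
        "target": request["target_object"],  # e.g. the first or fourth object
    }
    target_terminal = terminals[request["target_object"]]
    target_terminal.append(instruction)      # stand-in for a network send
    return instruction
```

The same relay serves both the third object interfering with the first object (steps 717-718) and the second object interfering with the fourth object (step 719); only the target differs.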
720, the second terminal obtains a third parameter in response to a viewing angle adjustment operation of the second object on the first live broadcast picture, where the third parameter is the viewing angle parameter of the first live broadcast picture;
721, the second terminal changes the third parameter based on the viewing angle adjustment operation to obtain a fourth parameter;
And 722, the second terminal changes the viewing angle of the first live broadcast picture according to the fourth parameter.
It is understood that, through the above steps 720 to 722, the second object may adjust the viewing angle of the first live broadcast picture. When the second object performs an adjustment operation on the viewing angle, the second terminal responds to the operation and acquires the third parameter. This third parameter is the viewing angle parameter of the first live broadcast picture and may include the direction, angle, distance, and the like of the viewing angle; it can be obtained from the first terminal, since the first live broadcast picture is generated and displayed on the first terminal. The second terminal then changes the third parameter based on the viewing angle adjustment operation to obtain the fourth parameter, a new viewing angle parameter reflecting the second object's adjustment. Finally, the second terminal changes the viewing angle of the first live broadcast picture based on the fourth parameter, so that the viewer sees the adjusted viewing angle on the second terminal, providing a richer and more immersive viewing experience. The viewing angle adjustment operation includes moving the first live broadcast picture up and down, rotating it, zooming in and out, and the like.
Through the above process, the second object can adjust the viewing angle of the first live broadcast picture on the second terminal and see the corresponding change there, improving the viewer's experience of the game.
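The parameter flow of steps 720 to 722 can be sketched as follows, assuming the viewing angle parameter consists of yaw, pitch, and distance; the field names, clamping rules, and numeric limits are illustrative assumptions rather than details of the embodiment.

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ViewAngle:
    yaw: float       # horizontal rotation, degrees
    pitch: float     # vertical rotation, degrees
    distance: float  # camera distance (zoom)

def adjust_view(third_param: ViewAngle, d_yaw: float = 0.0,
                d_pitch: float = 0.0, zoom: float = 1.0) -> ViewAngle:
    """Apply a drag/zoom operation to the current viewing angle (the third
    parameter), clamping pitch and wrapping yaw, to obtain the fourth parameter."""
    return replace(
        third_param,
        yaw=math.fmod(third_param.yaw + d_yaw, 360.0),
        pitch=max(-89.0, min(89.0, third_param.pitch + d_pitch)),
        distance=max(0.1, third_param.distance / zoom),
    )
```

A mouse drag on the live picture would be translated into `d_yaw`/`d_pitch` deltas, and a scroll-wheel action into the `zoom` factor, before re-rendering the picture with the returned parameter.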
723, the server issues a virtual vote to the second object through the second terminal, where the virtual vote is used by the second object to vote on a candidate object through the second terminal;
724, the server determines, in response to a voting request of the second object on the candidate object sent by the second terminal, whether the candidate object is a preset object;
725, if the candidate object is the preset object, the server issues a virtual reward to the second object through the second terminal.
It will be appreciated that, for steps 723 to 725 described above, the server issues virtual votes to the second object via the second terminal, and the second object can use these votes to vote on candidate objects via the second terminal. During voting, a plurality of candidate objects are displayed on the second terminal so that the second object can select among them. The second terminal responds to the voting operation of the second object, generates a corresponding voting request, and sends it to the server; the server records the voting result and updates the vote count of the candidate object. The virtual votes may exist in digital form, each vote representing one voting right on a candidate object. If the candidate object is the preset object, the selection of the second object is correct, and the server issues a virtual reward to the second object. This virtual reward may exist in the form of a game prop, points, or another form, rewarding the second object for voting on the preset object.
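A minimal sketch of this voting check, under the assumption of a simple in-memory tally of votes and per-viewer vote balances, might look as follows; all names are illustrative, not from the embodiment.

```python
def cast_vote(tally: dict, voter_votes: dict, voter: str, candidate: str,
              preset: str) -> bool:
    """Consume one of the voter's virtual votes, update the candidate's
    tally, and return True (a reward is due) if the candidate is the
    preset object."""
    if voter_votes.get(voter, 0) <= 0:
        raise ValueError("no virtual votes left")
    voter_votes[voter] -= 1
    tally[candidate] = tally.get(candidate, 0) + 1
    return candidate == preset
```

The returned flag corresponds to the branch in step 725: when it is true, the server issues the virtual reward to the voting viewer.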
The interaction method in the virtual scene provided by the embodiment of the present application enables spectators to participate in the game to a certain extent, share the game experience with the players, and assist the players in clue collection and prop searching, enriching the interactivity between players and spectators and improving the spectators' game experience.
Referring to fig. 7c, fig. 7c is a flowchart describing the interactions performed by the anchor object or the viewer object on the live client according to an embodiment of the present application; the method corresponds to the signaling diagrams of fig. 7a and fig. 7b and includes:
S1, responding to scene selection operation of a first object, displaying a plurality of selectable scenes to the first object, wherein the virtual scene belongs to the selectable scenes.
This step corresponds to step 701 and step 702 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
S2, in response to an execution request of the first object for the virtual scene, determining whether the number of anchor objects executing the virtual scene meets a preset requirement, where the first object belongs to the anchor objects, and if the preset requirement is met, generating a first live broadcast picture.
This step corresponds to steps 703, 7041, 7042 and 705 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
And S3, responding to the selection operation of the virtual scene sent by the second object, sending a plurality of anchor objects for executing the virtual scene to the second object, wherein the first object belongs to the anchor objects.
This step corresponds to step 708 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
S4, in response to a viewing operation of the second object on the first object, displaying a first live broadcast picture, where the first live broadcast picture corresponds to the viewing angle of the first object in the virtual scene, and the virtual scene includes a plurality of interactive elements;
This step corresponds to steps 709, 710 and 711 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
S5, in response to a change operation of the second object on a target element based on the first live broadcast picture, changing the target element in the virtual scene, where the target element belongs to the plurality of interactive elements.
This step corresponds to steps 712, 713 and 714 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
In one possible implementation, the change operation is specifically a volume change operation, and step S5 specifically includes:
acquiring a first parameter in response to the change operation on the target element sent by the second object, where the first parameter is the volume parameter of the target element;
changing the first parameter based on the change operation to obtain a second parameter;
and changing the target element according to the second parameter.
It is understood that this step corresponds to the refinement steps 7141 to 7143 of step 714 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
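Interpreting the volume parameter as the spatial size of the target element (as with the magnifier gift example in this application), steps 7141 to 7143 can be sketched as follows; the scene representation and all names are illustrative assumptions.

```python
def change_element_size(elements: dict, target: str, scale: float) -> float:
    """Read the first parameter (current size of the target element), apply
    the change operation (a scale factor), and store the second parameter
    (the new size) back into the scene."""
    first_param = elements[target]          # current volume parameter
    second_param = first_param * scale      # magnifier: scale > 1; minifier: scale < 1
    elements[target] = second_param
    return second_param
```

A magnifier gift dragged onto a prop would map to a `scale` greater than 1, and a minifier gift to a `scale` smaller than 1.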
And S6, updating the first live broadcast picture according to the change of the target element in the virtual scene.
This step corresponds to steps 715 and 716 in the corresponding embodiment of fig. 7a, and the detailed description is referred to above, and will not be repeated here.
S7, in response to an interference operation of the third object on the first object, adding an interference element in the first live broadcast picture.
This step corresponds to steps 717 and 718 in the corresponding embodiment of fig. 7b, and the detailed description is referred to above, and will not be repeated here.
And S8, in response to an interference operation of the second object on a fourth object, adding an interference element in a second live broadcast picture, where the second live broadcast picture corresponds to the viewing angle of the fourth object in the virtual scene.
This step corresponds to step 719 in the corresponding embodiment of fig. 7b, and the detailed description is referred to above, and will not be repeated here.
S9, acquiring a third parameter in response to a viewing angle adjustment operation of the second object on the first live broadcast picture, where the third parameter is the viewing angle parameter of the first live broadcast picture;
S10, changing the third parameter based on the viewing angle adjustment operation to obtain a fourth parameter;
and S11, changing the viewing angle of the first live broadcast picture according to the fourth parameter.
It can be understood that step S9 to step S11 correspond to step 720 to step 722 in the corresponding embodiment of fig. 7b, and the detailed description is referred to above, and the detailed description is omitted here.
S12, issuing a virtual vote to the second object, where the virtual vote is used by the second object to vote on a candidate object;
S13, determining, in response to a voting operation of the second object on the candidate object, whether the candidate object is a preset object;
and S14, if the candidate object is the preset object, issuing a virtual reward to the second object.
It is understood that steps S12 to S14 correspond to steps 723 to 725 in the corresponding embodiment of fig. 7b, and the detailed description is referred to above, and the detailed description is omitted herein.
For ease of understanding, an interaction method applied to a virtual scene of a scenario game will be described below with reference to fig. 7a to 7c and fig. 8a to 8h.
First, the scenario game is introduced. A scenario game is a deduction game that players experience in a live setting, online or on a real-world stage, and belongs to deduction entertainment. Under its rules, each player first selects a role, reads the script corresponding to that role, collects clues, and tries to find the target role hidden in the game. A scenario game is not only a game but also an entertainment activity combining knowledge, psychological gaming, and strong social attributes. It integrates elements such as role playing, reasoning, and verification, and offers different themes, such as Republican-era China, ancient style, and martial arts (wuxia). Through costumes, realistic scene restoration, live NPC performance, music, and the like, it gives players an immersive, on-the-scene experience.
Referring to fig. 8a, fig. 8a is a schematic diagram of a game interface of a scenario game.
A player may select a character based on the character profile; the players include player A, player B, player C, player D, and the like, and the characters include character A, character B, character C, and the like, together with their corresponding profiles. After selection, the script corresponding to the character can be read in the text area, and clues can be collected in the game picture (or live broadcast picture) area. A disadvantage of existing online scenario games is that only players can participate and non-players cannot. The present application therefore provides an interaction method in a virtual scene that adds a viewer's viewing angle, binds the connection between viewers and anchors, and lets viewers help the anchor search for evidence. The viewers can even take part in the anchor's culprit-identification stage. After a final win, both the anchor and the viewers can obtain game benefits.
Referring to fig. 8b to 8h, an interaction method applied to a virtual scene of a scenario game will be described with reference to the accompanying drawings.
1) After selecting the avatar to play, the anchor clicks "Start" to open the game (corresponding to steps 701 and 702), selects the scenario to play, which is sent to the server (corresponding to step 703), and clicks to join a team. After other anchors join the team and the minimum number of players for the scenario is reached, the game starts with an automatic countdown, as shown in fig. 8c (corresponding to steps 704 and 705).
2) As shown in fig. 8d, after a viewer enters a live room where the deduction game is being played, the viewer selects a camp (corresponding to steps 706 and 707). From the anchor lineup, the viewer can see which scenario each anchor plays (corresponding to steps 708 and 709) and can look around the room through 360 degrees by dragging the mouse (corresponding to steps 720 to 722).
3) After the deduction starts, the anchor walks around the room by clicking the arrow guides on the screen. Clicking a prop found in the room opens it, and the prop can then be viewed at any time in the backpack, as shown in fig. 8e.
4) The viewer may click the head portrait of their own anchor in the list and choose to follow that viewing angle (corresponding to steps 710 and 711). Props can be found through the followed viewing angle, and by dragging the magnifier gift onto a prop (an interactive element), the prop is enlarged so that the anchor notices it more easily, as shown in fig. 8f (corresponding to steps 712 to 716). The viewer can likewise give a minifier gift, simply by dragging the gift prop onto the corresponding prop.
5) Viewers obtain votes according to their online duration, and the votes are automatically placed into the backpack (corresponding to step 723). At a certain stage of the scenario there is a voting session, and all viewers can vote using the voting props they have obtained (corresponding to step 724). To use a vote, a viewer clicks the user object to vote for, as shown in fig. 8g.
6) If the anchor's camp wins, all deduction props collected by the anchor during the deduction are converted into gift props of varying value, which are issued as rewards to the top ten viewers by online duration; a viewer finds them in the backpack and can use them for gifting and the like. The anchor receives a reward of game points, which may be used to redeem props, as shown in fig. 8h (corresponding to step 725).
It will be appreciated that the above is merely one example of applying the interaction method in a virtual scene provided by the embodiments of the present application to a scenario game; the gameplay described above is an example and not a limitation, and a person skilled in the art may define the game rules as needed, for example whether arrow guidance is required, whether deduction props are converted into gift props, whether only the top 10 viewers receive prop rewards, and the like.
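The settlement described in step 6) above can be sketched as follows, assuming a simple ranking by online duration; the round-robin distribution of converted props and all names are illustrative assumptions rather than details of the embodiment.

```python
def settle_rewards(collected_props: list, viewers_online_seconds: dict,
                   top_n: int = 10) -> dict:
    """On a camp win, convert the anchor's collected deduction props into
    gift props and distribute them among the top-N viewers by online
    duration. Returns a mapping viewer -> list of gift props."""
    ranked = sorted(viewers_online_seconds, key=viewers_online_seconds.get,
                    reverse=True)[:top_n]
    rewards = {v: [] for v in ranked}
    # Hand out the converted gift props round-robin among the winners.
    for i, prop in enumerate(collected_props):
        if ranked:
            rewards[ranked[i % len(ranked)]].append(prop)
    return rewards
```

In a real system the props would also carry values ("gift props of varying value"), and the anchor's game-point reward would be credited separately.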
The first terminal in the present application will be described in detail with reference to fig. 9. Fig. 9 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present application, where a terminal device 900 includes:
a display module 901, configured to display, in response to an execution operation of the first object on the virtual scene, a first live broadcast picture corresponding to the virtual scene, where the virtual scene includes a plurality of interactive elements;
and a changing module 902, configured to change, in response to a change request for a target element sent by the second terminal, the target element in the first live broadcast picture, where the target element belongs to the plurality of interactive elements, and the change request is generated based on a change operation of the second object on the target element.
In one possible implementation, the terminal device further includes:
an interference module 903, configured to add a corresponding interference element in the first live broadcast picture in response to an interference request sent by the third terminal.
It can be understood that, in the terminal device provided in the embodiment of the present application, the method performed by the first terminal in fig. 6, fig. 7a, and fig. 7b is specifically referred to the related description in the corresponding embodiment, and will not be repeated here.
The server in the present application will be described in detail with reference to fig. 10. Fig. 10 is a schematic diagram of an embodiment of a server according to an embodiment of the present application, where the server 1000 includes:
an obtaining module 1001, configured to obtain, from the first terminal, a first live broadcast picture of the first object executing the virtual scene;
a synchronization module 1002, configured to synchronize the first live broadcast picture to the second terminal in response to a viewing request for the first object sent by the second terminal;
a changing module 1003, configured to change, in response to a change request for a target element sent by the second terminal, the target element in the virtual scene, where the target element belongs to the plurality of interactive elements, and the change request is generated based on a change operation of the second object on the target element;
and a sending module 1004, configured to send a change instruction to the first terminal.
In one possible implementation, the server further includes:
a game selection module, configured to send a plurality of selectable scenes to the first terminal in response to a selection request of the first terminal for the virtual scene;
and an execution module 1005, configured to send, to the first terminal in response to an execution request of the first terminal for the virtual scene, a scene parameter corresponding to the virtual scene, where the scene parameter is used for the first terminal to generate the first live broadcast picture corresponding to the virtual scene, and the virtual scene belongs to the selectable scenes.
In one possible implementation method, an execution module 1005 is specifically configured to determine, in response to an execution request of the first terminal for the virtual scene, whether the number of anchor objects for executing the virtual scene meets a preset requirement, where the first object belongs to the anchor object, and if the preset requirement is met, send, by the server, a scene parameter corresponding to the virtual scene to the first terminal.
In one possible implementation, the server further includes:
an anchor selection module, configured to send, to the second terminal in response to a selection request for the virtual scene sent by the second terminal, a plurality of anchor objects executing the virtual scene.
In one possible implementation, the server further includes:
an issuing module 1005, configured to issue a virtual vote to the second object, where the virtual vote is used by the second object to vote on a candidate object through the second terminal;
a voting module 1006, configured to determine, in response to a voting request of the second object on the candidate object sent by the second terminal, whether the candidate object is a preset object;
and a rewarding module 1007, configured to issue a virtual reward to the second object if the candidate object is the preset object.
In one possible implementation, the change operation is in particular a volume change operation;
The changing module 1003 is specifically configured to obtain a first parameter in response to a change request for a target element sent by the second terminal, where the first parameter is a volume parameter of the target element, change the first parameter based on a change operation to obtain a second parameter, and change the target element in the virtual scene based on the second parameter.
In one possible implementation, the server further includes:
an interference module, configured to generate, in response to an interference request for the first object sent by the third terminal, a first interference instruction, where the first interference instruction is used to instruct the first terminal to add an interference element in the first live broadcast picture;
The sending module 1004 is further configured to send the first interference instruction to the first terminal.
It can be appreciated that, the server provided in the embodiment of the present application is configured to perform the method performed by the server in fig. 6, fig. 7a and fig. 7b, and the detailed description of the corresponding embodiment is referred to above, and will not be repeated here.
The second terminal in the present application will be described in detail with reference to fig. 11. Fig. 11 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present application, where a terminal device 1100 includes:
a synchronization module 1101, configured to send a viewing request to the server in response to a viewing operation of the second object on the first object, where the viewing request is used for the server to return a first live broadcast picture, the first live broadcast picture corresponds to the viewing angle of the first object in a virtual scene, and the virtual scene includes a plurality of interactive elements;
and a changing module 1102, configured to send a change request to the server in response to a change operation of the second object on a target element, where the change request is used for the server to change the target element in the virtual scene, and the target element belongs to the plurality of interactive elements.
In one possible implementation, the terminal device further includes:
an adjustment module 1103, configured to determine a third parameter in response to a viewing angle adjustment operation of the second object on the first live broadcast picture, where the third parameter is the viewing angle parameter of the first live broadcast picture, change the third parameter based on the viewing angle adjustment operation to obtain a fourth parameter, and change the viewing angle of the first live broadcast picture based on the fourth parameter.
In one possible implementation, the terminal device further includes:
an interference module, configured to generate an interference request in response to an interference operation of the second object on a fourth object, where the interference request is used to instruct the fourth terminal corresponding to the fourth object to add an interference element in a second live broadcast picture, and the second live broadcast picture is the picture of the fourth object executing the virtual scene through the fourth terminal.
It can be understood that, in the terminal device provided in the embodiment of the present application, the method performed by the second terminal in fig. 6, fig. 7a, and fig. 7b is specifically referred to the related description in the corresponding embodiment, and will not be repeated here.
Referring to fig. 12 for a detailed description of an interactive device in a virtual scene in the present application, fig. 12 is a schematic structural diagram of the interactive device in the virtual scene in an embodiment of the present application, and an interactive device 1200 in the virtual scene includes:
the live broadcast module 1201 is configured to respond to a viewing operation of the first object by the second object, and display a first live broadcast picture, where the first live broadcast picture corresponds to a viewing angle of the first object in a virtual scene, and the virtual scene includes a plurality of interactive elements;
and an element changing module 1202, configured to change, in response to a change operation of the second object on a target element based on the first live broadcast picture, the target element in the virtual scene, where the target element belongs to the plurality of interactive elements.
In one possible implementation, the device further includes:
a picture updating module, configured to update the first live broadcast picture according to the change of the target element in the virtual scene.
In one possible implementation, the device further includes:
a picture generation module, configured to generate the first live broadcast picture in response to an execution operation of the first object on the virtual scene.
In one possible implementation, the device further includes:
a scene selection module, configured to display a plurality of selectable scenes to the first object in response to a scene selection operation of the first object, where the virtual scene belongs to the selectable scenes.
In one possible implementation, the live broadcast module 1201 is specifically configured to determine, in response to an execution request of the first object for the virtual scene, whether the number of anchor objects executing the virtual scene meets a preset requirement, where the first object belongs to the anchor objects, and if the preset requirement is met, generate the first live broadcast picture.
In one possible implementation, the device further includes:
an anchor selection module, configured to send, to the second object in response to a selection operation on the virtual scene sent by the second object, a plurality of anchor objects executing the virtual scene, where the first object belongs to the anchor objects.
In one possible implementation, the device further includes:
a voting module, configured to issue a virtual vote to the second object, where the virtual vote is used by the second object to vote on a candidate object, determine, in response to a voting operation of the second object on the candidate object, whether the candidate object is a preset object, and issue a virtual reward to the second object if the candidate object is the preset object.
In one possible implementation, the change operation is specifically a volume change operation;
the element changing module 1202 is specifically configured to acquire a first parameter in response to the change operation on the target element sent by the second object, where the first parameter is the volume parameter of the target element, modify the first parameter based on the change operation to obtain a second parameter, and change the target element according to the second parameter.
In one possible implementation, the device further includes:
an interference module, configured to add an interference element in the first live broadcast picture in response to an interference operation of the third object on the first object.
In one possible implementation, the interference module is further configured to add an interference element in the second live broadcast picture in response to an interference operation of the second object on the fourth object, where the second live broadcast picture corresponds to the viewing angle of the fourth object in the virtual scene.
In one possible implementation, the device further includes:
a viewing angle adjustment module, configured to obtain a third parameter in response to a viewing angle adjustment operation of the second object on the first live broadcast picture, where the third parameter is the viewing angle parameter of the first live broadcast picture, change the third parameter based on the viewing angle adjustment operation to obtain a fourth parameter, and change the viewing angle of the first live broadcast picture according to the fourth parameter.
It can be understood that the interaction device in the virtual scenario provided by the embodiment of the present application is used for executing the method in fig. 7c, and specific reference is made to the related description in the corresponding embodiment, which is not repeated here.
Fig. 13 is a schematic diagram of a server structure according to an embodiment of the present application. The server 300 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 322 (e.g., one or more processors), a memory 332, and one or more storage media 330 (e.g., one or more mass storage devices) storing application programs 342 or data 344. The memory 332 and the storage medium 330 may be transitory or persistent. The program stored on the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processor 322 may be configured to communicate with the storage medium 330 and execute, on the server 300, the series of instruction operations in the storage medium 330.
The server 300 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 13.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the present application.
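For illustration only, the viewer-driven element change that the claims below formalize (display a live broadcast picture corresponding to the anchor's viewing angle; apply a viewer's volume-change operation to a target element and refresh the picture) might be sketched as follows. All names here (`VirtualScene`, `change_element_volume`, the 0-100 volume range) are assumptions invented for this sketch and are not part of the disclosure.

```python
class VirtualScene:
    """A virtual scene containing interactive elements with volume parameters."""

    def __init__(self, elements):
        # elements: mapping of element id -> volume parameter
        # (the "first parameter" of claim 8)
        self.elements = dict(elements)
        # bumped whenever the live broadcast picture is updated (claim 2)
        self.live_picture_version = 0

    def change_element_volume(self, element_id, delta):
        """Apply a viewer's volume-change operation to a target element.

        Returns the modified volume (the "second parameter" of claim 8).
        """
        if element_id not in self.elements:
            raise KeyError(f"{element_id!r} is not an interactive element of this scene")
        first_parameter = self.elements[element_id]        # acquire the first parameter
        # modify it based on the change operation, clamping to an assumed 0-100 range
        second_parameter = max(0, min(100, first_parameter + delta))
        self.elements[element_id] = second_parameter       # change the target element
        self.live_picture_version += 1                     # update the live broadcast picture
        return second_parameter


scene = VirtualScene({"waterfall": 40, "bgm": 70})
new_volume = scene.change_element_volume("waterfall", +25)
print(new_volume, scene.live_picture_version)  # 65 1
```

A real implementation would additionally synchronize the changed parameter to the anchor's client and re-render the streamed picture; the sketch only tracks a version counter to stand in for that refresh.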

Claims (15)

1. An interaction method in a virtual scene, comprising:
in response to a viewing operation of a second object on a first object, displaying a first live broadcast picture, wherein the first live broadcast picture corresponds to a viewing angle of the first object in a virtual scene, and the virtual scene comprises a plurality of interactive elements; and
in response to a change operation of the second object on a target element based on the first live broadcast picture, changing the target element in the virtual scene, wherein the target element belongs to the plurality of interactive elements.
2. The method of claim 1, further comprising, after the changing of the target element in the virtual scene in response to the change operation of the second object on the target element based on the first live broadcast picture:
updating the first live broadcast picture according to the change of the target element in the virtual scene.
3. The method of claim 1, further comprising, prior to the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
generating the first live broadcast picture in response to an execution operation of the first object on the virtual scene.
4. The method of claim 3, further comprising, prior to the generating of the first live broadcast picture in response to the execution operation of the first object on the virtual scene:
in response to a scene selection operation of the first object, displaying a plurality of selectable scenes to the first object, wherein the virtual scene belongs to the plurality of selectable scenes.
5. The method of claim 3, wherein the generating of the first live broadcast picture in response to the execution operation of the first object on the virtual scene comprises:
in response to an execution request of the first object for the virtual scene, determining whether the number of anchor objects executing the virtual scene meets a preset requirement, wherein the first object belongs to the anchor objects; and
if the preset requirement is met, generating the first live broadcast picture.
6. The method of claim 1, further comprising, prior to the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
in response to a selection operation on the virtual scene sent by the second object, sending, to the second object, a plurality of anchor objects executing the virtual scene, wherein the first object belongs to the plurality of anchor objects.
7. The method of claim 1, further comprising, after the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
issuing a virtual vote to the second object, wherein the virtual vote is used by the second object to vote on a candidate object;
in response to a voting operation of the second object on the candidate object, determining whether the candidate object is a preset object; and
if the candidate object is the preset object, issuing a virtual reward to the second object.
8. The method of claim 1, wherein the change operation is a volume change operation; and
the changing of the target element in the virtual scene in response to the change operation of the second object on the target element based on the first live broadcast picture comprises:
in response to the change operation on the target element sent by the second object, acquiring a first parameter, wherein the first parameter is a volume parameter of the target element;
modifying the first parameter based on the change operation to obtain a second parameter; and
changing the target element according to the second parameter.
9. The method of claim 1, further comprising, after the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
adding an interference element to the first live broadcast picture in response to an interference operation of a third object on the first object.
10. The method of claim 1, further comprising, after the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
adding an interference element to a second live broadcast picture in response to an interference operation of the second object on a fourth object, wherein the second live broadcast picture corresponds to a viewing angle of the fourth object in the virtual scene.
11. The method of claim 1, further comprising, after the displaying of the first live broadcast picture in response to the viewing operation of the second object on the first object:
in response to a viewing-angle adjustment operation of the second object on the first live broadcast picture, acquiring a third parameter, wherein the third parameter is a viewing-angle parameter of the first live broadcast picture;
changing the third parameter based on the viewing-angle adjustment operation to obtain a fourth parameter; and
changing the viewing angle of the first live broadcast picture according to the fourth parameter.
12. An interactive apparatus in a virtual scene, comprising:
a live broadcast module, configured to display a first live broadcast picture in response to a viewing operation of a second object on a first object, wherein the first live broadcast picture corresponds to a viewing angle of the first object in a virtual scene, and the virtual scene comprises a plurality of interactive elements; and
an element change module, configured to change a target element in the virtual scene in response to a change operation of the second object on the target element based on the first live broadcast picture, wherein the target element belongs to the plurality of interactive elements.
13. A computer device, comprising a memory, a transceiver, a processor, and a bus system;
wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory, including performing the interaction method in a virtual scene according to any one of claims 1 to 11; and
the bus system is configured to connect the memory and the processor so that the memory and the processor communicate.
14. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the interaction method in a virtual scene as claimed in any one of claims 1 to 11.
15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the interaction method in a virtual scene according to any one of claims 1 to 11.
CN202410046787.9A 2024-01-11 2024-01-11 An interactive method and related device in a virtual scene Pending CN120302072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410046787.9A CN120302072A (en) 2024-01-11 2024-01-11 An interactive method and related device in a virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410046787.9A CN120302072A (en) 2024-01-11 2024-01-11 An interactive method and related device in a virtual scene

Publications (1)

Publication Number Publication Date
CN120302072A (en) 2025-07-11

Family

ID=96270053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410046787.9A Pending CN120302072A (en) 2024-01-11 2024-01-11 An interactive method and related device in a virtual scene

Country Status (1)

Country Link
CN (1) CN120302072A (en)

Similar Documents

Publication Publication Date Title
US11794102B2 (en) Cloud-based game streaming
KR102740573B1 (en) Expanded VR participation and viewing of esports events
US7446772B2 (en) Spectator experience for networked gaming
US9616338B1 (en) Virtual reality session capture and replay systems and methods
US20200282313A1 (en) Integrated online gaming portal offering entertainment-related casual games and user-generated media
CN103885768B (en) Long-range control of the second user to the game play of the first user
RU2605840C2 (en) Automatic design of proposed mini-games for cloud games based on recorded game process
US12220632B2 (en) Method and apparatus for executing interaction event
Chesher Neither gaze nor glance, but glaze: relating to console game screens
US20090215512A1 (en) Systems and methods for a gaming platform
US20220189256A1 (en) System and method for conducting online video game tournaments and matches
KR20230007411A (en) Distribution system, control method of distribution system, and storage medium storing computer program
WO2024101001A1 (en) Information processing system, information processing method, and program for communication points regarding events
US20240033647A1 (en) System and Method for Conducting Online Video Game Tournaments and Matches
US20210397334A1 (en) Data management and performance tracking system for walkable or interactive virtual reality
Mack Evoking interactivity: film and videogame intermediality since the 1980s
Drucker et al. Spectator games: A new entertainment modality of networked multiplayer games
JP2022056812A (en) Computer system and public control system
CN120302072A (en) An interactive method and related device in a virtual scene
JP2022156250A (en) CONTENT PROVIDING SYSTEM, SERVER DEVICE AND PROGRAM
Chang et al. Eye space: an analytical framework for the screen-mediated relationship in video games
US20240048795A1 (en) Real-time interactive platform for live streams
Cumming Constructing authentic esports spectatorship: an ethnography
CN119729026A (en) Live broadcast processing method and device, electronic equipment and storage medium
CN120242461A (en) Game event broadcasting method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication