CN115119004B - Data processing method, information display device, server and terminal equipment - Google Patents
- Publication number
- CN115119004B (application CN202210551425.6A)
- Authority
- CN
- China
- Prior art keywords
- user
- video data
- object related
- related information
- target part
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/2542—Management at additional data server, e.g. shopping server, for selling goods, e.g. TV shopping
- H04N21/4312—Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4415—Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
- H04N21/4668—Learning process for intelligent management for recommending content, e.g. movies
- H04N21/47815—Electronic shopping
- H04N21/4788—Supplemental services communicating with other users, e.g. chatting
- H04N7/15—Conference systems (under H04N7/00—Television systems; H04N7/14—Systems for two-way working)
Abstract
Embodiments of the present application provide a data processing method, an information display method and device, a server, and terminal equipment, relating to the field of network technology. In the embodiments of the present application, video data uploaded by a first user side is obtained and sent to a second user side, so that the second user side plays the video data on a playing interface; a target part of an associated object related to the video data is determined based on the video data; a prompt identifier corresponding to the target part is generated; and the prompt identifier is sent to the second user side, so that the second user side outputs the prompt identifier in the target display area where the target part is located in the playing interface, thereby improving the stickiness between the video content and viewing users.
Description
This application is a divisional application of the application filed on May 13, 2019, with application number 201910394238.X and entitled "Data processing method, information display method, device, server and terminal equipment".
Technical Field
Embodiments of the present application relate to the field of network technology, and in particular to a data processing method, an information display device, a server, and terminal equipment.
Background
At present, some live-streaming users who sell or introduce goods during a network live broadcast lack professional, systematic training, so they struggle to grasp the demands of viewing users during the broadcast and cannot provide personalized, in-depth explanations for different viewing users. As a result, the live content fails to attract viewing users, which affects the live-broadcast effect.
Disclosure of Invention
Embodiments of the present application provide a data processing method, an information display device, a server, and terminal equipment, which enable a viewing user to obtain more information effectively and in time, helping to improve user stickiness.
In a first aspect, an embodiment of the present application provides a data processing method, including:
acquiring video data sent by a first user side, and sending the video data to a second user side so that the second user side plays the video data on a playing interface;
acquiring at least one user characteristic of the second user side;
determining object related information matching the at least one user feature; wherein the object related information is related data of an associated object related to the video data;
and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
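The four steps of the first aspect can be sketched as follows. This is a minimal illustrative sketch in Python; all names (`relay_video`, `match_object_info`, `push_object_info`) and the tag-overlap scoring are hypothetical — the patent does not specify a matching algorithm.

```python
# Hypothetical sketch of the first-aspect data processing method.
# The patent does not specify a matching algorithm; tag overlap is
# used here purely for illustration.

def relay_video(video_data, second_user_ids, send):
    """Step 1: forward video data from the first user side to each second user side."""
    for uid in second_user_ids:
        send(uid, {"type": "video", "payload": video_data})

def match_object_info(user_features, object_catalog):
    """Steps 2-3: pick the object related information whose tags best
    overlap the viewing user's features (user type, purchase history, etc.)."""
    return max(object_catalog,
               key=lambda info: len(set(info["tags"]) & set(user_features)))

def push_object_info(uid, info, send):
    """Step 4: send the matched info so the second user side can output
    it in the playing interface."""
    send(uid, {"type": "object_info", "payload": info})
```

Because matching is done per viewer, two viewers of the same stream can receive different object related information, which is the personalization the disclosure describes.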
In a second aspect, an embodiment of the present application provides a data processing method, including:
acquiring video data uploaded by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user side so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface.
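The second-aspect flow — detecting a target part and then generating a prompt identifier anchored to it — might look like the following sketch. The part-detection step itself is assumed to be done upstream (the patent does not fix a particular model); `make_prompt` and the bounding-box convention are illustrative names only.

```python
# Illustrative only: turn a detected target part of the associated object
# (e.g. the wrist region where a watch is worn) into a prompt identifier
# positioned over that part. Part detection is assumed to happen upstream,
# e.g. via an image-recognition model.

def make_prompt(part_name, bbox):
    """bbox = (x, y, w, h) of the target part in frame coordinates."""
    x, y, w, h = bbox
    return {
        "part": part_name,
        "anchor": (x + w // 2, y),  # top-center of the target display area
        "text": f"Tap for details about the {part_name}",
    }
```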
In a third aspect, an embodiment of the present application provides an information display method, including:
receiving video data sent by a server side and playing the video data on a playing interface;
receiving object related information sent by the server side; wherein the object related information is obtained by the server side by matching against at least one user feature of the second user side;
and outputting the object related information in the playing interface.
In a fourth aspect, an embodiment of the present application provides an information display method, including:
receiving video data sent by a server and outputting the video data on a playing interface;
receiving a prompt identifier sent by the server side; wherein the prompt identifier is generated by the server side based on a target part of an associated object related to the video data, the target part of the associated object being determined based on the video data;
and outputting the prompt identifier in a target display area where the target part is positioned in the playing interface.
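On the second user side, outputting the prompt identifier "in the target display area where the target part is located" implies mapping frame coordinates onto the playing interface. A minimal sketch of that mapping, with hypothetical names and the assumption that the prompt anchor arrives in frame coordinates:

```python
def to_screen(anchor, video_size, view_size):
    """Scale a frame-coordinate anchor (x, y) into the playing interface's
    coordinate system so the prompt identifier lands over the target part."""
    sx = view_size[0] / video_size[0]
    sy = view_size[1] / video_size[1]
    return (round(anchor[0] * sx), round(anchor[1] * sy))
```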
In a fifth aspect, in an embodiment of the present application, there is provided a data processing apparatus, including:
the first acquisition module is used for acquiring video data uploaded by a first user;
the first sending module is used for sending the video data to a second user side so that the second user side plays the video data on a playing interface;
the second acquisition module is used for acquiring at least one user characteristic of the second user terminal;
a first determining module, configured to determine object related information matched with the at least one user feature; wherein the object related information is related data of an associated object related to the video data;
and the second sending module is used for sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
In a sixth aspect, in an embodiment of the present application, there is provided a data processing apparatus, including:
the video data acquisition module is used for acquiring video data uploaded by the first user;
the video data transmitting module is used for transmitting the video data to a second user side so that the second user side plays the video data on a playing interface;
a second determining module, configured to determine, based on the video data, a target part of an associated object related to the video data;
the prompt identifier generation module is used for generating a prompt identifier corresponding to the target part;
and the prompt identifier sending module is used for sending the prompt identifier to the second user side so that the second user side can output the prompt identifier in a target display area where the target part is located in the playing interface.
In a seventh aspect, an embodiment of the present application provides an information display apparatus, including:
the first receiving module is used for receiving video data sent by the server;
the first playing module is used for playing the video data on a playing interface;
the second receiving module is used for receiving the object related information sent by the server side; wherein the object related information is obtained by the server side by matching against at least one user feature of the second user side;
and the first output module is used for outputting the object related information in the playing interface.
In an eighth aspect, an embodiment of the present application provides an information display apparatus, including:
the third receiving module is used for receiving the video data sent by the server;
the second playing module is used for outputting the video data at a playing interface;
the fourth receiving module is used for receiving the prompt identifier sent by the server side; wherein the prompt identifier is generated by the server side for a target part of an associated object related to the video data, the target part of the associated object being determined based on the video data;
and the second output module is used for outputting the prompt identifier in a target display area where the target part is located in the playing interface.
In a ninth aspect, in an embodiment of the present application, a server is provided, including a processing component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
the processing assembly is configured to:
acquiring video data sent by a first user side, and sending the video data to a second user side so that the second user side plays the video data on a playing interface;
acquiring at least one user characteristic of the second user side;
determining object related information matching the at least one user feature; wherein the object related information is related data of an associated object related to the video data;
and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
In a tenth aspect, embodiments of the present application provide a server, including a processing component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
The processing assembly is configured to:
acquiring video data uploaded by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user side so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface.
In an eleventh aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
the processing assembly is configured to:
receiving video data sent by a server side and playing the video data on a playing interface of the display assembly;
receiving object related information sent by the server side; wherein the object related information is obtained by the server side by matching against at least one user feature of the second user side;
And outputting the object related information in a playing interface of the display component.
In a twelfth aspect, an embodiment of the present application provides a terminal device, including a processing component, a display component, and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
the processing assembly is configured to:
receiving video data sent by a server side and outputting the video data on a playing interface of the display assembly;
receiving a prompt identifier sent by the server side; wherein the prompt identifier is generated by the server side based on a target part of an associated object related to the video data, the target part of the associated object being determined based on the video data;
and outputting the prompt identifier in a target display area where the target part is positioned in a play interface of the display assembly.
Compared with the prior art, the embodiments of the present application have the following technical effects:
in the embodiments of the present application, the target part of the associated object in the video data is identified, and a prompt identifier corresponding to the target part is generated, so that the video content of the first user side prompts the viewing user dynamically and in real time; meanwhile, by displaying the object related information corresponding to the target part, the viewing user can effectively obtain more information in time. This improves the viewing experience of the viewing user, increases user stickiness, develops viewing users into potential customers, and promotes the commercial conversion of the associated object.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the prior art descriptions, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of one embodiment of a data processing method provided by the present application;
FIG. 2 is a schematic diagram of displaying object related information in a playing interface according to the present application;
FIG. 3 is a flowchart of yet another embodiment of a data processing method provided herein;
FIG. 4 is a schematic diagram of outputting a prompt identifier in the target display area of a target part of an associated object according to one embodiment of the present disclosure;
FIG. 5 is a flowchart of one embodiment of an information display method provided in accordance with the present application;
FIG. 6 is a flowchart of another embodiment of an information display method according to the present application;
FIG. 7 is a schematic structural diagram of one embodiment of a data processing apparatus in accordance with the present application;
FIG. 8 is a schematic structural diagram of another embodiment of a data processing apparatus according to the present application;
FIG. 9 is a schematic structural diagram of another embodiment of an information display device provided in accordance with the present application;
FIG. 10 is a schematic structural diagram of still another embodiment of an information display device provided according to the present application;
FIG. 11 is a schematic structural diagram of one embodiment of a server provided herein;
FIG. 12 is a schematic structural diagram of another embodiment of a server according to the present application;
FIG. 13 is a schematic structural diagram of an embodiment of a terminal device provided in the present application;
FIG. 14 is a schematic structural diagram of a further embodiment of a terminal device provided in the present application.
Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings.
Some of the flows described in the specification, claims, and foregoing figures include operations that occur in a particular order. It should be understood that these operations may be performed out of the order in which they appear, or in parallel; sequence numbers such as 101 and 102 merely distinguish the operations and do not by themselves represent any execution order. In addition, the flows may include more or fewer operations, which may be performed sequentially or in parallel. Note that the terms "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent an order, nor do they require that the "first" and the "second" be of different types.
During a network live broadcast, a live-streaming user may fail to grasp the demands of viewing users, leading to the technical problem that the live content cannot attract them. After a series of studies, the inventors propose the embodiments of the present application. In these embodiments, video data uploaded by the first user side is sent to the second user side, so that the second user side plays the video data on a playing interface. By acquiring at least one user feature of the second user side and determining object related information matching that feature, the object related information best suited to a viewing user's needs can be sent to the second user side for viewing, thereby improving the stickiness between the video content and viewing users and further improving the product conversion rate of the associated object.
The embodiments of the present application are applicable to, but not limited to, network live broadcast, video playback, recorded video, video chat, video conference, and other scenarios, which are not specifically limited herein.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Fig. 1 is a flowchart of an embodiment of a data processing method provided in the embodiments of the present application, where the technical solution of the present embodiment may be executed by a server, and the method may include the following steps:
101: and acquiring video data sent by a first user terminal, and sending the video data to a second user terminal so that the second user terminal plays the video data on a playing interface.
The video data is collected at the first user side during a live broadcast through devices such as a camera lens and a microphone, and may include video image information, voice information, sensing data collected by a sensing component of the first user side, setting data of video or audio special effects set by the first user side, and the like.
In practical application, in a video call, a video conference or a network live broadcast scene, a first user side sends collected video data to a server side, the server side sends the video data to corresponding second user sides in real time, and each second user side plays the video data in a respective playing interface so as to be watched by a watching user.
For recorded-video or playback scenarios — for example, video-related social platforms, multimedia platforms, and video websites — a first user can record video data through the first user side and upload it to the server side of the social platform, and a second user can log in to the social platform through the second user side to watch the video data uploaded by the first user. It can be understood that the terms first user side and second user side only distinguish the collection side and the playing side of the video data; in scenarios such as video conferences and video calls, both sides can simultaneously generate playing interfaces for bidirectional collection and playing of video data, in which case either side can serve as both the first user side and the second user side, which is not specifically limited herein.
102: and acquiring at least one user characteristic of the second user side.
In practical applications, the at least one user feature may be data entered by the viewing user when registering or logging in at the second user side, such as the viewing user's user type, gender, and age, and may further include a business category, transaction channels, and the like. The at least one user feature may also include historical data generated after the viewing user starts using the second user side, such as historical purchase data and historical viewing data, which may be set according to the actual situation and is not specifically limited herein.
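As a concrete (hypothetical) data shape, the registration data and historical data described above could be carried in a single record per viewing user; the class and field names below are illustrative, not from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ViewerFeatures:
    user_type: str                          # e.g. "new" or "repeat" (registration data)
    gender: Optional[str] = None
    age: Optional[int] = None
    business_category: Optional[str] = None
    purchase_history: List[str] = field(default_factory=list)  # historical data
    viewing_history: List[str] = field(default_factory=list)

    def tags(self) -> List[str]:
        """Flatten the features into tags usable for matching object related info."""
        return [self.user_type, *self.purchase_history, *self.viewing_history]
```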
103: object related information matching the at least one user feature is determined.
Wherein the object related information is related data of related objects related to the video data.
In practical applications, the related objects related to the video data may be commodities, exhibits, etc. introduced or sold in the video, which are not limited herein.
Taking network live broadcast as an example, a live-streaming user (i.e., the first user) who wants to attract viewing users needs to grasp their demands and preferences so as to motivate them to interact with the live-streaming user. In particular, a self-marketing live-streaming user needs to help viewing users understand a commodity more intuitively and comprehensively — by introducing its features, showing its details, and so on — thereby developing viewing users into potential customers.
However, the entry threshold for live-streaming users is currently low and industry norms are not yet refined; live-streaming users can broadcast without unified training and learning, so their professionalism varies widely. Some live-streaming users lack specificity and systematicness when explaining commodities during a broadcast and do not anticipate the demand information of each viewing user, so they cannot grasp viewing users' demands or provide personalized information targeted at them.
In order to solve the above problem, the server may acquire at least one user feature of each viewing user entering the live broadcast room and determine object related information matching the at least one user feature. Since the user features of each viewing user are different, the object related information obtained by matching is also different. The at least one user feature of a viewing user can represent that user's requirements, so personalized matching of object related information is realized, which solves the technical problem that it is difficult for the live user to pre-judge the requirements of viewing users in advance.
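The per-viewer matching step described above can be sketched as follows. This is a minimal illustrative sketch; all names (`UserFeatures`, `OBJECT_INFO_BY_TYPE`, `match_object_info`) are assumptions for illustration, not part of the disclosed system.

```python
# Hypothetical sketch of matching object related information to a viewer's
# user features; names and data here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserFeatures:
    user_type: str           # e.g. "personal", "platform_a"
    main_category: str = ""  # main operating category, may be empty

# Object related information pre-classified by user type (assumed data).
OBJECT_INFO_BY_TYPE = {
    "personal":   {"price": "retail price, standard discount"},
    "platform_a": {"price": "wholesale tier, A-platform margin data"},
}

def match_object_info(features: UserFeatures) -> dict:
    """Return the object related information matching this viewer's features."""
    return OBJECT_INFO_BY_TYPE.get(features.user_type, {})

viewer = UserFeatures(user_type="platform_a")
info = match_object_info(viewer)
```

Because each viewer carries different features, each second user side receives a different `info` payload, which is the personalized matching the paragraph describes.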
In practical applications, the object related information may be feature information of the associated object, evaluation information of multiple dimensions surrounding the associated object, fixed information of the associated object, transaction information of the associated object, etc., which are not limited in detail herein and may be set according to practical requirements.
104: and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
Optionally, the second user side may display the object related information in any display form in the playing interface, for example, in bullet screen form, message box form, or dynamic window form, and the display form may be set by the viewing user according to actual viewing habits. Of course, the display forms of the object related information include, but are not limited to, those described above, and any display form in the prior art may be applied to the present technical solution.
As an implementation manner, the at least one user feature may include a user type; the object related information may include transaction information of at least one transaction channel to which the associated object relates; the determining object related information matching the at least one user feature may include:
transaction information of a transaction channel matched with the user type is determined.
In practical application, the object related information is classified in advance based on different user features; for example, the user features may include user types, and the object related information may include channel transaction information. The user types may include a personal self-purchase type, an A-platform channel transaction type, a B-platform channel transaction type, a cross-border channel transaction type, an import-export channel transaction type, an entity channel transaction type, and the like.
The transaction information may include transaction amounts of the commodity in different transaction channels over a preset period, for example, the sales volume of the last 3 months; it may also include the trade prices and profit margins of different transaction channels, the level of the sellers selling the commodity, and the number and proportion of buyers of each level purchasing the commodity; it may further include price and discount information of the commodity, and the present invention is not limited thereto.
When a first user of the marketing class explains a commodity, the most common concern of viewing users is the sales data of the commodity at each transaction end, namely the transaction information. However, since the sales modes of different sales channels differ, the sales conditions and prices differ, and the resulting profit margins differ. In the process of explaining the commodity, it is difficult for the first user to attend to all viewing users, and the requirement information of each viewing user cannot be pre-judged, so transaction information may be lost or omitted during commodity explanation, and potential clients cannot be mined.
Since the order quantities and sales channels of viewing users differ by user type, the corresponding profit costs differ considerably. A self-marketing first user usually sets different prices and discounts for different sales channels and purchase quantities; for example, for a viewing user of the personal self-use purchase type, a higher price is set because the purchase quantity is small and the user is an occasional customer. Viewing users of the A-platform channel transaction type, B-platform channel transaction type, cross-border channel transaction type, import-export channel transaction type, entity channel transaction type, etc., may be long-term cooperative clients or potential long-term cooperative clients with large purchase quantities and high purchase frequency, so lower prices are set, and different discount prices are set according to the profit space of different transaction channels.
Based on the foregoing, as shown in fig. 2, the server may send the transaction data matched with the user type of the second user side to the second user side in the form of a bullet screen by acquiring the user type of the second user side, so that the watching user can obtain the transaction information in time.
Furthermore, the at least one user feature may comprise, in addition to the user type, for example a main operating category. After the server obtains the main operating category of the second user side, it may first judge whether the associated object related to the video data falls within the main operating category of the viewing user. For example, when the associated object is a clothing item and the main operating category of the viewing user is also clothing, the user type of the viewing user is then further determined; if the main operating category of the viewing user differs from the category of the associated object, the transaction information corresponding to the transaction channel with the best sales record is preferentially matched, so as to attract the viewing user's attention to the commodity and develop the viewing user into a potential client.
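The category pre-check and fallback described above can be sketched as below; the function and data names are assumptions for illustration, and the fallback criterion (highest 3-month sales) is one plausible reading of "best sales record".

```python
# Illustrative sketch: if the viewer's main operating category matches the
# associated object's category, match transaction info by user type;
# otherwise fall back to the best-selling channel. Data is assumed.
TRANSACTION_INFO = {
    "platform_a": {"sales_3m": 1200},
    "platform_b": {"sales_3m": 3400},
    "personal":   {"sales_3m": 150},
}

def match_transaction_info(object_category: str, viewer_category: str,
                           user_type: str) -> dict:
    if viewer_category == object_category:
        # Categories match: use the viewer's own transaction channel.
        return TRANSACTION_INFO.get(user_type, {})
    # Categories differ: prefer the channel with the best sales record.
    best = max(TRANSACTION_INFO, key=lambda c: TRANSACTION_INFO[c]["sales_3m"])
    return TRANSACTION_INFO[best]
```

A clothing viewer with a clothing main category gets channel-specific data; a viewer from another category sees the strongest channel's figures as an attraction.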
In the embodiment of the application, at least one user feature of the viewing user at the second user side is acquired during video playing, and object related information corresponding to the at least one user feature is matched. In practical application, the at least one user feature can represent the requirements of the viewing user, so object related information meeting those requirements is sent to each second user side according to its particular user requirements, which enhances user stickiness, further improves the viewing experience of the viewing user, develops the viewing user into a potential client, and realizes systematic product conversion.
For a live scene, the server may acquire at least one user feature of a viewing user when it is monitored that the viewing user enters the live room to watch video data, and match the corresponding object related information of the viewing user based on the at least one user feature. It will be appreciated that, in order to make the video content more specialized and systematic, different object related information may be triggered for display according to the progress of the video content.
Optionally, in some embodiments the determining object related information matching the at least one user feature may include:
and when a first preset event occurs in the video data, determining object related information matched with the at least one user characteristic.
The first preset event may be triggered by the first user, or triggered automatically based on changes in the video data during video playing.
In practical applications, the first preset event may be detected from the video data collected by the first user side, or from biological or physiological feature information generated by the first user while the video data is collected, for example sound, thermal radiation, limb actions, or expressions; it may also be photoelectric, acoustic, or magnetic field information generated by other electronic devices (for example, a remote control device); or sensing devices configured at the first user side (for example, laser sensors or touch sensors) may detect and obtain one or more combinations of sensing data in combination with the information output by the electronic devices or the first user. This may be set according to practical requirements.
Before determining the object related information matched with the at least one user feature when the first preset event occurs in the video data, the method may further include:
and establishing an association relation between the first preset event and the object related information.
In practical application, the server may pre-establish the association relationship between the first preset event and the object related information according to the requirements of the first user. For example, an association relationship between a preset keyword (or phrase) and object related information may be established in advance; by performing voice recognition on the voice information in the video data, the server determines the object related information associated with the preset keyword when the keyword is identified. Of course, an association relationship between the associated object and the object related information may also be pre-established: the server monitors whether the associated object appears in the video data through image recognition and, when it appears, determines the associated object related information. Optionally, an association relationship between predetermined sensing data and object related information may be pre-established, and the associated object related information determined based on the acquired predetermined sensing data.
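A minimal registry of such association relationships might look like the following; the class and method names are illustrative assumptions, and the `lookup_transcript` step stands in for the output of an unspecified speech-recognition stage.

```python
# Assumed-name sketch of pre-established associations between first preset
# events (keywords or recognized objects) and object related information.
class EventAssociations:
    def __init__(self):
        self._by_keyword = {}
        self._by_object = {}

    def bind_keyword(self, keyword: str, info: dict):
        self._by_keyword[keyword] = info

    def bind_object(self, object_id: str, info: dict):
        self._by_object[object_id] = info

    def lookup_transcript(self, transcript: str):
        """Return info for the first preset keyword found in the transcript."""
        for kw, info in self._by_keyword.items():
            if kw in transcript:
                return info
        return None

assoc = EventAssociations()
assoc.bind_keyword("sales", {"kind": "transaction"})
hit = assoc.lookup_transcript("the sales of this item are good")
```

The same table can be populated from image recognition (`bind_object`) or sensing data, so the trigger source is interchangeable while the lookup stays uniform.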
In practical application, the object related information may be pre-generated by the first user based on the video content and sent to the server for storage, and the server establishes an association relationship between the object related information and the first preset event. Further, in order to reduce the workload of the first user, the server may pre-determine an associated object related to the video data, for example, collect information such as an object identifier or an object information code of the associated object, and obtain, based on the object identifier or the object information code, object related information of the associated object from at least one transaction platform or other authorized collaboration platform that cooperates with the video recording and playing platform.
Before determining the object related information matched with the at least one user feature when the first preset event occurs in the video data, the method may further include:
classifying the object related information according to at least one user characteristic to obtain at least one object related sub-information;
the determining object related information matching the at least one user feature may include:
object related sub-information matching the at least one user feature is determined.
In practical applications, taking the object related information as transaction information as an example, the classifying the object related information according to at least one user feature, and obtaining at least one object related sub-information may include:
And classifying the transaction information based on the user type to obtain transaction sub-information corresponding to at least one transaction channel.
When the at least one user feature obtained by the server is a user type, it may first be determined that the object related information corresponding to the user type is transaction information; then, when the user type is the A-platform channel transaction type, the transaction information corresponding to the A-platform channel is obtained by matching.
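The classification of transaction information into per-channel sub-information described in the steps above can be sketched as follows; the record layout and field names are assumptions for illustration.

```python
# Hedged sketch: split flat transaction records into per-channel
# "object related sub-information", so each user type can later be
# matched to its own channel's records. Data is illustrative.
records = [
    {"channel": "platform_a", "price": 9.5},
    {"channel": "platform_b", "price": 8.9},
    {"channel": "platform_a", "price": 9.1},
]

def classify_by_channel(recs):
    sub_info = {}
    for r in recs:
        sub_info.setdefault(r["channel"], []).append(r)
    return sub_info

sub = classify_by_channel(records)
a_info = sub["platform_a"]  # sub-information for the A-platform channel
```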
As one implementation, the video data may include voice information; the identifying, when a first preset event occurs in the video data, determining object related information matched with the at least one user feature may include:
identifying first preset voice information in the voice information;
determining object related information associated with the first predetermined voice information;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
For example, if the first user says "the sales are good" during video recording, the object related information associated with "sales" is determined to be the transaction information by recognizing the voice information. Further, at least one user feature of each second user side is acquired, and the corresponding transaction sub-information is obtained by matching based on the user type corresponding to each second user side.
As an implementation manner, the video data includes sensing data acquired by a sensing component based on a first user when the first user side outputs a preset gesture; the identifying, when a first preset event occurs in the video data, determining object related information matched with the at least one user feature may include:
identifying first predetermined ones of the sensed data;
determining object related information associated with the first predetermined sensed data;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
In practical application, the first user can set the association relationship between the sensing data corresponding to a preset gesture and different object related information according to the first user's own habits. The first predetermined sensing data is not limited to sensing data collected by the sensing component based on a preset gesture of the first user; it may also be sensing data collected from facial expressions, head movements, or touch and press actions of the hands or feet on the sensing component. The sensing component can be arranged at any position the first user can reach during video recording and broadcasting, or at any position that can collect the user's gestures, facial expressions, and head movements; of course, it can be understood that the sensing component can also be arranged in terminal equipment such as a mobile phone or a computer. The sensing component can be directly connected with the server, or connected with the first user side so that the first predetermined sensing data acquired by the sensing component is sent to the server through the first user side. This can be arranged according to actual requirements.
As an implementation manner, the identifying the object related information matched with the at least one user feature when the first preset event occurs in the video data may include:
identifying associated objects in the video data;
determining object related information associated with the associated object;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
After the video data is sent to the server by the first user side, the server may identify an associated object preset by the first user through image recognition. Of course, the object related information associated with the associated object may also be determined by identifying only a partial area of the associated object or an object identifier of the associated object; for example, the associated object may be obtained based on feature identification of a partial area of the associated object, or the associated object in the video data may be determined by identifying information such as a two-dimensional code or bar code of the associated object in the obtained video data, which is not limited herein.
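The identifier-based branch above reduces to a catalog lookup once a code has been decoded from a frame; the catalog contents and function name below are illustrative assumptions, and the decoding step itself is out of scope here.

```python
# Assumed-name sketch: resolve an associated object's related information
# from an object identifier (e.g. a decoded bar code or QR code).
OBJECT_CATALOG = {
    "6901234567890": {"name": "down jacket",
                      "info": {"detail": "fabric and size chart"}},
}

def resolve_object(decoded_code: str):
    """Map a decoded code to the associated object's related information."""
    entry = OBJECT_CATALOG.get(decoded_code)
    return entry["info"] if entry else None
```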
The foregoing examples provide a way to trigger matching of object related information based on at least one user feature by using voice information, video information, gesture and other sensing information of a first user in video data, where the first preset event includes, but is not limited to, one or more combinations of the foregoing, and the first preset event may be specifically set based on actual requirements, which is not limited herein specifically.
The sending the object related information to the second user side, so that the second user side outputs the object related information in a playing interface may include:
and sending the object related sub-information matched with the at least one user characteristic in the object related information to the second user side so that the second user side can output the object related sub-information in a playing interface.
After the sending the object related information to the second user side, the method may further include:
and controlling the second user terminal to display the object related information in the playing interface according to a preset display form.
Alternatively, the object related information is not limited to text information, and may be picture information such as a three-dimensional space diagram or a dynamic diagram, or information in the form of video, voice, address links, etc., which is not particularly limited herein. In practical application, the server may control the second user side to display the information in any form such as a bullet screen, widget, pop-up window, or dynamic diagram in a preset area of the playing interface. It should be noted that when the object related information is displayed in bullet screen form at the second user side, its display area and display form need to be distinguished from ordinary bullet screen displays in the video data, such as the user message bullet screens of the second user side, so that the viewing user does not encounter reading difficulty through being unable to distinguish them.
It can be understood that the server may control the second user side to play the object related information in the playing interface sequentially and cyclically from bottom to top or from right to left, or move it among a plurality of random or preset positions of the playing interface, stopping at each for a preset time before disappearing, so that when the amount of object related information is large it is played cyclically multiple times, improving the viewing experience of the viewing user. Meanwhile, in order to ensure that the viewing user can read the object related information within an effective time and obtain more information in a short time, the server needs to set an appropriate playing speed according to the data size of the object related information; for example, the display duration, display speed, etc. in the playing interface can be set according to actual requirements.
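Scaling the display duration with the data size, as the paragraph above requires, might be computed as in this sketch; the reading-speed constant and the minimum floor are assumed values, not parameters from the disclosure.

```python
# Illustrative calculation (assumed reading speed of 8 chars/s, 3 s floor)
# of how long a piece of object related information stays on screen, so
# longer texts are displayed longer.
def display_seconds(text: str, chars_per_second: float = 8.0,
                    minimum: float = 3.0) -> float:
    """Scale display time with the amount of information, with a floor."""
    return max(minimum, len(text) / chars_per_second)
```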
In practical application, in order to further improve the viewing experience of the viewing user, the viewing user's own request may also trigger the server to send the object related information to the second user side. As an implementation manner, before determining the object related information matched with the at least one user feature, the method may further include:
receiving a display request for the object related information sent by the second user side; the display request is generated based on a preset trigger operation of a watching user of the second user side.
It can be understood that the viewing user can also control the display form of the object related information through the second user side. The viewing user may generate a display request for the information related to an associated object in the video data by triggering its display area, for example with a single click, double click, or touch press.
Optionally, the server may send the object related information to the second user side in the form of a list, and the viewing user may choose whether to trigger the list to obtain more detailed or enriched object related information. In addition, a display control for the object related information can be set at the second user side: the viewing user generates a display request for the object related information by opening the display control, so that the object related information is displayed in the playing interface, and after reading or obtaining the effective information, controls the second user side to stop playing the object related information by closing the display control. Further, the viewing user can set the display form or display speed of the object related information through the display control, so as to adapt to the viewing requirements of different viewing users.
In a live network scene, some viewing users may enter the live broadcast room after the live broadcast has been underway for a period of time, so some effective information is not obtained by those viewing users, or some effective information is forgotten because the live broadcast is too long. At this time, viewing users can ask questions in the form of messages or bullet screens. Although the first user, i.e. the live user, answers some questions after seeing the bullet screens, when the amount of bullet screen or message information is large, the live user cannot process all of them in time and may ignore some questions, so the demands of some viewing users are not captured in time by the live user; repeatedly answering the same questions also wastes the live user's time and increases the workload. In order to improve the experience of viewing users, capture the requirements of each viewing user in time, and at the same time reduce the workload of the live user, the server obtains the actual requirements of each viewing user through the bullet screen information sent by the second user side. Optionally, the method may further include:
Receiving bullet screen information sent by the second user side and identifying preset content in the bullet screen information;
the object related information includes at least one object related sub-information; the determining object related information matching the at least one user feature based on the preset content may include:
determining object related information associated with the preset content;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
In practical application, the bullet screen information may be text information. The first user may preset keywords (or phrases) as preset content according to the video content and set the association relationship between the preset content and the object related information; after the server obtains the bullet screen information sent by the second user side, the object related information associated with the preset content can be determined when the preset content is identified in the bullet screen information. In order to realize personalized matching of the object related information, the object related sub-information matching the viewing user is further obtained by matching based on at least one user feature corresponding to the second user side that sent the bullet screen information.
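The two steps above (identify preset content in a bullet-screen message, then narrow to the sub-information matching the sender's user feature) can be combined as in this sketch; all table contents and names are illustrative assumptions.

```python
# Assumed-name sketch: detect preset content in a viewer's bullet-screen
# message, then return the object related sub-information matching that
# viewer's user type.
PRESET_CONTENT = {"size": "sizing", "ship": "logistics"}
SUB_INFO = {
    ("sizing", "personal"):   "size chart for retail buyers",
    ("sizing", "platform_a"): "bulk size-ratio recommendation",
}

def answer_barrage(message: str, user_type: str):
    for word, topic in PRESET_CONTENT.items():
        if word in message:
            return SUB_INFO.get((topic, user_type))
    return None
```

Two viewers asking the same "size" question thus receive different answers, which is the personalized matching the paragraph describes.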
In the embodiment of the present application, multiple implementation forms are provided for triggering the matching of object related information based on at least one user feature: the matching may be triggered by a first preset event generated at the first user side, or by a display request generated by the second user side based on a preset trigger operation of the viewing user.
Fig. 3 is a flowchart of an embodiment of a data processing method provided in the embodiment of the present application, where the technical solution of the present embodiment may be executed by a server, and the method may include the following steps:
301: and acquiring video data uploaded by the first user terminal, and sending the video data to the second user terminal so that the second user terminal plays the video data on a playing interface.
302: and determining the target part of the associated object related to the video data based on the video data.
In the live broadcast process, the first user may not have made systematic and sufficient preparation in advance, and in the recorded video explanation some important content may be omitted for various reasons; or, due to problems such as an overly fast speech rate or a heavy accent, some viewing users cannot acquire effective information in time, which affects their viewing experience.
303: and generating a prompt identifier corresponding to the target part.
In order to improve the user viewing experience and further enhance user stickiness, when the first user explains any associated object, the target part of the associated object is identified and a prompt identifier for the target part is generated, so that dynamic positioning based on the video content is realized and the viewing user can better understand the live content based on the video data and the prompt information.
For example, when introducing a clothing item, a new-retail first user explains different parts of the item one by one, such as the designs of the waist, wrist, shoulder, and neckline, as well as the fabric, warmth, material, and upper-body effect of the clothing. However, a viewing user who enters late or does not hear the first user's explanation may fail to obtain effective information; by prompting the viewing user with the prompt identifier in real time based on the playing progress of the video content, the viewing requirement of obtaining effective information in time can be met, the viewing experience of the viewing user is improved, and user stickiness is enhanced.
304: and sending the prompt identifier to the second user side so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface.
In practical application, in order to achieve an effective and intuitive prompt for the viewing user, the prompt identifier can be output and displayed in the target display area where the target part of the associated object is located. Of course, the embodiment of the present application is not limited to displaying it in the target display area where the target part is located; it may be displayed at any position in the live broadcast interface, where the target part may be connected to the prompt identifier through a connector or indicator, so as to clearly and intuitively show the viewing user which target part of the associated object corresponds to the prompt identifier.
Optionally, in some embodiments, the generating the hint identifier corresponding to the target site may include:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
the sending the prompt identifier to the second user side, so that the second user side outputs the prompt identifier on the playing interface includes:
and sending the prompt pattern to the second user side so that the second user side outputs the prompt pattern in a target display area where the target part is located in the playing interface.
As shown in fig. 4, the associated object is a garment tried on by a live user, the target part is the waist region of the garment, and the prompting pattern is a shadow pattern T generated based on the detected size of the target display region of the waist region in the playing interface; the shadow pattern T is output in the target display region. Meanwhile, a clothing dimension information display control can further be provided: when the viewing user triggers the display control corresponding to the material dimension information, material bullet screen information related to the clothing is displayed in the playing interface.
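Generating a prompt pattern the same size as the target display area, as in the waist-region example above, might look like this sketch; the coordinate convention (x, y, width, height in interface pixels) and the field names are assumptions for illustration.

```python
# Minimal sketch of generating a prompt pattern matching the target
# display area of a target part; coordinate convention is assumed.
def make_prompt_pattern(target_area: tuple, style: str = "shadow") -> dict:
    """Build a pattern descriptor sized exactly to the target area."""
    x, y, w, h = target_area
    return {"x": x, "y": y, "width": w, "height": h, "style": style}

# e.g. the waist region detected at (120, 300), 80 px wide, 60 px tall
pattern = make_prompt_pattern((120, 300, 80, 60))
```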
It will be appreciated that the alert pattern described in the embodiments of the present application includes, but is not limited to, the shadow pattern shown in fig. 4, and may be any pattern and shape, and may be set according to practical requirements.
In practical application, the prompt identifier may be not only a prompt pattern, but also an animation, text information or other forms of prompt identifier, which is not limited herein specifically.
As an optional implementation manner, the determining, based on the video data, a target location of an associated object related to the video data may include:
and when a second preset event occurs in the video data, determining a target part associated with the second preset event in an associated object related to the video data.
In practical applications, the second preset event may, as described above, be detected from the video data collected by the first user side, or be biological or physiological feature information output by the first user, for example sound, thermal radiation, limb motion, or expressions; it may also be photoelectric, acoustic, or magnetic field information generated by other electronic devices (for example, a remote control device); or sensing devices configured at the first user side (for example, laser sensors or touch pressure sensors) may detect and obtain a combination of one or more kinds of sensing data in combination with the information output by the electronic devices or the first user, which may be set according to actual requirements and is not described again here.
In practical application, the server establishes the association relationship between the second preset event and the target part of the associated object in advance. When the first user triggers the second preset event, the server can determine, according to the association relationship, the target part corresponding to the second preset event triggered by the first user. For example, when the first user says "waist", the server determines the waist of the associated object in the video data as the target part through voice recognition; of course, the server may also determine the target part based on a gesture of the first user, for example, when the first user points to his or her own waist or to the waist of the associated object, the position pointed at is determined as the target part of the associated object; the server may also determine preset target parts of the associated object simply by identifying the associated object in the video data. When the preset target parts include several parts, the server can also generate the prompt identifiers corresponding to the target parts at the same time and output them in the target display areas corresponding to the respective target parts.
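The pre-established event-to-part association above reduces to a small lookup table; the event kinds, keys, and fallback behavior in this sketch are illustrative assumptions.

```python
# Hedged sketch: a spoken keyword or a pointing gesture (the second preset
# event) resolves to a target part of the associated object; if no event
# matches, fall back to the object's preset default part.
EVENT_TO_PART = {
    ("speech", "waist"):    "waist",
    ("gesture", "point_l"): "wrist",
}

def target_part_for(event_kind: str, event_value: str,
                    default_parts=("whole",)):
    part = EVENT_TO_PART.get((event_kind, event_value))
    # No specific event matched: use the object's preset part(s).
    return part if part else default_parts[0]
```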
In order to further improve the viewing experience of the viewing user, when the second preset event occurs in the video data, determining the target part associated with the second preset event in the associated object related to the video data may further include:
determining object related information matched with the target part;
and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
In practical applications, the target part may be a part of the associated object, or may refer to the associated object itself, i.e. the whole associated object is taken as the target part, and association relationships with the object related information are established for the different target parts. Different associated objects have different dimension labels; for clothing commodities, the labels of multiple dimensions may include style, upper-body effect, process, material, performance, place of production, etc., so the association relationship between the target part and the object related information can be established for multiple dimensions of the associated object. Meanwhile, the object related information may further include evaluation information of multiple dimensions of the associated object, such as industry evaluation information, professional review information, and buyer evaluation information, and may further include fixed information of the associated object provided by the manufacturer or place of origin, such as official commodity detail information, version numbers, and product series, which are not particularly limited herein.
After the prompt identifier is sent to the second user side, the method may further include:
receiving a display request sent by the second user side and aiming at the object related information associated with the target part; the display request is generated based on a preset trigger operation of a watching user of the second user side;
determining object related information associated with the target part;
and sending the object related information to the second user side so that the second user side can output the object related information in a playing interface.
As an optional implementation manner, after the target part of the associated object is determined, the object related information determined to match the target part may be directly sent to the second user side for display, where the display manner may be the same as that described for the object related information in the embodiment of fig. 1, and is not repeated here.
Optionally, the watching user of the second user side may trigger the server side to display the object related information of the target part. Specifically, the watching user may trigger the prompt identifier output in the playing interface to generate an object related information display instruction, or display controls for different categories of object related information may be set in the playing interface of the second user side, where the object related information may be classified into categories such as detail information, review information, transaction information, and the like. When the watching user triggers the display control corresponding to any category, a display request for the object related information of that category matched with the target part is generated and sent to the server.
In the embodiment of the application, the target part related to the associated object in the video data is identified, and the prompt identifier corresponding to the target part is generated, so that the watching user is prompted in real time and dynamically based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the watching user can obtain more effective information in time. This further improves the viewing experience of the watching user, enhances user stickiness, develops the watching user into a potential client, and promotes commercialization of the associated object.
Fig. 5 is a flowchart of an embodiment of an information display method provided in the embodiment of the present application, where the technical solution of the present embodiment may be executed by a user side, and the method may include the following steps:
501: and receiving video data sent by the server.
502: and playing the video data on a playing interface.
503: and receiving the object related information sent by the server.
The object related information is obtained by the server through matching based on at least one user feature of the second user side.
504: and outputting the object related information in the playing interface.
As an implementation manner, before receiving the object related information sent by the server, the method may further include:
Generating a display request for the object related information based on a preset trigger operation of the watching user;
and sending the display request to the server.
The display interface comprises at least one preset display control; the generating the object related information display request based on the preset trigger operation of the viewing user may include:
detecting a preset trigger operation of the watching user aiming at any preset display control;
and generating a display request aiming at the object related information associated with any one preset display control based on the preset trigger operation.
As an implementation manner, before receiving the object related information sent by the server, the method may further include:
and acquiring bullet screen information input by the watching user and sending the bullet screen information to the server side so that the server side can identify preset contents in the bullet screen information and determine object related information matched with the at least one user characteristic based on the preset contents.
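The barrage flow above can be sketched as follows; the preset content list, user features, and sub-information table are all hypothetical stand-ins for whatever the server side actually maintains:

```python
# Server-side sketch: scan barrage (bullet screen) text for preset content,
# then use it together with a user feature to select matching object related
# sub-information. All data below is illustrative.
PRESET_CONTENT = {"price", "size", "material"}

SUB_INFO = {
    ("price", "new_user"): "new-user discount price",
    ("price", "member"): "member price",
    ("size", "new_user"): "size chart",
}

def match_sub_info(barrage_text, user_feature):
    """Identify preset content in barrage text and match sub-information."""
    for content in PRESET_CONTENT:
        if content in barrage_text.lower():
            return SUB_INFO.get((content, user_feature))
    return None
```

A barrage message mentioning "price" from a member-type user would thus be answered with the member price, while unrelated chatter produces no match.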
In practical applications, the object related information includes at least one object related sub-information, and the receiving the object related information sent by the server may include:
Receiving object related sub-information sent by the server; the object related sub-information is obtained by the server from the object related information based on at least one user characteristic matching of the second user side.
The outputting the object related information in the playing interface may include:
and outputting the object related sub-information in the playing interface.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user feature of the watching user of the second user side is matched. In practical application, the at least one user feature can represent the user requirements of the watching user, so that object related information meeting those requirements is sent to each second user side according to its user requirements, which enhances user stickiness, further improves the viewing experience of the watching user, develops the watching user into a potential client, and achieves systematic product conversion.
Fig. 6 is a flowchart of an embodiment of an information display method provided in the embodiment of the present application, where the technical solution of the present embodiment may be executed by a user side, and the method may include the following steps:
601: and receiving video data sent by the server.
602: and outputting the video data at a playing interface.
603: and receiving the prompt identification sent by the server.
Wherein the prompt identifier is generated by the server for a target part of an associated object related to the video data; the target part of the associated object is determined based on the video data.
604: and outputting the prompt identifier in a target display area where the target part is positioned in the playing interface.
The outputting the prompt identifier in the target display area where the target portion is located in the playing interface may include:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
The prompt identification comprises a prompt pattern; the outputting the prompt identifier in the target display area where the target portion is located in the playing interface may include:
outputting the prompt pattern in the target display area where the target part is located in the playing interface; the prompt pattern is generated by the server side based on the size of the target display area.
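A minimal sketch of generating a prompt pattern sized to the target display area might look like the following; the (x, y, width, height) pixel rectangle and the padding parameter are assumptions for illustration, not part of the described method:

```python
# Hypothetical sketch: build a rectangular highlight pattern whose size
# matches the target display area (e.g. an outline around the target part).
def make_prompt_pattern(target_area, padding=4):
    """Return a rect pattern sized to target_area = (x, y, width, height) in pixels."""
    x, y, w, h = target_area
    return {
        "shape": "rect",
        "x": x - padding,
        "y": y - padding,
        "width": w + 2 * padding,
        "height": h + 2 * padding,
    }

# An 80x60 target area yields an 88x68 pattern with the assumed 4px padding.
pattern = make_prompt_pattern((120, 200, 80, 60))
```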
As an implementation manner, the method may further include:
receiving object related information matched with the target part and sent by the server;
and outputting the object related information in the playing interface.
As an implementation manner, before receiving the object related information matched with the target part sent by the server side, the method may further include:
generating an object related information display request related to the target part based on the preset trigger operation of the watching user;
and sending the display request to the server.
In practical application, the object related information includes at least one object related sub-information, and the receiving the object related information that is sent by the server and matches the target part may include:
receiving object related sub-information matched with the target part and sent by the server; the object related sub-information is obtained by the server from the object related information through matching based on the target part.
The outputting the object related information in the playing interface may include:
and outputting the object related sub-information in the playing interface.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, the target part related to the associated object in the video data is identified, and the prompt identifier corresponding to the target part is generated, so that the watching user is prompted in real time and dynamically based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the watching user can obtain more effective information in time. This further improves the viewing experience of the watching user, enhances user stickiness, develops the watching user into a potential client, and promotes commercialization of the associated object.
Fig. 7 is a schematic structural diagram of an embodiment of a data processing apparatus provided in the embodiment of the present application, where the technical solution of the present embodiment may be executed by a server, and the apparatus may include:
a first obtaining module 701, configured to obtain video data uploaded by a first user.
The first sending module 702 is configured to send the video data to a second user side, so that the second user side plays the video data on a playing interface.
A second obtaining module 703, configured to obtain at least one user characteristic of the second user terminal.
A first determining module 704 is configured to determine object related information that matches the at least one user feature.
Wherein the object related information is related data of related objects related to the video data.
And the second sending module 705 is configured to send the object related information to the second user side, so that the second user side outputs the object related information in the playing interface.
As an implementation manner, the at least one user feature may include a user type; the object related information may include transaction information of at least one transaction channel to which the associated object relates; the first determining module 704 may specifically be configured to:
transaction information of a transaction channel matched with the user type is determined.
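A minimal sketch of such channel matching, with hypothetical user types, channel names, and transaction fields (none of which are specified by the method itself):

```python
# Illustrative channel matching for the first determining module: transaction
# information is keyed by transaction channel, and the user type selects the
# channel. All names and values are assumptions.
CHANNEL_BY_USER_TYPE = {
    "retail": "online_store",
    "wholesale": "bulk_channel",
}

TRANSACTION_INFO = {
    "online_store": {"price": 19.9, "link": "store://item/123"},
    "bulk_channel": {"price": 12.5, "min_qty": 100},
}

def transaction_info_for(user_type):
    """Determine transaction information of the channel matching the user type."""
    channel = CHANNEL_BY_USER_TYPE.get(user_type)
    return TRANSACTION_INFO.get(channel)
```

A wholesale-type user is thus shown the bulk channel's transaction information, while an unrecognized user type matches no channel.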
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user feature of the watching user of the second user side is matched. In practical application, the at least one user feature can represent the user requirements of the watching user, so that object related information meeting those requirements is sent to each second user side according to its user requirements, which enhances user stickiness, further improves the viewing experience of the watching user, develops the watching user into a potential client, and achieves systematic product conversion.
Optionally, in some embodiments, the first determining module 704 may specifically be configured to:
and when a first preset event occurs in the video data, determining object related information matched with the at least one user characteristic.
Before the object related information matched with the at least one user feature is determined upon identifying that the first preset event occurs in the video data, the method may further include:
and establishing an association relation between the first preset event and the object related information.
Before the object related information matched with the at least one user feature is determined upon identifying that the first preset event occurs in the video data, the method may further include:
classifying the object related information according to at least one user characteristic to obtain at least one object related sub-information;
the determining object related information matching the at least one user feature may include:
object related sub-information matching the at least one user feature is determined.
As one implementation, the video data may include voice information; when the first preset event occurs in the video data, the determining of the object related information matched with the at least one user feature may specifically include:
Identifying first preset voice information in the voice information;
determining object related information associated with the first predetermined voice information;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
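The three steps above can be sketched as a small pipeline; the trigger phrase, information keys, and user features are illustrative assumptions rather than the method's actual vocabulary:

```python
# Hypothetical first-preset-event pipeline: recognize a first predetermined
# phrase in the speech transcript, look up the associated object related
# information, then filter to the sub-information matching a user feature.
TRIGGER_PHRASES = {"buy it now": "purchase_info"}

OBJECT_INFO = {
    "purchase_info": {
        "domestic": "domestic checkout link",
        "overseas": "cross-border checkout link",
    }
}

def handle_voice(transcript, user_feature):
    for phrase, info_key in TRIGGER_PHRASES.items():
        if phrase in transcript.lower():       # step 1: identify the phrase
            info = OBJECT_INFO[info_key]       # step 2: associated information
            return info.get(user_feature)      # step 3: match sub-information
    return None
```

A transcript containing the trigger phrase yields the sub-information for the user's feature; other speech yields nothing.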
As an implementation manner, the video data includes sensing data acquired by a sensing component based on a preset gesture output by the first user at the first user side; when the first preset event occurs in the video data, the determining of the object related information matched with the at least one user feature may specifically include:
identifying first predetermined sensed data in the sensed data;
determining object related information associated with the first predetermined sensed data;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
As an implementation manner, when the first preset event occurs in the video data, the determining of the object related information matched with the at least one user feature may specifically include:
identifying associated objects in the video data;
determining object related information associated with the associated object;
And determining object related sub-information matched with the at least one user characteristic in the object related information.
As an implementation manner, the second sending module 705 may specifically be used for:
and sending the object related sub-information matched with the at least one user characteristic in the object related information to the second user side so that the second user side can output the object related sub-information in a playing interface.
After the object related information is sent to the second user side, the method may further include:
and controlling the second user terminal to display the object related information in the playing interface according to a preset display form.
As an implementation manner, before the operation of the first determining module 704, the apparatus may further include:
the first display request receiving module is used for receiving a display request for the object related information sent by the second user side; the display request is generated based on a preset trigger operation of a watching user of the second user side.
Optionally, the apparatus may further include:
and the barrage information receiving module is used for receiving barrage information sent by the second user terminal.
And the preset content identification module is used for identifying preset contents in the barrage information.
The object related information includes at least one object related sub-information; the determining, based on the preset content, of the object related information matched with the at least one user feature may specifically include:
determining object related information associated with the preset content;
and determining object related sub-information matched with the at least one user characteristic in the object related information.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the present application, a plurality of implementation forms are provided for triggering the obtaining of object related information matched based on at least one user feature; the obtaining may be triggered by a first preset event generated at the first user side, or by a display request generated by the second user side based on a preset trigger operation of the watching user.
Fig. 8 is a schematic structural diagram of an embodiment of a data processing apparatus provided in the embodiment of the present application, where the technical solution of the present embodiment may be executed by a server, and the apparatus may include:
the video data obtaining module 801 is configured to obtain video data uploaded by the first user.
The video data sending module 802 is configured to send the video data to a second user side, so that the second user side plays the video data on a playing interface.
A second determining module 803 is configured to determine, based on the video data, a target part of an associated object related to the video data.
The prompt identifier generating module 804 is configured to generate a prompt identifier corresponding to the target location.
And a prompt identifier sending module 805, configured to send the prompt identifier to the second user side, so that the second user side outputs the prompt identifier in the target display area where the target part is located in the playing interface.
Optionally, in some embodiments, the hint-identifier generating module 804 may be specifically configured to:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
The hint-identifier sending module 805 may specifically be configured to:
and sending the prompt pattern to the second user side so that the second user side outputs the prompt pattern in a target display area where the target part is located in the playing interface.
As an alternative embodiment, the second determining module 803 may specifically be configured to:
and when a second preset event occurs in the video data, determining a target part associated with the second preset event in an associated object related to the video data.
In order to further improve the viewing experience of the viewing user, the apparatus may further include:
the matching module is used for determining object related information matched with the target part;
and the information sending module is used for sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
Optionally, the apparatus may further include:
the second display request receiving module is used for receiving a display request which is sent by the second user side and is aimed at the object related information associated with the target part; the display request is generated based on a preset trigger operation of a watching user of the second user side;
The information determining module is used for determining object related information associated with the target part;
and the information sending module is used for sending the object related information to the second user side so that the second user side can output the object related information in a playing interface.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, the target part related to the associated object in the video data is identified, and the prompt identifier corresponding to the target part is generated, so that the watching user is prompted in real time and dynamically based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the watching user can obtain more effective information in time. This further improves the viewing experience of the watching user, enhances user stickiness, develops the watching user into a potential client, and promotes commercialization of the associated object.
Fig. 9 is a schematic structural diagram of an embodiment of an information display device provided in the embodiment of the present application, where the technical solution of the embodiment may be executed by a user side, and the device may include:
the first receiving module 901 is configured to receive video data sent by a server.
A first playing module 902, configured to play the video data on a playing interface.
The second receiving module 903 is configured to receive the object related information sent by the server.
The object related information is obtained by the server based on at least one user characteristic matching of the second user terminal.
And the first output module 904 is configured to output the object related information in the playing interface.
As an implementation manner, before the operation of the second receiving module 903, the apparatus may further include:
the first display request generation module is used for generating a display request aiming at the object related information based on a preset trigger operation of the watching user;
and the first display request sending module is used for sending the display request to the server.
The display interface comprises at least one preset display control; the first display request generation module may specifically be configured to:
detecting a preset trigger operation of the watching user aiming at any preset display control;
and generating a display request aiming at the object related information associated with any one preset display control based on the preset trigger operation.
As an implementation manner, before the operation of the second receiving module 903, the apparatus may further include:
And the barrage information sending module is used for acquiring barrage information input by the watching user and sending the barrage information to the server side so that the server side can identify preset content in the barrage information and determine object related information matched with the at least one user characteristic based on the preset content.
In practical applications, the object related information includes at least one object related sub-information, and the second receiving module 903 may specifically be configured to:
receiving object related sub-information sent by the server; the object related sub-information is obtained by the server from the object related information based on at least one user characteristic matching of the second user side.
The first output module 904 may be specifically configured to:
and outputting the object related sub-information in the playing interface.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, during video playing, object related information corresponding to at least one user feature of the watching user of the second user side is matched. In practical application, the at least one user feature can represent the user requirements of the watching user, so that object related information meeting those requirements is sent to each second user side according to its user requirements, which enhances user stickiness, further improves the viewing experience of the watching user, develops the watching user into a potential client, and achieves systematic product conversion.
Fig. 10 is a schematic structural diagram of an embodiment of an information display device provided in the embodiment of the present application, where the technical solution of the embodiment may be executed by a user side, and the device may include:
the third receiving module 1001 is configured to receive video data sent by the server.
And a second playing module 1002, configured to output the video data at a playing interface.
And a fourth receiving module 1003, configured to receive the prompt identifier sent by the server.
Wherein the prompt identifier is generated by the server for a target part of an associated object related to the video data; the target part of the associated object is determined based on the video data.
And a second output module 1004, configured to output the prompt identifier in the target display area where the target part is located in the playing interface.
The second output module 1004 may specifically be configured to:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
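On the client side, determining the target display area typically reduces to mapping the target part's position in the video frame to pixels of the playing interface. The sketch below assumes the server sends a normalized (x, y, w, h) box in the range 0 to 1, which is an assumed wire format for illustration, not one specified by the method:

```python
# Hypothetical client-side mapping of a normalized target-part box to the
# playing interface's pixel coordinates, where the prompt identifier is drawn.
def target_display_area(norm_box, view_w, view_h):
    """Convert a normalized (x, y, w, h) box to interface pixels."""
    nx, ny, nw, nh = norm_box
    return (round(nx * view_w), round(ny * view_h),
            round(nw * view_w), round(nh * view_h))

# A box at (0.25, 0.5) covering 10% x 20% of a 1080x1920 portrait interface.
area = target_display_area((0.25, 0.5, 0.1, 0.2), view_w=1080, view_h=1920)
```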
The prompt identification comprises a prompt pattern; the second output module 1004 may specifically be configured to:
outputting the prompt pattern in the target display area where the target part is located in the playing interface; the prompt pattern is generated by the server side based on the size of the target display area.
As an implementation manner, the apparatus may further include:
the object related information receiving module is used for receiving object related information matched with the target part and sent by the server side;
and the object related information output module is used for outputting the object related information in the playing interface.
As an implementation manner, before the operation of the object related information receiving module, the apparatus may further include:
the second display request generation module is used for generating a display request aiming at the object related information associated with the target part based on the preset trigger operation of the watching user;
and the second display request sending module is used for sending the display request to the server.
In practical application, the object related information includes at least one object related sub-information, and the object related information receiving module may specifically be configured to:
receiving object related sub-information matched with the target part and sent by the server; the object related sub-information is obtained by the server from the object related information based on the target part matching.
The object related information output module may specifically be configured to:
And outputting the object related sub-information in the playing interface.
The implementation details have been described in the foregoing embodiments of the present application and are not repeated herein.
In the embodiment of the application, the target part related to the associated object in the video data is identified, and the prompt identifier corresponding to the target part is generated, so that the watching user is prompted in real time and dynamically based on the video content of the first user; meanwhile, by displaying the object related information corresponding to the target part, the watching user can obtain more effective information in time. This further improves the viewing experience of the watching user, enhances user stickiness, develops the watching user into a potential client, and promotes commercialization of the associated object.
Fig. 11 is a schematic structural diagram of one embodiment of a server provided in the embodiments of the present application, where the server may include a processing component 1101 and a storage component 1102.
The storage component 1102 is configured to store one or more computer instructions; the one or more computer instructions are for execution by the processing component 1101.
The processing component 1101 may be configured to:
acquiring video data sent by a first user side, and sending the video data to a second user side so that the second user side plays the video data on a playing interface;
Acquiring at least one user characteristic of the second user side;
determining object related information matching the at least one user feature; wherein the object related information is related data of related objects related to the video data;
and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
Wherein the processing component 1101 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 1102 is configured to store various types of data to support operations in the server. The storage component may be implemented by any type of volatile or nonvolatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Of course, the server may also include other components as needed, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, such as communication with a terminal.
The embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program when executed by a computer may implement the data processing method of the embodiment shown in fig. 1.
Fig. 12 is a schematic structural diagram of one embodiment of a server provided in the embodiments of the present application, where the server may include a processing component 1201 and a storage component 1202.
The storage component 1202 is for storing one or more computer instructions; the one or more computer instructions are configured to be invoked for execution by the processing assembly 1201.
The processing assembly 1201 may be configured to:
acquiring video data uploaded by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
Determining a target part of an associated object related to the video data based on the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user side so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface.
Wherein the processing assembly 1201 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 1202 is configured to store various types of data to support operations in the server. The storage component may be implemented by any type of volatile or nonvolatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
Of course, the server may naturally also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate communication between the server and other devices, either wired or wireless, such as communication with a terminal.
The embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program when executed by a computer may implement the data processing method of the embodiment shown in fig. 3.
Fig. 13 is a schematic structural diagram of an embodiment of a terminal device provided in the embodiments of the present application, where the terminal device may include a processing component 1301, a display component 1302, and a storage component 1303. The storage component 1303 is configured to store one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component 1301.
The processing component 1301 may be configured to:
receiving video data sent by a server side and playing the video data on a playing interface of the display component 1302;
receiving object related information sent by the server; the object related information is obtained by the server through matching based on at least one user feature of the second user side;
and outputting the object related information in the playing interface of the display component 1302.
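The client-side handling just described can be sketched as a small message loop. The message tuples and the `render` display callback are hypothetical; the actual message formats are not specified in the disclosure.

```python
def handle_server_messages(messages, render):
    """Minimal client loop: play the video, then overlay matched object info.

    `render(layer, data)` is an assumed display callback; "video" and
    "object_info" message kinds are illustrative names only.
    """
    for kind, payload in messages:
        if kind == "video":
            # Play the received video data on the playing interface.
            render("playback", payload)
        elif kind == "object_info":
            # Object info was matched server-side against user features;
            # output it on top of the playing interface.
            render("overlay", payload)

shown = []
handle_server_messages(
    [("video", "frame-1"), ("object_info", {"title": "item", "price": 9.9})],
    lambda layer, data: shown.append(layer),
)
```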
Wherein processing component 1301 may include one or more processors to execute computer instructions to perform all or part of the steps in the methods described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 1303 is configured to store various types of data to support operations at the terminal. The storage component may be implemented by any type of volatile or nonvolatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1302 may be an electroluminescent (EL) element, a liquid crystal display or a micro-display of similar structure, or a laser-scanning display capable of projecting directly onto the retina, or the like.
Of course, the terminal device may naturally also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices, and the like.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program can implement the information display method of the embodiment shown in fig. 5 when executed by a computer.
Fig. 14 is a schematic structural diagram of an embodiment of a terminal device provided in the embodiments of the present application, where the terminal device may include a processing component 1401, a display component 1402, and a storage component 1403. The storage component 1403 is used to store one or more computer program instructions; the one or more computer program instructions are for invocation and execution by the processing component 1401.
The processing component 1401 may be configured to:
receiving video data sent by a server side and outputting the video data on a playing interface of the display component 1402;
receiving a prompt identifier sent by the server; the prompt identifier is generated by the server based on a target part of an associated object related to the video data; the target part of the associated object is determined based on the video data;
and outputting the prompt identifier in a target display area where the target part is located in the playing interface of the display component 1402.
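One way the client might map the server-supplied target display area onto the playing interface is sketched below. The `(x, y, width, height)` coordinate convention and the clamping behavior are assumptions for illustration; the disclosure only requires that the prompt identifier be output in the area where the target part is located.

```python
def place_prompt(prompt_region, interface_size):
    """Clamp a server-supplied prompt region to the playback interface.

    Returns the rectangle where the prompt identifier should be drawn,
    shifted if needed so it stays fully inside the visible area.
    """
    x, y, w, h = prompt_region
    iw, ih = interface_size
    # Keep the prompt fully inside the visible playing interface.
    x = max(0, min(x, iw - w))
    y = max(0, min(y, ih - h))
    return (x, y, w, h)

# A region hanging off the bottom-right edge gets pulled back on screen.
rect = place_prompt((700, 500, 100, 60), (720, 480))
```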
Wherein the processing component 1401 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The storage component 1403 is configured to store various types of data to support operations at the terminal. The storage component may be implemented by any type of volatile or nonvolatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The display component 1402 may be an electroluminescent (EL) element, a liquid crystal display or a micro-display of similar structure, or a laser-scanning display capable of projecting directly onto the retina, or the like.
Of course, the terminal device may naturally also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal device and other devices, and the like.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program can implement the information display method of the embodiment shown in fig. 6 when executed by a computer.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (11)
1. A method of data processing, comprising:
acquiring video data uploaded by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
when a second preset event occurs in the video data, determining a target part associated with the second preset event in an associated object related to the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user side, so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface;
the method further comprises the steps of: determining object related information matched with the target part; and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
2. The method of claim 1, wherein the generating a hint identifier corresponding to the target site comprises:
determining a target display area of the target part in the playing interface;
generating a prompt pattern with the same size as the target display area;
The sending the prompt identifier to the second user side, so that the second user side outputs the prompt identifier on the playing interface includes:
and sending the prompt pattern to the second user side so that the second user side outputs the prompt pattern in a target display area where the target part is located in the playing interface.
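The sizing step recited in claim 2, generating a prompt pattern with the same size as the target display area, can be sketched minimally as follows. Representing the pattern as a character grid is purely illustrative; a real implementation would produce an overlay image or asset of matching dimensions.

```python
def make_prompt_pattern(region):
    """Generate a prompt pattern the same size as the target display area.

    `region` is an assumed (x, y, width, height) tuple; the returned
    'pattern' is a plain width-by-height grid of marker cells.
    """
    x, y, w, h = region
    return [["*"] * w for _ in range(h)]

# A 4x3 target display area yields a 4x3 pattern.
pattern = make_prompt_pattern((10, 20, 4, 3))
```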
3. The method of claim 1, wherein before the sending the object related information to the second client, further comprises:
receiving a display request sent by the second user side and aiming at the object related information associated with the target part; the display request is generated based on a preset trigger operation of a watching user of the second user side.
4. An information display method, comprising:
receiving video data sent by a server and outputting the video data on a playing interface;
receiving a prompt identifier sent by the server; the prompt identifier is generated by the server based on a target part associated with a second preset event in an associated object related to the video data; the target part associated with the second preset event in the associated object is determined when the second preset event occurs in the video data;
outputting the prompt identifier in a target display area where the target part is located in the playing interface;
the method further comprises the steps of: receiving object related information matched with the target part and sent by the server; and outputting the object related information in the playing interface.
5. The method of claim 4, wherein outputting the hint identifier in a target display area of the playback interface where the target location is located comprises:
determining a target display area where the target part is located in the playing interface;
and outputting the prompt identification in the target display area.
6. The method of claim 4, wherein the hint identification comprises a hint pattern; the outputting the prompt identifier in the target display area where the target part is located in the playing interface includes:
outputting the prompt pattern in a target display area where the target part is located in the playing interface; the prompt pattern is generated by the server side based on the size of the target display area.
7. The method of claim 4, further comprising, before the receiving of the object related information that is sent by the server and matches the target part:
generating a display request for the object related information associated with the target part based on a preset trigger operation of a watching user of the second user side;
and sending the display request to the server.
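The request generation recited in claim 7 might look like the following sketch; the trigger name and the request fields are hypothetical, since the disclosure only requires that the request be generated from a preset trigger operation of the watching user.

```python
def build_display_request(target_part, trigger):
    """Build the display request generated by the viewer's trigger action.

    `trigger` is an assumed event name (e.g. a tap on the prompt
    identifier); only the request shape is illustrated here.
    """
    if trigger != "tap_on_prompt":
        return None  # ignore unrelated interactions
    return {"type": "object_info_request", "target_part": target_part}

req = build_display_request("wrist", "tap_on_prompt")
```

The returned dictionary stands in for whatever message the second user side would actually send to the server.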
8. A data processing apparatus, comprising:
the video data acquisition module is used for acquiring video data uploaded by the first user;
the video data transmitting module is used for transmitting the video data to a second user side so that the second user side plays the video data on a playing interface;
the second determining module is used for determining a target part associated with a second preset event in an associated object related to the video data when the second preset event occurs in the video data;
the prompt identifier generation module is used for generating a prompt identifier corresponding to the target part;
the prompt identifier sending module is used for sending the prompt identifier to the second user side so that the second user side can output the prompt identifier in a target display area where the target part is located in the playing interface;
the apparatus further comprises: the information determining module is used for determining object related information associated with the target part; and the information sending module is used for sending the object related information to the second user side so that the second user side can output the object related information in a playing interface.
9. An information display device, comprising:
the third receiving module is used for receiving the video data sent by the server;
the second playing module is used for outputting the video data at a playing interface;
the fourth receiving module is used for receiving the prompt identifier sent by the server; the prompt identifier is generated by the server based on a target part associated with a second preset event in an associated object related to the video data; the target part associated with the second preset event in the associated object is determined when the second preset event occurs in the video data;
the second output module is used for outputting the prompt identifier in a target display area where the target part is located in the playing interface;
the apparatus further comprises: the object related information receiving module is used for receiving object related information matched with the target part and sent by the server side; and the object related information output module is used for outputting the object related information in the playing interface.
10. A server, comprising a processing component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
The processing component is configured to:
acquiring video data uploaded by a first user end, and sending the video data to a second user end so that the second user end plays the video data on a playing interface;
when a second preset event occurs in the video data, determining a target part associated with the second preset event in an associated object related to the video data;
generating a prompt identifier corresponding to the target part;
and sending the prompt identifier to the second user side, so that the second user side outputs the prompt identifier in a target display area where the target part is located in the playing interface;
the processing component is further configured to: determining object related information matched with the target part; and sending the object related information to the second user side so that the second user side can output the object related information in the playing interface.
11. A terminal device, comprising a processing component, a display component and a storage component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for being called by the processing component for execution;
The processing component is configured to:
receiving video data sent by a server side and outputting the video data on a playing interface of the display assembly;
receiving a prompt identifier sent by the server; the prompt identifier is generated by the server based on a target part associated with a second preset event in an associated object related to the video data; the target part associated with the second preset event in the associated object is determined when the second preset event occurs in the video data;
outputting the prompt identifier in a target display area where the target part is located in a play interface of the display assembly;
the processing component is further configured to: receiving object related information matched with the target part and sent by the server; and outputting the object related information in the playing interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210551425.6A CN115119004B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display device, server and terminal equipment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210551425.6A CN115119004B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display device, server and terminal equipment |
CN201910394238.XA CN111935488B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display method, device, server and terminal equipment |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910394238.XA Division CN111935488B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display method, device, server and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115119004A CN115119004A (en) | 2022-09-27 |
CN115119004B true CN115119004B (en) | 2024-03-29 |
Family
ID=73282562
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910394238.XA Active CN111935488B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display method, device, server and terminal equipment |
CN202210551425.6A Active CN115119004B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display device, server and terminal equipment |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910394238.XA Active CN111935488B (en) | 2019-05-13 | 2019-05-13 | Data processing method, information display method, device, server and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN111935488B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114896209A (en) * | 2022-05-13 | 2022-08-12 | 联想(北京)有限公司 | A file display method and electronic device |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101867648A (en) * | 2010-04-30 | 2010-10-20 | 华为终端有限公司 | Method for displaying prompt information in video program playing, and mobile terminal |
CN104065979A (en) * | 2013-03-22 | 2014-09-24 | 北京中传数广技术有限公司 | Method for dynamically displaying information related with video content and system thereof |
KR20140133190A (en) * | 2013-05-10 | 2014-11-19 | 엘지전자 주식회사 | Mobile terminal and controlling method thereof |
US8984405B1 (en) * | 2013-06-26 | 2015-03-17 | R3 Collaboratives, Inc. | Categorized and tagged video annotation |
WO2016155562A1 (en) * | 2015-04-03 | 2016-10-06 | 腾讯科技(深圳)有限公司 | Content item display system, method and device |
CN106331429A (en) * | 2016-08-31 | 2017-01-11 | 上海交通大学 | A video detail amplification method |
CN106792092A (en) * | 2016-12-19 | 2017-05-31 | 广州虎牙信息科技有限公司 | Live video flow point mirror display control method and its corresponding device |
CN106791970A (en) * | 2016-12-06 | 2017-05-31 | 乐视控股(北京)有限公司 | The method and device of merchandise news is presented in video playback |
CN107340852A (en) * | 2016-08-19 | 2017-11-10 | 北京市商汤科技开发有限公司 | Gestural control method, device and terminal device |
CN107578306A (en) * | 2016-08-22 | 2018-01-12 | 大辅科技(北京)有限公司 | Commodity in track identification video image and the method and apparatus for showing merchandise news |
CN107613399A (en) * | 2017-09-15 | 2018-01-19 | 广东小天才科技有限公司 | Video fixed-point playing control method and device and terminal equipment |
CN107637089A (en) * | 2015-05-18 | 2018-01-26 | Lg电子株式会社 | Display device and control method thereof |
CN107633441A (en) * | 2016-08-22 | 2018-01-26 | 大辅科技(北京)有限公司 | Commodity in track identification video image and the method and apparatus for showing merchandise news |
CN107944376A (en) * | 2017-11-20 | 2018-04-20 | 北京奇虎科技有限公司 | The recognition methods of video data real-time attitude and device, computing device |
WO2018092016A1 (en) * | 2016-11-19 | 2018-05-24 | Yogesh Chunilal Rathod | Providing location specific point of interest and guidance to create visual media rich story |
CN108174167A (en) * | 2018-03-01 | 2018-06-15 | 中国工商银行股份有限公司 | A kind of remote interaction method, apparatus and system |
CN108255304A (en) * | 2018-01-26 | 2018-07-06 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device and storage medium based on augmented reality |
CN108712683A (en) * | 2018-03-02 | 2018-10-26 | 北京奇艺世纪科技有限公司 | A kind of data transmission method, barrage information generating method and device |
CN108769772A (en) * | 2018-05-28 | 2018-11-06 | 广州虎牙信息科技有限公司 | Direct broadcasting room display methods, device, equipment and storage medium |
CN108881765A (en) * | 2018-05-25 | 2018-11-23 | 讯飞幻境(北京)科技有限公司 | Light weight recorded broadcast method, apparatus and system |
CN109274999A (en) * | 2018-10-08 | 2019-01-25 | 腾讯科技(深圳)有限公司 | A kind of video playing control method, device, equipment and medium |
CN109309762A (en) * | 2018-11-30 | 2019-02-05 | 努比亚技术有限公司 | Message treatment method, device, mobile terminal and storage medium |
CN109429074A (en) * | 2017-08-25 | 2019-03-05 | 阿里巴巴集团控股有限公司 | A kind of live content processing method, device and system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663641A (en) * | 2012-05-19 | 2012-09-12 | 黄洪程 | Electronic commerce method for unifying marketing channels |
US8761448B1 (en) * | 2012-12-13 | 2014-06-24 | Intel Corporation | Gesture pre-processing of video stream using a markered region |
US20140359448A1 (en) * | 2013-05-31 | 2014-12-04 | Microsoft Corporation | Adding captions and emphasis to video |
US10770113B2 (en) * | 2016-07-22 | 2020-09-08 | Zeality Inc. | Methods and system for customizing immersive media content |
CN106791895B (en) * | 2016-11-29 | 2020-07-03 | 北京小米移动软件有限公司 | Interaction method and device in E-commerce application program |
CN106791904A (en) * | 2016-12-29 | 2017-05-31 | 广州华多网络科技有限公司 | Live purchase method and device |
CN108076353A (en) * | 2017-05-18 | 2018-05-25 | 北京市商汤科技开发有限公司 | Business object recommends method, apparatus, storage medium and electronic equipment |
2019
- 2019-05-13 CN CN201910394238.XA patent/CN111935488B/en active Active
- 2019-05-13 CN CN202210551425.6A patent/CN115119004B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111935488A (en) | 2020-11-13 |
CN115119004A (en) | 2022-09-27 |
CN111935488B (en) | 2022-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11922675B1 (en) | Systems and methods for automating benchmark generation using neural networks for image or video selection | |
US11064257B2 (en) | System and method for segment relevance detection for digital content | |
TWI744368B (en) | Play processing method, device and equipment | |
CN107818180B (en) | Video association method, video display method, device and storage medium | |
US20170097679A1 (en) | System and method for content provision using gaze analysis | |
US20170251262A1 (en) | System and Method for Segment Relevance Detection for Digital Content Using Multimodal Correlations | |
EP3425483B1 (en) | Intelligent object recognizer | |
US12118768B1 (en) | Systems and methods for managing computer memory for scoring images or videos using selective web crawling | |
CN114025188B (en) | Live advertisement display method, system, device, terminal and readable storage medium | |
US10638197B2 (en) | System and method for segment relevance detection for digital content using multimodal correlations | |
WO2013138370A1 (en) | Interactive overlay object layer for online media | |
US12073641B2 (en) | Systems, devices, and/or processes for dynamic surface marking | |
WO2019183061A1 (en) | Object identification in social media post | |
US12142026B2 (en) | Systems and methods for using image scoring for an improved search engine | |
US20250014314A1 (en) | Systems and methods for automatic image generation and arrangement using a machine learning architecture | |
US12249117B2 (en) | Machine learning architecture for peer-based image scoring | |
US12249118B2 (en) | Systems and methods for using image scoring for an improved search engine | |
CN116821475A (en) | Video recommendation method and device based on client data and computer equipment | |
CN115119004B (en) | Data processing method, information display device, server and terminal equipment | |
CN113301362B (en) | Video element display method and device | |
US12073640B2 (en) | Systems, devices, and/or processes for dynamic surface marking | |
US20220318549A1 (en) | Systems, devices, and/or processes for dynamic surface marking | |
US12393975B2 (en) | Multi-hosted livestream in an open web ecommerce environment | |
US12198403B1 (en) | Systems and methods for automating benchmark generation using neural networks for image or video selection | |
US20240119509A1 (en) | Object highlighting in an ecommerce short-form video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||