
CN104883603A - Playing control method and system, and terminal device - Google Patents

Playing control method and system, and terminal device

Info

Publication number
CN104883603A
CN104883603A (application CN201510210500.2A)
Authority
CN
China
Prior art keywords
picture frame
target content
terminal device
content
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510210500.2A
Other languages
Chinese (zh)
Other versions
CN104883603B (en)
Inventor
刘洁
梁鑫
王兴超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510210500.2A priority Critical patent/CN104883603B/en
Publication of CN104883603A publication Critical patent/CN104883603A/en
Application granted granted Critical
Publication of CN104883603B publication Critical patent/CN104883603B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a playing control method and system, and a terminal device. According to identification information of a second video stream to be played and a timestamp of a second picture frame, a first terminal device obtains the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp, obtains a first position area that the user marked in advance on the first picture frame and that corresponds to target content specified in advance by the user, and sends the first position area to a second terminal device. The second terminal device generates a UI layer according to the first position area and update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame; the update content thus covers the target content and is shown to the user. Without tampering with the video stream data, personalized video content that satisfies the user's needs is presented to the user in real time, and the processing load of the playing terminal is reduced.

Description

Playing control method, system, and terminal device
Technical field
The present disclosure relates to the field of video display technology, and in particular to a playing control method, a system, and a terminal device.
Background
Intelligent terminals are increasingly widespread and have become the main way users watch multimedia video. Taking a mobile phone as an example, a user can download video content of interest from the network side and watch it, or watch locally stored video content.
In the related art, video playback proceeds according to the picture frames of the video stream, and the user can only control the playback mode, for example the playback progress or full-screen display. The user cannot control the played content, that is, cannot personalize the playback of video content of interest.
Summary of the invention
Embodiments of the present disclosure provide a playing control method, a system, and a terminal device. The technical solution is as follows.
According to a first aspect of embodiments of the present disclosure, a playing control method is provided, the method comprising:
sending a tag information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising identification information of a second video stream to be played on a second terminal device and a timestamp of a second picture frame;
receiving a response message returned by the first terminal device that comprises a first position area, the first position area being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by the user, where the first terminal device obtains the first video stream corresponding to the identification information and, from the first video stream, the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame;
determining, according to the first position area, a second position area on the screen displaying the second picture frame in which the target content is correspondingly displayed;
generating a user interface (UI) layer, in which preset update content corresponding to the target content is drawn in the part that coincides with the second position area;
when the screen displays the second picture frame, covering the second picture frame with the UI layer, so that the update content covers the target content and is shown to the user.
According to a second aspect of embodiments of the present disclosure, a playing control method is provided, the method comprising:
detecting, in a first picture frame of a first video stream, whether target content specified in advance by the user is present;
if the target content is determined to be present, determining a first position area on the first picture frame corresponding to the target content, and marking the first picture frame accordingly;
when a tag information acquisition request sent by a second terminal device is received, the acquisition request comprising a timestamp of a second picture frame in a second video stream to be played, obtaining from the first video stream the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame;
if the first position area corresponding to the target content specified in advance by the user can be obtained from the first picture frame, returning to the second terminal device a response message comprising the first position area, so that the second terminal device generates a user interface (UI) layer according to the first position area and preset update content corresponding to the target content, and, when the screen displays the second picture frame, covers the second picture frame with the UI layer so that the update content covers the target content and is shown to the user.
According to a third aspect of embodiments of the present disclosure, a second terminal device is provided, the device comprising:
a sending module configured to send a tag information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising a timestamp of a second picture frame in a second video stream to be played on the second terminal device;
a first receiving module configured to receive a response message returned by the first terminal device that comprises a first position area, the first position area being a region that the first terminal device obtains on a first picture frame, corresponding to the timestamp, in the first video stream and that corresponds to target content specified in advance by the user, the first picture frame being identical to the second picture frame;
a first locating module configured to determine, according to the first position area, a second position area on the screen displaying the second picture frame in which the target content is correspondingly displayed;
a first processing module configured to generate a user interface (UI) layer in which preset update content corresponding to the target content is drawn in the part that coincides with the second position area;
a display module configured to, when the screen displays the second picture frame, cover the second picture frame with the UI layer so that the update content covers the target content and is shown to the user.
According to a fourth aspect of embodiments of the present disclosure, a first terminal device is provided, the device comprising:
a detection module configured to detect, in a first picture frame of a first video stream, whether target content specified in advance by the user is present;
a second locating module configured to, if the target content is determined to be present, determine a first position area on the first picture frame corresponding to the target content and mark the first picture frame accordingly;
a first acquisition module configured to, when a tag information acquisition request sent by a second terminal device is received, the acquisition request comprising a timestamp of a second picture frame in a second video stream to be played, obtain from the first video stream the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame;
a second processing module configured to, if the first position area corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message comprising the first position area, so that the second terminal device generates a user interface (UI) layer according to the first position area and preset update content corresponding to the target content, and, when the screen displays the second picture frame, covers the second picture frame with the UI layer so that the update content covers the target content and is shown to the user.
According to a fifth aspect of embodiments of the present disclosure, a playing control system is provided, the system comprising the second terminal device described above and the first terminal device described above.
According to a sixth aspect of embodiments of the present disclosure, a second terminal device is provided, the device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
send a tag information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising identification information of a second video stream to be played on the second terminal device and a timestamp of a second picture frame;
receive a response message returned by the first terminal device that comprises a first position area, the first position area being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by the user, where the first terminal device obtains the first video stream corresponding to the identification information and, from the first video stream, the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame;
determine, according to the first position area, a second position area on the screen displaying the second picture frame in which the target content is correspondingly displayed;
generate a user interface (UI) layer in which preset update content corresponding to the target content is drawn in the part that coincides with the second position area;
when the screen displays the second picture frame, cover the second picture frame with the UI layer so that the update content covers the target content and is shown to the user.
According to a seventh aspect of embodiments of the present disclosure, a first terminal device is provided, the device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect, in a first picture frame of a first video stream, whether target content specified in advance by the user is present;
if the target content is determined to be present, determine a first position area on the first picture frame corresponding to the target content and mark the first picture frame accordingly;
when a tag information acquisition request sent by a second terminal device is received, the acquisition request comprising a timestamp of a second picture frame in a second video stream to be played, obtain from the first video stream the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame;
if the first position area corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message comprising the first position area, so that the second terminal device generates a user interface (UI) layer according to the first position area and preset update content corresponding to the target content, and, when the screen displays the second picture frame, covers the second picture frame with the UI layer so that the update content covers the target content and is shown to the user.
The technical solution provided by embodiments of the present disclosure can have the following beneficial effects.
According to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp are obtained on the first terminal device, together with the first position area that the user marked in advance on the first picture frame and that corresponds to the target content specified in advance by the user. The first position area is then sent to the second terminal device, which generates a UI layer according to the first position area and the update content corresponding to the target content, so that when the screen displays the second picture frame the UI layer covers it and the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented to the user in real time while the video stream is played, without tampering with the video stream data; the flexibility and efficiency of personalized video playing are improved, and the processing load of the playing terminal is reduced.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of a playing control method according to an exemplary embodiment;
Fig. 2A is a flow chart of a playing control method according to another exemplary embodiment;
Fig. 2B shows the screen of the second terminal device displaying a second picture frame that contains the target content;
Fig. 2C shows the screen of the second terminal device displaying a second picture frame in which the update content covers the target content;
Fig. 3A is a flow chart of a playing control method according to another exemplary embodiment;
Fig. 3B shows the screen of the terminal device displaying a second picture frame that contains the target content;
Fig. 3C shows the screen of the terminal device displaying a second picture frame in which the update content covers the target content;
Fig. 4 is a flow chart of a playing control method according to another exemplary embodiment;
Fig. 5 is a flow chart of a playing control method according to another exemplary embodiment;
Fig. 6 is a block diagram of a second terminal device according to an exemplary embodiment;
Fig. 7 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 8 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 9 is a block diagram of a second terminal device according to another exemplary embodiment;
Fig. 10 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 11 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 12 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 13 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 14 is a block diagram of a first terminal device according to another exemplary embodiment;
Fig. 15 is a block diagram of a playing control system according to an exemplary embodiment;
Fig. 16 is a block diagram of a terminal device according to an exemplary embodiment.
Specific embodiments of the present disclosure have been shown in the above drawings and are described in more detail hereinafter. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but to explain the concepts of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flow chart of a playing control method according to an exemplary embodiment. In this embodiment, the playing control method is described as applied in a second terminal device comprising a display screen. The playing control method can comprise the following steps.
In step 101, a tag information acquisition request is sent to a first terminal device storing a first video stream, the acquisition request comprising identification information of a second video stream to be played on the second terminal device and a timestamp of a second picture frame in the second video stream.
In this embodiment, the first video stream is stored on the first terminal device, and each picture frame in the first video stream is called a first picture frame; the second video stream is stored on the second terminal device, and each picture frame in the second video stream is called a second picture frame. A first video stream with the same identification information as the second video stream is a video stream whose content is identical to that of the second video stream. The first terminal device receives in advance the target content that the user specifies for the first video stream of interest, and the second terminal device receives in advance the update content, corresponding to the target content, provided by the user.
First, the second terminal device receives the video stream that the user designates for playing; the designated video stream may be one that the second terminal device receives from another network device, or one stored in advance locally on the second terminal device.
Then, according to the user's individual needs, while playing the second video stream the second terminal device sends a tag information acquisition request to the first terminal device storing the first video stream, the acquisition request comprising the identification information of the second video stream to be played and the timestamp of the second picture frame in the second video stream.
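The disclosure specifies only that the request carries the stream identification information and the frame timestamp, not a wire format. As an illustration only, a minimal sketch in Python, assuming a JSON encoding and invented field names (`stream_id`, `timestamp_ms`):

```python
import json

def build_tag_info_request(stream_id: str, frame_timestamp_ms: int) -> bytes:
    """Build a hypothetical tag information acquisition request.

    Only the two payload fields come from the disclosure; the JSON
    wire format and field names are assumptions for illustration.
    """
    request = {
        "type": "tag_info_request",
        "stream_id": stream_id,              # identifies the matching first video stream
        "timestamp_ms": frame_timestamp_ms,  # locates the matching first picture frame
    }
    return json.dumps(request).encode("utf-8")

req = build_tag_info_request("video-12345", 4200)
print(json.loads(req)["timestamp_ms"])  # 4200
```

Any serialization carrying the same two fields would serve equally well.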
In step 102, a response message returned by the first terminal device that comprises a first position area is received, the first position area being a region that the first terminal device obtains on a first picture frame and that corresponds to target content specified in advance by the user, where the first terminal device obtains the first video stream corresponding to the identification information and, from the first video stream, the first picture frame corresponding to the timestamp, the first picture frame being identical to the second picture frame.
The first terminal device parses the tag information acquisition request sent by the second terminal device to obtain the identification information of the second video stream to be played and the timestamp of the second picture frame in that stream. The first terminal device then obtains the first video stream corresponding to the identification information locally, and obtains from the first video stream the first picture frame corresponding to the timestamp; note that this first picture frame is identical to the second picture frame.
The first terminal device then queries whether the first picture frame contains a first position area corresponding to the target content specified in advance by the user. The first position area can be identified by the coordinates of key points, or by displaying the region as a layer. The target content specified in advance by the user comprises at least one of a character's face, clothing, a color, text, or a pattern in the video stream. If the query finds that the first picture frame contains a first position area corresponding to the target content, a response message comprising the first position area is sent to the second terminal device, which parses the response message to obtain the first position area.
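The disclosure does not say how the first terminal device stores the marked areas. The following sketch assumes a simple in-memory mapping keyed by stream identifier and frame timestamp, with the first position area held as bounding-box key-point coordinates; all names and values are illustrative:

```python
# Hypothetical store of user-marked first position areas on the first
# terminal device, keyed by stream identification information and then
# by frame timestamp. The region uses the key-coordinate representation
# the disclosure mentions: here (left, top, right, bottom).
marked_regions = {
    "video-12345": {
        4200: (120, 80, 260, 220),  # timestamp_ms -> first position area
    },
}

def lookup_first_position_area(stream_id, timestamp_ms):
    """Return the first position area marked on the matching first
    picture frame, or None when the stream or frame carries no mark."""
    stream = marked_regions.get(stream_id)
    if stream is None:
        return None                  # no matching first video stream
    return stream.get(timestamp_ms)  # None when the frame is unmarked

print(lookup_first_position_area("video-12345", 4200))  # (120, 80, 260, 220)
```

A `None` result corresponds to the case where no response containing a first position area is returned.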
In step 103, a second position area in which the target content is correspondingly displayed is determined, according to the first position area, on the screen displaying the second picture frame.
According to the first position area on the first picture frame corresponding to the target content specified by the user, the terminal device determines the second position area in which the target content is correspondingly displayed on the screen displaying the second picture frame. There are many ways to determine the second position area on the screen from the first position area; two are illustrated below.
Mode one:
First determine the position of the first position area on the second picture frame, then scale the second picture frame, with the first position area scaled synchronously. When the second picture frame has been scaled to the screen size, record the scaled first position area information; this information can serve as the second position area in which the target content is correspondingly displayed on the screen showing the picture frame.
Mode two:
First determine the position of the first position area on the second picture frame, then obtain a plurality of first coordinates on the first position area. For example, if the first position area is a square, the plurality of first coordinates can be the coordinates of its four corners; if the first position area is a circle, they can be the coordinates of the intersections of at least two diameters with the circle's boundary.
According to the size ratio between the second picture frame and the screen, scale the plurality of first coordinates on the first position area proportionally to obtain a plurality of second coordinates corresponding to the plurality of first coordinates.
The second position area in which the target content is correspondingly displayed on the screen showing the second picture frame can then be determined from the plurality of second coordinates.
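Mode two's proportional coordinate adjustment can be sketched as follows, assuming a square first position area represented by its four corner coordinates; the frame and screen sizes are invented for illustration:

```python
def scale_region_to_screen(first_coords, frame_size, screen_size):
    """Scale each first coordinate by the screen-to-frame size ratio
    to obtain the corresponding second coordinates (mode two)."""
    frame_w, frame_h = frame_size
    screen_w, screen_h = screen_size
    return [(x * screen_w / frame_w, y * screen_h / frame_h)
            for x, y in first_coords]

# Four corners of a square first position area on a 1280x720 frame,
# mapped onto a 1920x1080 screen (every coordinate scales by 1.5).
corners = [(120, 80), (260, 80), (120, 220), (260, 220)]
second_coords = scale_region_to_screen(corners, (1280, 720), (1920, 1080))
print(second_coords)  # [(180.0, 120.0), (390.0, 120.0), (180.0, 330.0), (390.0, 330.0)]
```

The second position area is then the region bounded by these scaled coordinates.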
In step 104, a user interface (UI) layer is generated, in which preset update content corresponding to the target content is drawn in the part that coincides with the second position area.
The second terminal device uses a UI control to generate a new blank user interface (UI) layer.
It then parses the file storing the update content corresponding to the target content to obtain a UI element of the update content, and adds this UI element to the part of the blank UI layer that coincides with the second position area, i.e. the screen area used to display the target content.
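As a sketch of this step, the UI layer can be modeled as a pixel grid in which `None` marks transparency; this grid representation is an assumption for illustration, not the disclosure's layer format:

```python
def draw_ui_layer(screen_size, second_position_area, update_pixel):
    """Create a blank (fully transparent) UI layer and draw the update
    content only in the part coinciding with the second position area.
    None marks transparent cells, so the underlying second picture
    frame would show through everywhere else."""
    width, height = screen_size
    left, top, right, bottom = second_position_area
    layer = [[None] * width for _ in range(height)]
    for y in range(top, bottom):
        for x in range(left, right):
            layer[y][x] = update_pixel
    return layer

layer = draw_ui_layer((8, 6), (2, 1, 5, 4), "U")
print(layer[2][3])  # 'U'  -> inside the second position area
print(layer[0][0])  # None -> transparent outside the area
```

In a real implementation the UI element would of course be an image or widget drawn by the platform's UI toolkit rather than a single pixel value.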
In step 105, when the screen displays the second picture frame, the UI layer is covered on the second picture frame, so that the update content covers the target content and is shown to the user.
While the second terminal device plays the video stream, when the screen displays the second picture frame, the UI layer in which the update content is drawn in the part coinciding with the second position area is covered on the second picture frame, so that the update content covers the target content specified by the user and the personalized video content meeting the user's needs is presented to the user.
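As an illustrative sketch of the covering step, both the second picture frame and the UI layer can be modeled as pixel grids in which `None` marks a transparent layer cell (an assumption, not the disclosure's representation): wherever the layer is opaque, its pixel hides the frame pixel underneath.

```python
def composite(frame, ui_layer):
    """Overlay the UI layer on the second picture frame: an opaque
    layer cell (the update content) covers the frame cell; transparent
    cells (None) let the original frame show through."""
    return [
        [lp if lp is not None else fp for fp, lp in zip(frow, lrow)]
        for frow, lrow in zip(frame, ui_layer)
    ]

frame = [["a", "b"],
         ["c", "d"]]
layer = [[None, "U"],
         [None, None]]
print(composite(frame, layer))  # [['a', 'U'], ['c', 'd']]
```

This is the per-frame effect the user sees: the frame itself is untouched, and only the displayed result changes.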
In summary, in the playing control method provided by this embodiment, the first video stream corresponding to the identification information of the second video stream to be played and the first picture frame corresponding to the timestamp of the second picture frame are obtained on the first terminal device, together with the first position area that the user marked in advance on the first picture frame and that corresponds to the target content specified in advance by the user. The first position area is then sent to the second terminal device, which generates a UI layer according to the first position area and the update content corresponding to the target content, so that when the screen displays the second picture frame the UI layer covers it and the update content covers the target content shown to the user. Personalized video content meeting the user's needs is thus presented in real time while the video stream is played, without tampering with the video stream data, without modifying the original video stream data in advance according to the user's needs, and without occupying large amounts of storage space, improving the flexibility and efficiency of personalized video playing.
In the above embodiment, the generated UI layer covers the second picture frame containing the target content so that the update content covers the target content, presenting the effect of personalized playing on the screen. Note that there are multiple ways to generate and apply the UI layer; different UI layer processing techniques can be chosen according to, for example, the proportion of the second picture frame occupied by the target content, or its arrangement, to improve processing efficiency. This is described in detail in the embodiments shown in Fig. 2 and Fig. 3 below.
Fig. 2 A is the flow chart of a kind of control method for playing back according to another exemplary embodiment, and the present embodiment should be configured to this control method for playing back to comprise in the second terminal equipment of display screen and illustrate.
The object content of specifying for user in the present embodiment is the first character face, and the application scenarios that these distributed areas of the first character face on second picture frame are unique, adopt the Local treatment mode of UI layer to realize, this control method for playing back can comprise following several step:
In step 201, first terminal equipment to storage first video flowing sends label information and obtains request, described acquisition request comprises: the identification information of the second video flowing to be played on the second terminal equipment, and the timestamp of second picture frame in described second video flowing.
In step 202., receive response message that described first terminal equipment returns, that comprise primary importance region, described primary importance region is that described first terminal equipment obtains first video flowing corresponding with described identification information, the first that obtain in first video flowing from described, corresponding with described timestamp picture frame, and obtain on described first picture frame and the region corresponding to the preassigned object content of user, wherein, described first picture frame is identical with described second picture frame.
In step 203, according to the second place region that described primary importance region determines showing on the screen of described second picture frame, correspondence shows described object content.
Step 201-step 203 in the present embodiment can step 101-step 103 in embodiment shown in Figure 1.
In step 204, a UI layer whose boundary matches the boundary of the second position region is generated, and the update content is drawn over the whole UI layer.
Specifically, the second terminal device uses a UI control to generate a new blank user interface (UI) layer whose boundary matches the boundary of the second position region, then parses the file storing the update content corresponding to the target content to obtain a UI element of the update content, and adds this UI element over the whole blank UI layer.
In step 205, when the screen displays the second picture frame, the UI layer is aligned over and covers the second position region of the picture frame in which the target content is displayed, so that the update content covers the target content and is shown to the user.
During playback of the video stream by the second terminal device, when this second picture frame is displayed on the screen, the UI layer is aligned over and covers the second position region in which the target content is displayed on this second picture frame, so that the update content covers the user-specified target content and personalized video content that meets the user's needs is presented to the user.
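The local processing mode of steps 204 and 205 can be sketched as follows, modelling frames and layers as 2D grids of single-character "pixels". This is illustrative only; a real player would use the platform's UI control and compositing APIs, and all names here are assumed:

```python
# Sketch of the "local" UI-layer mode: a layer whose boundary matches the
# second position region is filled entirely with the update content, then
# pasted over that region of the frame at display time.

def make_local_layer(region, fill):
    """Blank layer sized to the region, drawn full of the update content."""
    (x0, y0), (x1, y1) = region
    return [[fill] * (x1 - x0) for _ in range(y1 - y0)]

def composite_local(frame, layer, region):
    """Overlay the layer on the frame at display time, without altering the
    frame data itself (the video stream is never tampered with)."""
    (x0, y0), (x1, y1) = region
    out = [row[:] for row in frame]
    for dy, row in enumerate(layer):
        for dx, px in enumerate(row):
            out[y0 + dy][x0 + dx] = px
    return out

frame = [list("TTTT"), list("TRRT"), list("TRRT"), list("TTTT")]  # R = target
layer = make_local_layer(((1, 1), (3, 3)), "U")                   # U = update
shown = composite_local(frame, layer, ((1, 1), (3, 3)))
print("".join(shown[1]))  # TUUT
```

Note that the original `frame` is left untouched; only the displayed output changes, mirroring the patent's claim that the video stream data is not modified.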
As an example, the screen of the second terminal device shown in Fig. 2B displays the second picture frame containing the target content, and the screen shown in Fig. 2C displays the second picture frame with the target content covered by the update content. With reference to Fig. 2B and Fig. 2C, suppose the target content specified by the user is the "robot cat face" on this second picture frame and the update content is the "bear face". In detail: the first position region corresponding to the target content is marked in advance on the first picture frame sent by the first terminal device; the target content on the second picture frame, namely the "robot cat face", is known from the first position region; the file storing the "bear face" is then parsed to obtain a UI element, and this UI element is added onto a blank UI layer whose boundary matches the boundary of the second position region.
During playback of the video stream by the second terminal device, when this second picture frame is displayed on the screen, the UI layer is aligned over and covers the "robot cat face" region of the second picture frame, so that the "bear face" covers the "robot cat face" and personalized video content that meets the user's needs is presented to the user.
In summary, the play control method provided by this embodiment uses the local processing mode of the UI layer for the application scenario in which the target content specified by the user is a first character's face and the distribution region of that face on the picture frame is unique: when the original video stream is played and this picture frame is displayed on the screen, the UI layer is aligned over and covers the second position region used to display the target content, so that the update content covers the target content and is shown to the user. When the video stream is played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 3A is a flow chart of a play control method according to another exemplary embodiment. This embodiment is illustrated with the play control method configured in a second terminal device that includes a display screen.
In this embodiment, the target content specified by the user consists of multiple patterns whose distribution regions on the second picture frame are scattered, so the whole-layer processing mode of the UI layer is adopted. The play control method may include the following steps:
In step 301, the second terminal device sends a mark information acquisition request to the first terminal device that stores the first video stream, the acquisition request including: the identification information of the second video stream to be played on the second terminal device, and the timestamp of a second picture frame in the second video stream.
In step 302, a response message returned by the first terminal device and containing a first position region is received. The first position region is obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first picture frame corresponding to the timestamp, and acquiring on that first picture frame the region corresponding to the target content specified in advance by the user, where the first picture frame is identical to the second picture frame.
In step 303, a second position region, in which the target content is correspondingly displayed on the screen that displays the second picture frame, is determined according to the first position region.
For steps 301 to 303 of this embodiment, refer to steps 101 to 103 of the embodiment shown in Fig. 1.
In step 304, a UI layer whose boundary matches the screen boundary is generated; the update content is drawn in a third position region on the UI layer that matches and corresponds to the second position region, and the part outside the third position region is made transparent.
Specifically, the second terminal device uses a UI control to generate a new blank user interface (UI) layer whose boundary matches the screen boundary, then parses the file storing the update content corresponding to the target content to obtain a UI element of the update content, adds this UI element to the third position region on the UI layer that matches and corresponds to the second position region on the screen, and applies transparency processing to the part of the UI layer outside the third position region.
In step 305, when the screen displays the second picture frame, the UI layer is overlaid on the second picture frame, so that the update content covers the target content and is shown to the user.
During playback of the video stream by the second terminal device, when this second picture frame is displayed on the screen, the whole UI layer is overlaid on the second picture frame, so that the update content covers the user-specified target content and personalized video content that meets the user's needs is presented to the user.
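The whole-layer mode of steps 304 and 305 can be sketched in the same grid model, using `None` to stand in for a transparent pixel; again this is illustrative, not the patent's implementation:

```python
# Sketch of the whole-layer mode: one screen-sized layer carries update
# content in each "third position region" and is transparent everywhere else.

def make_fullscreen_layer(screen_size, patches):
    """Screen-sized layer; draw each update patch in its region, leave the
    rest transparent (None)."""
    w, h = screen_size
    layer = [[None] * w for _ in range(h)]
    for (x0, y0), (x1, y1), fill in patches:
        for y in range(y0, y1):
            for x in range(x0, x1):
                layer[y][x] = fill
    return layer

def composite_fullscreen(frame, layer):
    # transparent layer pixels let the frame show through
    return [[l if l is not None else f for f, l in zip(frow, lrow)]
            for frow, lrow in zip(frame, layer)]

frame = [list("AAAA"), list("ABAA"), list("AACA"), list("AAAA")]
layer = make_fullscreen_layer((4, 4), [((1, 1), (2, 2), "M"),   # covers B
                                       ((2, 2), (3, 3), "P")])  # covers C
shown = composite_fullscreen(frame, layer)
print("".join(shown[1]), "".join(shown[2]))  # AMAA AAPA
```

A single screen-sized layer handles scattered target regions in one overlay, which is why this mode suits the multiple-pattern scenario.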
As an example, the screen of the terminal device shown in Fig. 3B displays the second picture frame containing the target content, and the screen shown in Fig. 3C displays the second picture frame with the target content covered by the update content. With reference to Fig. 3B and Fig. 3C, suppose the target content specified by the user comprises a first pattern and a second pattern: the first pattern is the character's lower body on this second picture frame, with the "mermaid's tail" as its corresponding update content, and the second pattern is the "top of the robot cat's head", with the "top of the robot cat's head with an aircraft" as its corresponding update content. In detail: the first position region corresponding to the target content is marked in advance on the first picture frame sent by the first terminal device; the target content on the second picture frame, namely the character's lower body and the "top of the robot cat's head", is known from the first position region; the files storing the "mermaid's tail" and the "top of the robot cat's head with an aircraft" are then parsed to obtain UI elements, these UI elements are added to the third position region on the UI layer that matches and corresponds to the second position region on the screen, and the part outside the third position region is made transparent.
During playback of the video stream by the second terminal device, when this second picture frame is displayed on the screen, the whole UI layer is overlaid on the second picture frame, so that the "mermaid's tail" pattern covers the lower-body pattern and the "top of the robot cat's head with an aircraft" pattern covers the "top of the robot cat's head" pattern, and personalized video content that meets the user's needs is presented to the user.
In summary, the play control method provided by this embodiment uses the whole-layer processing mode of the UI layer for the application scenario in which the target content specified by the user consists of multiple patterns whose distribution regions on the picture frame are scattered: when the original video stream is played and this picture frame is displayed on the screen, the whole UI layer is overlaid on the picture frame, so that the update content covers the target content and is shown to the user. When the video stream is played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
Fig. 4 is a flow chart of a play control method according to another exemplary embodiment. This embodiment is illustrated with the play control method configured in a first terminal device that includes a display screen.
In this embodiment, the first video stream is stored on the first terminal device, and each picture frame in the first video stream is called a first picture frame; the second video stream is stored on the second terminal device, and each picture frame in the second video stream is called a second picture frame. A first video stream and a second video stream with the same identification information are video streams with identical content. The first terminal device receives in advance the target content specified by the user for the first video stream of interest, and the second terminal device receives in advance the update content, corresponding to this target content, provided by the user.
In step 401, a first picture frame in the first video stream is detected to judge whether the target content specified in advance by the user is present.
According to the user's demand for personalized playback of the selected video stream, the first terminal device first detects the first picture frame of the first video stream specified by the user and judges whether the target content specified in advance by the user is present in this first picture frame. It should be noted that there are many implementations for detecting whether the target content is present in the first picture frame, for example: comparing the pixels of the target content with the pixels in the first picture frame, matching the characteristic information of the target content with the characteristic information in the first picture frame, or comparing the spectral information of the target content with the spectral information in the first picture frame. A suitable detection method can be selected according to the actual target content, and this embodiment places no limit on it.
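Of the detection methods just listed, the pixel-comparison approach is the simplest to sketch: slide the target's pixel block over the frame and report the first matching region. Character grids stand in for pixels, and all names are illustrative:

```python
# Sketch of pixel-comparison detection: exact sliding-window match of the
# target content's pixels against the first picture frame.

def find_target(frame, target):
    """Return ((x0, y0), (x1, y1)) of the first exact pixel match, else None."""
    th, tw = len(target), len(target[0])
    fh, fw = len(frame), len(frame[0])
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            if all(frame[y + dy][x + dx] == target[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                return (x, y), (x + tw, y + th)
    return None

frame = [list("....."), list(".XX.."), list(".XX.."), list(".....")]
target = [list("XX"), list("XX")]
print(find_target(frame, target))  # ((1, 1), (3, 3))
```

The returned rectangle plays the role of the first position region that step 402 marks on the first picture frame; the feature-matching and spectral methods differ only in what is compared inside the window.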
In step 402, if it is judged that the target content is present, the first position region corresponding to the target content is determined on the first picture frame and marked on the first picture frame.
If the first terminal device judges that the target content specified by the user is present, it determines on this first picture frame the first position region corresponding to this target content and marks it on this first picture frame. It should be noted that the first position region can be identified by the coordinate information of key points, or by the region display mode of a layer.
In step 403, when a mark information acquisition request sent by the second terminal device is received, the acquisition request including: the identification information of the second video stream to be played, and the timestamp of a second picture frame, the first video stream corresponding to the identification information is acquired, and the first picture frame corresponding to the timestamp is acquired from the first video stream, where the first picture frame is identical to the second picture frame.
In step 404, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, a response message containing the first position region is returned to the second terminal device, so that the second terminal device generates a user interface (UI) layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, overlays the UI layer on the second picture frame so that the update content covers the target content and is shown to the user.
For the implementation of steps 403 and 404 of this embodiment, refer to steps 101 to 105 of the embodiment shown in Fig. 1; details are not repeated here.
In summary, in the play control method provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of a second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp are obtained from the first terminal device, together with the first position region marked in advance on the first picture frame and corresponding to the target content specified in advance by the user; this first position region is then sent to the second terminal device, and the second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame the UI layer is overlaid on it, and the update content covers the target content and is shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data; this avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, improves the flexibility and efficiency of personalized video playback, and lightens the processing load of the playback terminal.
Fig. 5 is a flow chart of a play control method according to another exemplary embodiment. This embodiment is illustrated with the play control method configured in a first terminal device that includes a display screen. In this embodiment, the detection of the target content in the first picture frame uses characteristic information matching, and the positioning of the first position region corresponding to the target content on the first picture frame uses a positioning method based on an image boundary tracing algorithm; the implementation process of the play control method is described in detail below and may include the following steps:
In step 501, the characteristic information of a first picture frame in the first video stream is obtained.
The first terminal device receives the first video stream that the user specifies for playback, together with the target content specified for this first video stream. Different ways of obtaining the characteristic information are selected according to the target content specified in advance by the user, illustrated as follows:
Mode one: if the target content specified in advance by the user is a first pattern distributed at multiple positions in the background, the characteristic information of all regions on this first picture frame is extracted one by one according to a preset unit window, for example a unit window 30 pixels long and 30 pixels wide. For example, if this first picture frame is a picture 900 pixels long and 900 pixels wide, performing feature extraction on the picture frame with a unit window of 30 by 30 pixels requires extracting 900 pieces of characteristic information (a 30 by 30 grid of windows). This mode is highly universal and can be used for all types of target content.
Mode two: if the target content specified in advance by the user is a character's face, a processing model such as a neural-network face recognition model or a classifier comparison model can be adopted: the facial region is first determined in the first picture frame, and facial characteristic information is then extracted from this facial region, avoiding extracting the characteristic information of the picture one by one from all regions of the first picture frame. For target content whose local region is easy to locate, this mode improves processing efficiency.
In step 502, whether the characteristic information is the target content specified in advance by the user is identified according to a characteristic database, where the characteristic database includes the sample characteristic information corresponding to the target content.
The first terminal device identifies, according to the characteristic database, whether the characteristic information obtained from this first picture frame is the target content specified by the user. The characteristic database includes the sample characteristic information corresponding to the target content, so the first terminal device matches the sample characteristic information corresponding to the target content in the characteristic database one by one against the characteristic information obtained from this first picture frame. If the matching succeeds, the target content specified in advance by the user is present in the first picture frame; if the matching fails, the target content specified in advance by the user is not present in the first picture frame.
It should be noted that the content in the characteristic database can be sample characteristic information fixed in advance by the service provider of the video stream. More flexibly, besides the previously fixed sample characteristic information, the characteristic database can also include sample characteristic information generated in real time, for a video stream sent by the user, from the processing content specified by the user.
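The matching loop of step 502 can be sketched as below, modelling characteristic information as numeric vectors and the characteristic database as a mapping from target-content name to sample vectors. The tolerance value and all names are assumptions of this sketch, not the patent's matching rule:

```python
# Sketch of step 502: match frame features one by one against the sample
# characteristic information stored in the characteristic database.

def matches(feature, sample, tol=0.1):
    # a feature matches a stored sample when every component is within `tol`
    return len(feature) == len(sample) and all(
        abs(a - b) <= tol for a, b in zip(feature, sample))

def contains_target(frame_features, feature_db, target_name):
    """True when any extracted feature matches any sample for the target."""
    return any(matches(f, s)
               for f in frame_features
               for s in feature_db.get(target_name, []))

db = {"robot cat face": [[0.9, 0.1, 0.4]]}
frame_features = [[0.2, 0.2, 0.2], [0.85, 0.12, 0.45]]
print(contains_target(frame_features, db, "robot cat face"))  # True
```

A successful match means the target content is present in the first picture frame and step 503 proceeds; a failed match means it is not.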
As an example, if the target content is a first pattern, the pattern region on the first picture frame is determined according to a boundary contour algorithm; pattern features are extracted from the pattern region; the pattern features are matched against the sample pattern features corresponding to the first pattern in the characteristic database; if the matching succeeds, it is judged that the first pattern is present in the pattern region; if the matching fails, it is judged that the first pattern is not present in the pattern region.
As another example, if the target content is a first character's face, the facial region on the picture frame is determined according to a facial-feature range obtained by pre-training; facial features are extracted from the facial region; the facial features are matched against the sample facial features corresponding to the first character's face in the characteristic database; if the matching succeeds, it is judged that the first character's face is present in the facial region; if the matching fails, it is judged that the first character's face is not present in the facial region.
In summary, by first locating the region and then extracting features from it, whether the target content is present can be determined quickly, which improves processing efficiency.
In step 503, if it is judged that the target content is present, the smoothness of the region boundary corresponding to the target content is obtained based on an image boundary tracing algorithm.
If, by detecting the first picture frame, the first terminal device judges that the target content specified in advance by the user is present in the picture frame, it obtains the smoothness of the region boundary corresponding to this target content through an image boundary tracing algorithm. Image boundary tracing algorithms include a binary-based image boundary tracing algorithm, a wavelet-based image boundary tracing algorithm, and so on, and can be selected according to the actual application need.
In step 504, whether the smoothness reaches a preset threshold is judged; if it is judged that the smoothness reaches the preset threshold, step 505 is performed; if it is judged that the smoothness does not reach the preset threshold, step 506 is performed.
It should be noted that different image boundary tracing algorithms are preset with different thresholds; for example, the threshold corresponding to the binary-based image boundary tracing algorithm is A, and the threshold corresponding to the wavelet-based image boundary tracing algorithm is B. The obtained smoothness is therefore compared with the threshold corresponding to the algorithm adopted: if the smoothness reaches the preset threshold, step 505 is performed; if it does not, step 506 is performed.
In step 505, if it is judged that the smoothness reaches the threshold, the region boundary corresponding to the target content is taken as the first position region and marked on the first picture frame.
When it is judged that the smoothness of the region boundary corresponding to this target content reaches the preset threshold, the region boundary is easy to segment, so the region boundary corresponding to the target content is directly taken as the first position region and marked on the first picture frame.
In step 506, if it is judged that the smoothness does not reach the threshold, a smooth region corresponding to the region boundary is determined, taken as the first position region, and marked on the first picture frame.
When it is judged that the smoothness of the region boundary corresponding to this target content does not reach the preset threshold, the region boundary is not easy to segment; a smooth region corresponding to the region boundary can be determined according to a preset compensation parameter, and this smooth region is then taken as the first position region and marked on the first picture frame.
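Steps 503 to 506 can be sketched as follows. The smoothness measure here (the fraction of boundary points where the traced direction does not turn) and the padding used to derive the smooth region are crude stand-ins for the patent's boundary tracing algorithm and compensation parameter, chosen only to show the threshold branch:

```python
# Sketch of steps 503-506: score the traced boundary's smoothness, then use
# the boundary itself (step 505) or a padded smooth region (step 506).

def smoothness(boundary):
    """Fraction of interior points where the step direction does not change."""
    turns = 0
    for (x0, y0), (x1, y1), (x2, y2) in zip(boundary, boundary[1:], boundary[2:]):
        if (x1 - x0, y1 - y0) != (x2 - x1, y2 - y1):
            turns += 1
    return 1 - turns / max(len(boundary) - 2, 1)

def first_position_region(boundary, threshold=0.8, pad=1):
    if smoothness(boundary) >= threshold:
        return boundary                        # step 505: boundary itself
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    # step 506: padded bounding box as the "smooth region"
    return [(min(xs) - pad, min(ys) - pad), (max(xs) + pad, max(ys) + pad)]

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
jagged = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(first_position_region(straight))  # the boundary itself
print(first_position_region(jagged))    # [(-1, -1), (5, 2)]
```

Different thresholds would correspond to the different boundary tracing algorithms (the values A and B mentioned above).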
In step 507, when a mark information acquisition request sent by the second terminal device is received, the acquisition request including: the identification information of the second video stream to be played, and the timestamp of a second picture frame, the first video stream corresponding to the identification information is acquired, and the first picture frame corresponding to the timestamp is acquired from the first video stream, where the first picture frame is identical to the second picture frame.
In step 508, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, a response message containing the first position region is returned to the second terminal device, so that the second terminal device generates a user interface (UI) layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, overlays the UI layer on the second picture frame so that the update content covers the target content and is shown to the user.
For the implementation of steps 507 and 508 of this embodiment, refer to steps 101 to 105 of the embodiment shown in Fig. 1; details are not repeated here.
In summary, in the play control method provided by this embodiment, the first terminal device uses characteristic information matching for the detection of the target content in the first picture frame, and a positioning method based on an image boundary tracing algorithm for the positioning of the first position region corresponding to the target content on the first picture frame. The first position region is sent to the second terminal device as required, so that the second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content and covers the target content with the update content through the UI layer. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data, improving the flexibility and efficiency of personalized video playback. At the same time, because the first position region is located on the first terminal device and the second terminal device queries it in real time to generate the UI layer directly, processing efficiency is improved; and because the detection is centralized, detection processing resources are saved.
It should be added that, before step 501, the method further includes:
receiving picture frames of multiple video streams;
obtaining the sample characteristic information corresponding to the sample content preset by the user in each picture frame;
storing the correspondence between the sample characteristic information and the sample content in the characteristic database.
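The database-building steps above can be sketched as a dictionary keyed by sample-content name. The feature extractor and the region-cropping functions are placeholders for whatever the first terminal device actually uses; every name here is an assumption of this sketch:

```python
# Sketch of building the characteristic database: for each incoming picture
# frame, extract a sample feature per user-preset sample content and store
# the feature/content correspondence.

def extract_feature(frame_region):
    # placeholder extractor: mean pixel value of the region, as a 1-vector
    pixels = [p for row in frame_region for p in row]
    return [sum(pixels) / len(pixels)]

def register_samples(feature_db, picture_frames, sample_regions):
    """`sample_regions` maps a sample-content name to a function that crops
    that content's region out of a frame (assumed interface)."""
    for frame in picture_frames:
        for name, crop in sample_regions.items():
            feature_db.setdefault(name, []).append(extract_feature(crop(frame)))
    return feature_db

db = {}
frames = [[[1, 1], [3, 3]]]                      # one tiny 2x2 "frame"
register_samples(db, frames, {"sample": lambda f: f})
print(db)  # {'sample': [[2.0]]}
```

Because the database is just accumulated correspondences, registering further frames over time diversifies the personalized content that can be matched, as the summary below notes.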
In summary, the play control method provided by this embodiment can dynamically update the characteristic database, so that, as usage time accumulates, the personalized playback content provided for the user becomes more diversified.
The following are apparatus embodiments of the present disclosure, which can be configured to perform the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present disclosure.
Fig. 6 is a block diagram of a second terminal device according to an exemplary embodiment. As shown in Fig. 6, this second terminal device comprises: a sending module 11, a first receiving module 12, a first positioning module 13, a first processing module 14 and a display module 15, wherein:
the sending module 11 is configured to send a mark information acquisition request to the first terminal device storing the first video stream, the acquisition request including: the identification information of the second video stream to be played on the second terminal device, and the timestamp of a second picture frame;
the first receiving module 12 is configured to receive a response message returned by the first terminal device and containing a first position region, the first position region being obtained by the first terminal device by acquiring the first video stream corresponding to the identification information, acquiring from the first video stream the first picture frame corresponding to the timestamp, and acquiring on the first picture frame the region corresponding to the target content specified in advance by the user, where the first picture frame is identical to the second picture frame;
the first positioning module 13 is configured to determine, according to the first position region, a second position region in which the target content is correspondingly displayed on the screen that displays the second picture frame;
the first processing module 14 is configured to generate a user interface (UI) layer, with the preset update content corresponding to the target content drawn in the part of the UI layer that matches and corresponds to the second position region;
the display module 15 is configured to overlay the UI layer on the second picture frame when the screen displays the second picture frame, so that the update content covers the target content and is shown to the user.
For the functions of the modules and the processing flow of the second terminal device provided by this embodiment, refer to the method embodiments described above; the implementation principles are similar and are not repeated here.
With the second terminal device provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of a second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp are obtained from the first terminal device, together with the first position region marked in advance on the first picture frame and corresponding to the target content specified in advance by the user; this first position region is then sent to the second terminal device, and the second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame the UI layer is overlaid on it, and the update content covers the target content and is shown to the user. When the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data; this avoids having to modify the original video stream data in advance according to the user's needs and occupying a large amount of storage space, and improves the flexibility and efficiency of personalized video playback.
Fig. 7 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 7, based on the embodiment shown in Fig. 6, the first positioning module 13 comprises: an adjustment unit 131 and a determining unit 132, wherein:
the adjustment unit 131 is configured to proportionally adjust multiple pieces of first coordinate information on the first position region according to the dimension ratio between the second picture frame and the screen, obtaining multiple pieces of second coordinate information corresponding to the multiple pieces of first coordinate information;
the determining unit 132 is configured to determine the second position region on the screen according to the multiple pieces of second coordinate information.
For the functions of the modules and the processing flow of the second terminal device provided by this embodiment, refer to the method embodiments described above; the implementation principles are similar and are not repeated here.
Fig. 8 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 8, based on the embodiment shown in Fig. 6, the first processing module 14 comprises: a first generation unit 141 and a first drawing unit 142, wherein:
the first generation unit 141 is configured to generate a UI layer whose boundary matches the boundary of the second position region;
the first drawing unit 142 is configured to draw the update content over the whole UI layer;
the display module 15 is configured to align the UI layer over and cover the second position region of the second picture frame used to display the target content.
For the functions of the modules and the processing flow of the second terminal device provided by this embodiment, refer to the method embodiments described above; the implementation principles are similar and are not repeated here.
The second terminal device provided by this embodiment uses the local processing mode of the UI layer: when the original video stream is played and this picture frame is displayed on the screen, the UI layer is aligned over and covers the second position region used to display the target content, so that the update content covers the target content and is shown to the user. When the video stream is played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data, improving processing efficiency and saving processing resources.
Fig. 9 is a block diagram of a second terminal device according to another exemplary embodiment. As shown in Fig. 9, on the basis of the embodiment shown in Fig. 6, the first processing module 14 comprises a second generation unit 143 and a second drawing unit 144, wherein:
Second generation unit 143, configured to generate a UI layer that coincides with the screen boundary;
Second drawing unit 144, configured to draw the update content in a third position region on the UI layer that coincides with the second position region, and to apply transparent processing to the part outside the third position region;
Display module 15, configured to cover the whole UI layer over the second picture frame.
For the function and processing flow of each module in the second terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
The second terminal device provided by this embodiment adopts the whole-layer processing mode of the UI layer: when the original video stream is played and the picture frame is displayed on the screen, the whole UI layer is laid over the picture frame, so that the update content covers the target content shown to the user. In this way, when the video stream is played, personalized video content that meets the user's needs can be presented to the user in real time without tampering with the video stream data, which improves processing efficiency and saves processing resources.
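A minimal sketch of the whole-layer mode, assuming RGBA pixels where alpha 0 means fully transparent (the representation and a single repeated update pixel are illustrative simplifications):

```python
def build_fullscreen_ui_layer(screen_size, third_region, update_pixel):
    """Generate a UI layer coinciding with the screen boundary: the update
    content (here a single repeated RGBA pixel, for brevity) is drawn inside
    the third position region that coincides with the second position region,
    and everything outside that region is left fully transparent, so the
    underlying second picture frame shows through when the whole layer is
    laid over it.

    screen_size: (width, height); third_region: (x0, y0, x1, y1).
    """
    w, h = screen_size
    x0, y0, x1, y1 = third_region
    transparent = (0, 0, 0, 0)
    layer = [[transparent for _ in range(w)] for _ in range(h)]
    for y in range(y0, y1):
        for x in range(x0, x1):
            layer[y][x] = update_pixel  # opaque update content
    return layer
```

The compositor then simply alpha-blends this screen-sized layer over every displayed frame, with no per-frame clipping logic.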
Figure 10 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Figure 10, the first terminal device comprises a detection module 21, a second locating module 22, a first acquisition module 23 and a second processing module 24, wherein:
Detection module 21, configured to detect the first picture frame in the first video stream and judge whether target content specified in advance by the user is present;
Second locating module 22, configured to, if it is judged that the target content is present, determine the first position region on the first picture frame corresponding to the target content, and mark it on the first picture frame;
First acquisition module 23, configured to, upon receiving a label information acquisition request sent by the second terminal device, the acquisition request comprising the identification information of a second video stream to be played and the timestamp of a second picture frame, obtain the first video stream corresponding to the identification information and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
Second processing module 24, configured to, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message comprising the first position region, so that the second terminal device generates a user interface (UI) layer according to the first position region and the preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
For the function and processing flow of each module in the first terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
With the first terminal device provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp are obtained from the first terminal device, the first position region marked in advance on the first picture frame and corresponding to the target content specified in advance by the user is obtained, and the first position region is then sent to the second terminal device. The second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame and the update content covers the target content shown to the user. In this way, when the video stream is played, personalized video content that meets the user's needs is presented to the user in real time without tampering with the video stream data; there is no need to modify the original video stream data in advance for each user or to occupy a large amount of storage space, which improves the flexibility and efficiency of personalized video playing and lightens the processing load of the playback terminal.
Figure 11 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Figure 11, on the basis of the embodiment shown in Figure 10, the second locating module 22 comprises a judging unit 221, a first determining unit 222 and a second determining unit 223, wherein:
Judging unit 221, configured to detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
First determining unit 222, configured to, if it is judged that the smoothness reaches the threshold, take the region boundary corresponding to the target content as the first position region;
Second determining unit 223, configured to, if it is judged that the smoothness does not reach the threshold, determine a smooth region corresponding to the region boundary, and take the smooth region as the first position region.
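One way to sketch the judging and determining units is shown below. The smoothness metric (inverse mean turning angle of the traced boundary) and the bounding-box fallback are assumptions made for illustration; the patent only requires some smoothness measure and some smooth region corresponding to the boundary:

```python
import math

def choose_first_region(boundary_points, threshold):
    """If the traced region boundary is smooth enough, use it directly as
    the first position region; otherwise fall back to a smooth region (here
    the axis-aligned bounding box) corresponding to that boundary."""
    angles = []
    for i in range(1, len(boundary_points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = boundary_points[i - 1:i + 2]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        angles.append(abs(a2 - a1))
    mean_turn = sum(angles) / max(len(angles), 1)
    smoothness = 1.0 / (mean_turn + 1e-9)  # straighter boundary -> larger value
    if smoothness >= threshold:
        return boundary_points                       # boundary is the region
    xs = [p[0] for p in boundary_points]
    ys = [p[1] for p in boundary_points]
    return [(min(xs), min(ys)), (max(xs), max(ys))]  # smooth fallback region
```

A nearly straight boundary passes the threshold and is used as-is; a jagged one collapses to its bounding box, which is cheaper for the second terminal device to turn into a UI layer.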
Figure 12 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Figure 12, on the basis of the embodiment shown in Figure 10, the detection module 21 comprises:
Acquiring unit 211, configured to obtain feature information in the first picture frame;
Recognition unit 212, configured to identify, according to a feature database, whether the feature information corresponds to the target content, wherein the feature database comprises sample feature information corresponding to the target content.
Further, the device also comprises a second receiver module 25, a second acquisition module 26 and a memory module 27, wherein:
Second receiver module 25, configured to receive picture frames of multiple video streams;
Second acquisition module 26, configured to obtain, for each picture frame, the sample feature information corresponding to the sample content pre-set by the user;
Memory module 27, configured to store the correspondence between the sample feature information and the sample content in the feature database.
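The sample-storage and recognition flow can be sketched as below; the nearest-neighbour matching with a Euclidean distance threshold is an assumption for illustration, not the patent's prescribed matcher, and all names are hypothetical:

```python
class FeatureDatabase:
    """Minimal feature database: stores one sample feature vector per piece
    of sample content, and recognizes a query feature as the nearest stored
    sample within a distance threshold (or None if nothing matches)."""

    def __init__(self, threshold=1.0):
        self.samples = {}          # sample content label -> feature vector
        self.threshold = threshold

    def store(self, content, feature):
        """Store the correspondence between sample content and its feature."""
        self.samples[content] = feature

    def recognize(self, feature):
        """Return the sample content whose feature is closest to the query,
        provided the distance is within the threshold."""
        best, best_d = None, float("inf")
        for content, sample in self.samples.items():
            d = sum((a - b) ** 2 for a, b in zip(feature, sample)) ** 0.5
            if d < best_d:
                best, best_d = content, d
        return best if best_d <= self.threshold else None
```

The detection module can then treat a non-None result as "the target content is present in the first picture frame".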
For the function and processing flow of each module in the first terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
In the first terminal device provided by this embodiment, feature-information matching is adopted for detecting the target content in the first picture frame, and a locating mode based on an image boundary tracking algorithm is adopted for locating the first position region corresponding to the target content on the first picture frame. The first position region is sent to the second terminal device as required, so that the second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content, and covers the target content with the update content through the UI layer. In this way, when the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized video playing. Since the first position region is located on the first terminal device, the second terminal device can query it in real time and directly generate the UI layer, which improves processing efficiency; the centralized detection also saves detection resources.
Figure 13 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Figure 13, on the basis of the embodiment shown in Figure 12, the acquiring unit 211 comprises a first processing subunit 2111 and a first extracting subunit 2112, wherein:
First processing subunit 2111, configured to, if the target content is a first pattern, determine the pattern area on the first picture frame according to a boundary contour algorithm;
First extracting subunit 2112, configured to extract pattern features from the pattern area;
Recognition unit 212, configured to match the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
if the matching succeeds, it is judged that the first pattern is present in the pattern area;
if the matching fails, it is judged that the first pattern is not present in the pattern area.
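A hedged sketch of the matching step performed by recognition unit 212 for pattern features; cosine similarity and the 0.8 threshold are illustrative choices, not specified by the patent:

```python
def match_pattern(pattern_feature, sample_feature, threshold=0.8):
    """Match an extracted pattern feature against the stored sample pattern
    feature by cosine similarity; success means the first pattern is judged
    to be present in the pattern area."""
    dot = sum(a * b for a, b in zip(pattern_feature, sample_feature))
    na = sum(a * a for a in pattern_feature) ** 0.5
    nb = sum(b * b for b in sample_feature) ** 0.5
    similarity = dot / (na * nb) if na and nb else 0.0
    return similarity >= threshold
```

Cosine similarity is scale-invariant, so a pattern rendered larger or brighter than the sample can still match, which suits feature vectors extracted from differently sized pattern areas.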
For the function and processing flow of each module in the first terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
The first terminal device provided by this embodiment adopts the detection mode of pattern feature information matching for the application scenario in which the target content specified by the user is a first pattern whose distribution area on the picture frame is unique, which improves processing efficiency.
Figure 14 is a block diagram of a first terminal device according to another exemplary embodiment. As shown in Figure 14, on the basis of the embodiment shown in Figure 12, the acquiring unit 211 comprises a second processing subunit 2113 and a second extracting subunit 2114, wherein:
Second processing subunit 2113, configured to, if the target content is a first character's face, determine the facial region on the picture frame according to the facial feature scope obtained by training in advance;
Second extracting subunit 2114, configured to extract facial features from the facial region;
Recognition unit 212, configured to match the facial features against the sample facial features corresponding to the first character's face in the feature database;
if the matching succeeds, it is judged that the first character's face is present in the facial region;
if the matching fails, it is judged that the first character's face is not present in the facial region.
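The "facial feature scope obtained by training in advance" can be caricatured as a simple intensity range scanned with a sliding window, as below. A real implementation would use a trained detector (for example a cascade classifier), so every name and the window logic here are assumptions:

```python
def detect_face_region(frame_gray, feature_range, win=2):
    """Scan a grayscale frame with a sliding window and report windows
    whose mean intensity falls inside a pre-trained feature range, as a
    toy stand-in for candidate facial regions.

    frame_gray: 2-D list of intensities; feature_range: (lo, hi);
    win: square window side. Returns (x0, y0, x1, y1) regions.
    """
    lo, hi = feature_range
    h, w = len(frame_gray), len(frame_gray[0])
    regions = []
    for y in range(0, h - win + 1):
        for x in range(0, w - win + 1):
            vals = [frame_gray[y + dy][x + dx]
                    for dy in range(win) for dx in range(win)]
            if lo <= sum(vals) / len(vals) <= hi:
                regions.append((x, y, x + win, y + win))
    return regions
```

Each candidate region would then be passed to the feature extraction and matching steps described above.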
For the function and processing flow of each module in the first terminal device provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
The first terminal device provided by this embodiment adopts the detection mode of facial feature information matching for the application scenario in which the target content specified by the user is a first character's face whose distribution areas on the picture frame are dispersed, which improves processing efficiency.
Figure 15 is a block diagram of a playing control system according to an exemplary embodiment. As shown in Figure 15, the playing control system comprises a second terminal device 1 and a first terminal device 2, where the second terminal device 1 and the first terminal device 2 may be the second terminal device and the first terminal device provided in the above embodiments.
For the function and processing flow of each module in the playing control system provided by this embodiment, reference may be made to the method embodiments described above; the implementation principles are similar and are not repeated here.
With the playing control system provided by this embodiment, according to the identification information of the second video stream to be played and the timestamp of the second picture frame, the first video stream corresponding to the identification information and the first picture frame corresponding to the timestamp are obtained from the first terminal device, the first position region marked in advance on the first picture frame and corresponding to the target content specified in advance by the user is obtained, and the first position region is then sent to the second terminal device. The second terminal device generates a UI layer according to the first position region and the update content corresponding to the target content, so that when the screen displays the second picture frame, the UI layer covers the second picture frame and the update content covers the target content shown to the user. In this way, when the video stream is played, personalized video content that meets the user's needs is presented in real time without tampering with the video stream data, which improves the flexibility and efficiency of personalized video playing and lightens the processing load of the playback terminal.
Figure 16 is a block diagram of a terminal device according to an exemplary embodiment. For example, the terminal device 1300 may be a mobile phone, a computer, a tablet device, or the like.
With reference to Figure 16, the terminal device 1300 may comprise one or more of the following components: a processing component 1302, a memory 1304, a power supply component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls the overall operation of the terminal device 1300, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1302 may comprise one or more processors 1320 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 1302 may comprise one or more modules to facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may comprise a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support the operation of the terminal device 1300. Examples of such data include instructions for any application or method operated on the terminal device 1300, contact data, phonebook data, messages, pictures, videos, and the like. The memory 1304 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 1306 provides electric power for the various components of the terminal device 1300. The power supply component 1306 may comprise a power management system, one or more power supplies, and other components associated with generating, managing, and distributing electric power for the terminal device 1300.
The multimedia component 1308 comprises a touch display screen providing an output interface between the terminal device 1300 and the user. In some embodiments, the touch display screen may comprise a liquid crystal display (LCD) and a touch panel (TP). The touch panel comprises one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1308 comprises a front camera and/or a rear camera. When the terminal device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 comprises a microphone (MIC); when the terminal device 1300 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signal may be further stored in the memory 1304 or sent via the communication component 1316. In some embodiments, the audio component 1310 also comprises a loudspeaker configured to output audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 1314 comprises one or more sensors configured to provide state assessments of various aspects for the terminal device 1300. For example, the sensor component 1314 can detect the open/closed state of the terminal device 1300 and the relative positioning of components, such as the display and keypad of the terminal device 1300; the sensor component 1314 can also detect a change in position of the terminal device 1300 or one of its components, the presence or absence of user contact with the terminal device 1300, the orientation or acceleration/deceleration of the terminal device 1300, and changes in its temperature. The sensor component 1314 may comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also comprise an optical sensor, such as a CMOS or CCD image sensor, configured for use in imaging applications. In some embodiments, the sensor component 1314 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal device 1300 and other devices. The terminal device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1316 also comprises a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 1300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, configured to perform the above playing control method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 1304 comprising instructions; the above instructions can be executed by the processor 1320 of the terminal device 1300 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium, wherein, when the instructions in the storage medium are executed by the processor of the terminal device 1300, the terminal device 1300 is enabled to perform a playing control method.
Those skilled in the art will easily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (25)

1. A playing control method, characterized in that the method comprises:
sending a label information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising the identification information of a second video stream to be played on a second terminal device and the timestamp of a second picture frame;
receiving a response message returned by the first terminal device and comprising a first position region, the first position region being the region, corresponding to target content specified in advance by a user, that the first terminal device obtains on a first picture frame after obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
determining, according to the first position region, a second position region, on the screen displaying the second picture frame, in which the target content is correspondingly displayed;
generating a user interface (UI) layer, and drawing preset update content corresponding to the target content on the part of the UI layer that coincides with the second position region;
when the screen displays the second picture frame, covering the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
2. The method according to claim 1, characterized in that determining, according to the first position region, the second position region, on the screen displaying the second picture frame, in which the target content is correspondingly displayed, comprises:
scaling multiple first coordinate information items of the first position region in proportion according to the size ratio between the second picture frame and the screen, to obtain multiple second coordinate information items corresponding to the multiple first coordinate information items;
determining the second position region on the screen according to the multiple second coordinate information items.
3. The method according to claim 1, characterized in that:
generating the user interface (UI) layer comprises:
generating a UI layer coinciding with the boundary of the second position region;
drawing the preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region comprises:
drawing the update content over the whole UI layer;
covering the second picture frame with the UI layer comprises:
covering, with the UI layer, the second position region used for displaying the target content of the second picture frame, so that the two coincide.
4. The method according to claim 1, characterized in that:
generating the user interface (UI) layer comprises:
generating a UI layer coinciding with the screen boundary;
drawing the preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region comprises:
drawing the update content in a third position region on the UI layer coinciding with the second position region, and applying transparent processing to the part outside the third position region;
covering the second picture frame with the UI layer comprises:
covering the whole UI layer over the second picture frame.
5. A playing control method, characterized in that the method comprises:
detecting a first picture frame in a first video stream and judging whether target content specified in advance by a user is present;
if it is judged that the target content is present, determining a first position region on the first picture frame corresponding to the target content, and marking it on the first picture frame;
upon receiving a label information acquisition request sent by a second terminal device, the acquisition request comprising the identification information of a second video stream to be played and the timestamp of a second picture frame, obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, returning to the second terminal device a response message comprising the first position region, so that the second terminal device generates a user interface (UI) layer according to the first position region and preset update content corresponding to the target content, and then, when the screen displays the second picture frame, covers the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
6. The method according to claim 5, characterized in that determining the first position region on the first picture frame corresponding to the target content comprises:
detecting, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
if it is judged that the smoothness reaches the threshold, taking the region boundary corresponding to the target content as the first position region;
if it is judged that the smoothness does not reach the threshold, determining a smooth region corresponding to the region boundary, and taking the smooth region as the first position region.
7. The method according to claim 5 or 6, characterized in that detecting the first picture frame in the first video stream and judging whether the target content specified in advance by the user is present comprises:
obtaining feature information in the first picture frame;
identifying, according to a feature database, whether the feature information corresponds to the target content, wherein the feature database comprises sample feature information corresponding to the target content.
8. The method according to claim 7, characterized in that the target content specified in advance by the user comprises:
at least one of a character's face, clothing, a color, text, or a pattern.
9. The method according to claim 7, characterized in that, before obtaining the feature information in the first picture frame, the method further comprises:
receiving picture frames of multiple video streams;
obtaining, for each picture frame, the sample feature information corresponding to the sample content pre-set by the user;
storing the correspondence between the sample feature information and the sample content in the feature database.
10. The method according to claim 8, characterized in that, if the target content is a first pattern, obtaining the feature information in the first picture frame comprises:
determining the pattern area on the first picture frame according to a boundary contour algorithm;
extracting pattern features from the pattern area;
and identifying, according to the feature database, whether the feature information corresponds to the target content comprises:
matching the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
if the matching succeeds, judging that the first pattern is present in the pattern area;
if the matching fails, judging that the first pattern is not present in the pattern area.
11. The method according to claim 8, characterized in that, if the target content is a first character's face, obtaining the feature information in the first picture frame comprises:
determining the facial region on the picture frame according to the facial feature scope obtained by training in advance;
extracting facial features from the facial region;
and identifying, according to the feature database, whether the feature information corresponds to the target content comprises:
matching the facial features against the sample facial features corresponding to the first character's face in the feature database;
if the matching succeeds, judging that the first character's face is present in the facial region;
if the matching fails, judging that the first character's face is not present in the facial region.
12. A second terminal device, characterized in that the device comprises:
a sending module, configured to send a label information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising the identification information of a second video stream to be played on the second terminal device and the timestamp of a second picture frame;
a first receiver module, configured to receive a response message returned by the first terminal device and comprising a first position region, the first position region being the region, corresponding to target content specified in advance by a user, that the first terminal device obtains on a first picture frame after obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
a first locating module, configured to determine, according to the first position region, a second position region, on the screen displaying the second picture frame, in which the target content is correspondingly displayed;
a first processing module, configured to generate a user interface (UI) layer and draw preset update content corresponding to the target content on the part of the UI layer coinciding with the second position region;
a display module, configured to, when the screen displays the second picture frame, cover the second picture frame with the UI layer, so that the update content covers the target content shown to the user.
13. The device according to claim 12, wherein the first locating module comprises:
an adjusting unit configured to proportionally adjust, according to the dimension ratio between the second picture frame and the screen, a plurality of first coordinates of the first position region to obtain a plurality of corresponding second coordinates;
a determining unit configured to determine the second position region on the screen according to the plurality of second coordinates.
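The proportional coordinate adjustment recited in this claim can be sketched as follows. The tuple-based region representation and the function name are illustrative assumptions:

```python
def map_region_to_screen(first_coords, frame_size, screen_size):
    """Proportionally adjust each (x, y) coordinate of the first position
    region by the screen-to-frame dimension ratio, yielding the coordinates
    of the second position region on the screen."""
    frame_w, frame_h = frame_size
    screen_w, screen_h = screen_size
    return [(x * screen_w / frame_w, y * screen_h / frame_h)
            for x, y in first_coords]
```

For example, a region vertex at (100, 50) in a 1920x1080 frame lands at (50.0, 25.0) on a 960x540 screen.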
14. The device according to claim 12, wherein the first processing module comprises:
a first generating unit configured to generate a UI layer whose border coincides with the second position region;
a first drawing unit configured to draw the update content over the entire UI layer;
and the display module is configured to overlay the UI layer so that it exactly covers the second position region in which the target content of the second picture frame is displayed.
15. The device according to claim 12, wherein the first processing module comprises:
a second generating unit configured to generate a UI layer whose border coincides with the screen border;
a second drawing unit configured to draw the update content in a third position region of the UI layer that coincides with the second position region, and to make the part outside the third position region transparent;
and the display module is configured to overlay the entire UI layer on the second picture frame.
16. A first terminal device, wherein the device comprises:
a detecting module configured to detect, in a first picture frame of a first video stream, whether target content specified in advance by a user is present;
a second locating module configured to, if the target content is determined to be present, determine a first position region on the first picture frame corresponding to the target content, and mark it on the first picture frame;
a first obtaining module configured to, upon receiving a marker information acquisition request sent by a second terminal device, the acquisition request comprising the identification information of a second video stream to be played and the timestamp of a second picture frame, obtain the first video stream corresponding to the identification information and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
a second processing module configured to, if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message comprising the first position region, so that the second terminal device generates a user interface (UI) layer according to the first position region and a preset update content corresponding to the target content, and then, when the screen displays the second picture frame, overlays the UI layer on the second picture frame, so that the update content covers the target content and is shown to the user.
17. The device according to claim 16, wherein the second locating module comprises:
a judging unit configured to detect, based on an image boundary tracking algorithm, whether the smoothness of the region boundary corresponding to the target content reaches a preset threshold;
a first determining unit configured to, if the smoothness is determined to reach the threshold, take the region boundary corresponding to the target content as the first position region;
a second determining unit configured to, if the smoothness is determined not to reach the threshold, determine a smooth region corresponding to the region boundary and take the smooth region as the first position region.
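The threshold-based fallback recited in this claim can be sketched as follows. Taking the smoothness value as a precomputed input, and representing the "smooth region" as the boundary's axis-aligned bounding box, are illustrative assumptions:

```python
def choose_position_region(boundary, smoothness, threshold=0.7):
    """If the boundary's measured smoothness reaches the threshold, use
    the traced region boundary itself as the first position region;
    otherwise fall back to a smooth region - here, the boundary's
    axis-aligned bounding box, listed clockwise from the top-left."""
    if smoothness >= threshold:
        return boundary
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (max(xs), max(ys)), (min(xs), max(ys))]
```

The bounding-box fallback keeps the transmitted region compact for jagged boundaries, at the cost of covering slightly more of the frame than the target content itself.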
18. The device according to claim 16 or 17, wherein the detecting module comprises:
an obtaining unit configured to obtain feature information in the first picture frame;
a recognizing unit configured to identify, according to a feature database, whether the feature information is the target content, wherein the feature database comprises sample feature information corresponding to the target content.
19. The device according to claim 18, wherein the target content specified in advance by the user comprises at least one of:
a person's face, clothing, a color, text, or a pattern.
20. The device according to claim 18, wherein, before the feature information in the first picture frame is obtained, the device further comprises:
a second receiving module configured to receive picture frames of a plurality of video streams;
a second obtaining module configured to obtain, in each picture frame, the sample feature information corresponding to sample content preset by the user;
a storing module configured to store the correspondence between the sample feature information and the sample content in the feature database.
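The storing module's correspondence between sample content and its sample feature information can be sketched as a minimal keyed store; the class and method names are illustrative assumptions:

```python
class FeatureDatabase:
    """Minimal store for the correspondence between each sample content
    name and its sample feature information, as built up from the picture
    frames of multiple video streams."""

    def __init__(self):
        self._db = {}

    def add(self, sample_content, sample_features):
        # Keep one feature vector per sample content name; a feature
        # extracted from a later frame replaces the earlier one.
        self._db[sample_content] = sample_features

    def lookup(self, sample_content):
        # Returns None when no sample features were stored for this content.
        return self._db.get(sample_content)
```

The recognizing unit of claim 18 would call `lookup` with the target content's name and match the returned sample features against those extracted from the first picture frame.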
21. The device according to claim 19, wherein the obtaining unit comprises:
a first processing subunit configured to, if the target content is a first pattern, determine the pattern region on the first picture frame according to a boundary contour algorithm;
a first extracting subunit configured to extract pattern features from the pattern region;
and the recognizing unit is configured to match the pattern features against the sample pattern features corresponding to the first pattern in the feature database;
if the match succeeds, determine that the first pattern is present in the pattern region;
if the match fails, determine that the first pattern is not present in the pattern region.
22. The device according to claim 19, wherein the obtaining unit comprises:
a second processing subunit configured to, if the target content is a first person's face, determine the facial region on the picture frame according to a facial feature range obtained by training in advance;
a second extracting subunit configured to extract facial features from the facial region;
and the recognizing unit is configured to match the facial features against the sample facial features corresponding to the first person's face in the feature database;
if the match succeeds, determine that the first person's face is present in the facial region;
if the match fails, determine that the first person's face is not present in the facial region.
23. A playing control system, wherein the system comprises the second terminal device according to any one of claims 12-15 and the first terminal device according to any one of claims 16-22.
24. A second terminal device, wherein the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
send a marker information acquisition request to a first terminal device storing a first video stream, the acquisition request comprising the identification information of a second video stream to be played on the second terminal device and the timestamp of a second picture frame;
receive a response message returned by the first terminal device that comprises a first position region, the first position region being the region, corresponding to target content specified in advance by a user, that the first terminal device obtains on a first picture frame after obtaining the first video stream corresponding to the identification information and obtaining, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
determine, according to the first position region, a second position region on the screen displaying the second picture frame in which the target content is correspondingly displayed;
generate a user interface (UI) layer in which a preset update content corresponding to the target content is drawn in the part that coincides with the second position region;
when the screen displays the second picture frame, overlay the UI layer on the second picture frame, so that the update content covers the target content and is shown to the user.
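The request/response exchange between the two terminals recited in this claim can be sketched with two plain message types; the field names and the millisecond timestamp unit are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MarkerRequest:
    """Marker information acquisition request sent by the second terminal
    device to the first terminal device."""
    stream_id: str     # identification information of the second video stream
    timestamp_ms: int  # timestamp of the second picture frame

@dataclass
class MarkerResponse:
    """Response returned by the first terminal device: the first position
    region marked on the first picture frame for the target content."""
    first_position_region: List[Tuple[int, int]]  # region vertex coordinates
```

Because only the region coordinates cross the wire, the video stream data itself is never modified; the second terminal composites the update content locally.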
25. A first terminal device, wherein the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect, in a first picture frame of a first video stream, whether target content specified in advance by a user is present;
if the target content is determined to be present, determine a first position region on the first picture frame corresponding to the target content, and mark it on the first picture frame;
upon receiving a marker information acquisition request sent by a second terminal device, the acquisition request comprising the identification information of a second video stream to be played and the timestamp of a second picture frame, obtain the first video stream corresponding to the identification information and obtain, from the first video stream, the first picture frame corresponding to the timestamp, wherein the first picture frame is identical to the second picture frame;
if the first position region corresponding to the target content specified in advance by the user can be obtained from the first picture frame, return to the second terminal device a response message comprising the first position region, so that the second terminal device generates a user interface (UI) layer according to the first position region and a preset update content corresponding to the target content, and then, when the screen displays the second picture frame, overlays the UI layer on the second picture frame, so that the update content covers the target content and is shown to the user.
CN201510210500.2A 2015-04-29 2015-04-29 Playing control method and system, and terminal device Active CN104883603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510210500.2A CN104883603B (en) Playing control method and system, and terminal device


Publications (2)

Publication Number Publication Date
CN104883603A true CN104883603A (en) 2015-09-02
CN104883603B CN104883603B (en) 2018-04-27

Family

ID=53950911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510210500.2A Active CN104883603B (en) 2015-04-29 2015-04-29 Control method for playing back, system and terminal device

Country Status (1)

Country Link
CN (1) CN104883603B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028097A (en) * 2015-12-09 2016-10-12 展视网(北京)科技有限公司 Vehicle-mounted terminal movie play device
CN108713322A (en) * 2016-04-01 2018-10-26 英特尔公司 Videos with optional marker overlay secondary images
CN109963106A (en) * 2019-03-29 2019-07-02 宇龙计算机通信科技(深圳)有限公司 A video image processing method, device, storage medium and terminal
CN112583976A (en) * 2020-12-29 2021-03-30 咪咕文化科技有限公司 Graphic code display method, equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch
CN103634503A (en) * 2013-12-16 2014-03-12 苏州大学 Video manufacturing method based on face recognition and behavior recognition
CN104376589A (en) * 2014-12-04 2015-02-25 青岛华通国有资本运营(集团)有限责任公司 Method for replacing movie and TV play figures


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Wang et al., "Video Collage: A Novel Presentation of Video Sequence", Proceedings of the 15th International Conference on Multimedia *




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by SIPO to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant