CN108833992A - Caption presentation method and device - Google Patents
- Publication number
- CN108833992A CN108833992A CN201810700364.9A CN201810700364A CN108833992A CN 108833992 A CN108833992 A CN 108833992A CN 201810700364 A CN201810700364 A CN 201810700364A CN 108833992 A CN108833992 A CN 108833992A
- Authority
- CN
- China
- Prior art keywords
- content
- subtitle
- type
- onomatopoeia
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
Abstract
The present disclosure relates to a caption presentation method and device. The method includes: determining, according to the content type of the subtitle of a video picture in a target video, the display content and the display area of the subtitle of the video picture, the content types including an onomatopoeia type, where a subtitle of the onomatopoeia type contains onomatopoeic content; and controlling a terminal to show the display content of the subtitle in the display area of the video picture while the target video is playing. According to embodiments of the present disclosure, the display content and the display area of a subtitle can differ according to the subtitle's content type, meeting users' viewing needs for subtitles of different content types.
Description
Technical field
The present disclosure relates to the field of computer technology, and in particular to a caption presentation method and device.
Background technique
With the continuous development of science and technology, users can watch the videos they like anytime and anywhere through various terminals (for example, mobile phones), and rely on subtitles to understand the video content. In the related art, however, subtitles are displayed in a fixed, uniform way during video playback, which cannot satisfy users' needs for viewing subtitles in videos.
Summary of the invention
In view of this, the present disclosure proposes a caption presentation method and device that can satisfy users' viewing needs for subtitles of different content types.
According to one aspect of the present disclosure, a caption presentation method is provided. The method includes:
determining, according to the content type of the subtitle of a video picture in a target video, the display content and the display area of the subtitle of the video picture, the content types including an onomatopoeia type, where a subtitle of the onomatopoeia type contains onomatopoeic content; and
controlling a terminal to show the display content of the subtitle in the display area of the video picture while the target video is playing.
In one possible implementation, the method further includes:
determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitle, the video frame, and the audio content of the target video.
In one possible implementation, determining the display content and the display area of the subtitle of the video picture according to the content type of the subtitle of the video picture in the target video includes:
when the content type is the onomatopoeia type, determining a target object in the video picture that corresponds to the onomatopoeic content of the subtitle; and
determining the display area according to the region the target object occupies in the video picture.
In one possible implementation, determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitle, the video frame, and the audio content of the target video includes:
performing onomatopoeic-content recognition on at least one of the initial content of the subtitle, the video frame and the audio content, to obtain a recognition result; and
when the recognition result includes onomatopoeic content, determining that the content type of the subtitle is the onomatopoeia type.
In one possible implementation, the method further includes:
determining the display content of the subtitle of the video picture according to the onomatopoeic content in the recognition result.
In one possible implementation, the content types further include a non-onomatopoeia type,
where determining the content type of the subtitle of the video picture according to at least one of the initial content of the subtitle, the video frame and the audio content of the target video further includes:
when the recognition result does not include onomatopoeic content, determining that the content type of the subtitle is the non-onomatopoeia type.
According to another aspect of the present disclosure, a subtitle display device is provided. The device includes:
a determining module, configured to determine, according to the content type of the subtitle of a video picture in a target video, the display content and the display area of the subtitle of the video picture, the content types including an onomatopoeia type, where a subtitle of the onomatopoeia type contains onomatopoeic content; and
a control module, configured to control a terminal to show the display content of the subtitle in the display area of the video picture while the target video is playing.
In one possible implementation, the device further includes:
a content type determining module, configured to determine the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitle, the video frame, and the audio content of the target video.
In one possible implementation, the determining module includes:
a first determining submodule, configured to determine, when the content type is the onomatopoeia type, a target object in the video picture that corresponds to the onomatopoeic content of the subtitle; and
a second determining submodule, configured to determine the display area according to the region the target object occupies in the video picture.
In one possible implementation, the content type determining module includes:
a result obtaining submodule, configured to perform onomatopoeic-content recognition on at least one of the initial content of the subtitle, the video frame and the audio content, to obtain a recognition result; and
a third determining submodule, configured to determine, when the recognition result includes onomatopoeic content, that the content type of the subtitle is the onomatopoeia type.
In one possible implementation, the device further includes:
a display content determining module, configured to determine the display content of the subtitle of the video picture according to the onomatopoeic content in the recognition result.
In one possible implementation, the content types further include a non-onomatopoeia type,
where the content type determining module further includes:
a fourth determining submodule, configured to determine, when the recognition result does not include onomatopoeic content, that the content type of the subtitle is the non-onomatopoeia type.
According to another aspect of the present disclosure, a subtitle display device is provided, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to perform the above method.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above caption presentation method.
According to embodiments of the present disclosure, the display content and the display area of the subtitle of a video picture can be determined according to the content type of the subtitle of the video picture in the target video, the content types including an onomatopoeia type, and the terminal can be controlled to show the display content of the subtitle in the display area of the video picture while the target video is playing. The display content and display area of a subtitle can therefore differ according to the subtitle's content type, meeting users' viewing needs for subtitles of different content types.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 2 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 3 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 4 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 5 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 6 is a flowchart of a caption presentation method according to an exemplary embodiment.
Fig. 7 is a schematic diagram of an application scenario of a caption presentation method according to an exemplary embodiment.
Fig. 8 is a block diagram of a subtitle display device according to an exemplary embodiment.
Fig. 9 is a block diagram of a subtitle display device according to an exemplary embodiment.
Fig. 10 is a block diagram of a subtitle display device according to an exemplary embodiment.
Fig. 11 is a block diagram of a subtitle display device according to an exemplary embodiment.
Specific embodiment
Various exemplary embodiments, features and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred over, or advantageous compared with, other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can also be practiced without certain of these details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 is a flowchart of a caption presentation method according to an exemplary embodiment. The method can be applied to a terminal device (for example, a mobile phone or tablet computer) or a server; the present disclosure does not restrict this. As shown in Fig. 1, the caption presentation method according to an embodiment of the present disclosure includes:
In step S11, determining, according to the content type of the subtitle of a video picture in a target video, the display content and the display area of the subtitle of the video picture, the content types including an onomatopoeia type, where a subtitle of the onomatopoeia type contains onomatopoeic content;
In step S12, controlling a terminal to show the display content of the subtitle in the display area of the video picture while the target video is playing.
According to embodiments of the present disclosure, the display content and the display area of the subtitle of a video picture can be determined according to the content type of the subtitle of the video picture in the target video, the content types including an onomatopoeia type, and the terminal can be controlled to show the display content of the subtitle in the display area of the video picture during playback. The display content and display area of a subtitle can thus differ according to the subtitle's content type, meeting users' viewing needs for subtitles of different content types.
The target video can be any video a user may watch, such as a film, a TV series or a variety show. The subtitle of a video picture can be a pre-made subtitle (for example, one produced offline by staff), or a subtitle determined from word content recognized online in the video picture. The content type of a subtitle is a classification of the subtitle's content; for example, subtitles can be divided into an onomatopoeia type and a non-onomatopoeia type according to whether they contain onomatopoeic content. Onomatopoeic content consists of words coined to imitate natural sounds, for example "vroom" (simulating the sound of a car starting), "gurgle" (simulating the sound of waves), "creak~" (simulating the sound of a door opening) and "drip" (simulating the sound of water droplets). The subtitle of a video picture may include one or more display contents, and the display area corresponding to each display content can be determined; the present disclosure does not restrict this.
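For illustration only (the patent specifies no concrete data model), the notions above — a subtitle carrying a content type, one or more display contents, and a display area per display content — can be sketched as a small data structure. All names and coordinates here are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from enum import Enum

class ContentType(Enum):
    ONOMATOPOEIA = "onomatopoeia"          # subtitle imitates a sound, e.g. "creak~"
    NON_ONOMATOPOEIA = "non_onomatopoeia"  # ordinary content, e.g. dialogue

@dataclass
class SubtitleItem:
    display_content: str     # the text actually rendered
    content_type: ContentType
    display_area: tuple      # (x, y, width, height) within the video picture

# One video picture may carry several subtitle items, each with its own area:
frame_subtitles = [
    SubtitleItem("creak~", ContentType.ONOMATOPOEIA, (420, 80, 120, 40)),           # at the door frame
    SubtitleItem("I like you", ContentType.NON_ONOMATOPOEIA, (200, 640, 560, 60)),  # lower area
]
```

A renderer would then draw each item's `display_content` inside its `display_area`, matching the two-step method of steps S11 and S12.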
In an illustrative application scenario, a user wishes to play a target video A, for example through a terminal (such as a mobile phone). The content types of the subtitles of the video pictures in target video A may include the onomatopoeia type. While the user's mobile phone plays target video A, the subtitle of a certain video picture may include the door-opening sound "creak~"; this onomatopoeia-type subtitle is shown at the door frame in the video picture. The subtitle of the same picture may also include content of another type, for example a dialogue between two people (of the non-onomatopoeia type), which may be shown in the lower area of the video picture.
In this way, subtitles are combined with the scene of the video picture, the display modes of subtitles are enriched, and the user is more easily immersed in the plot. Moreover, the display content and display area of a subtitle can differ according to the subtitle's content type, meeting users' needs for viewing subtitles of different content types.
For example, the display content and the display area of the subtitle of the video picture can be determined according to the content type of the subtitle of the video picture in the target video, the content types including the onomatopoeia type.
It should be understood that the display content and display area of the subtitle of the video picture can be determined before or while target video A is played, according to the content type of the subtitle of the video picture in target video A.
The content type of the subtitle of a video picture in target video A can be determined manually in advance (for example, during offline subtitle production, the content type — onomatopoeia type or non-onomatopoeia type — is determined from the content of the subtitle in the video picture). Alternatively, the content type can be determined automatically, in advance or in real time, from the content of the subtitle in the video picture of the target video (for example, by performing speech recognition on the audio content of the target video in real time to obtain a recognition result, determining the subtitle and its content from the recognition result, and determining the content type from the subtitle's content). The content type only needs to be determined before the display content and display area of the subtitle are determined from it; the present disclosure does not restrict the time, subject or manner of determining the content type of the subtitle.
Fig. 2 is a flowchart of a caption presentation method according to an exemplary embodiment. In one possible implementation, as shown in Fig. 2, the method further includes:
In step S13, determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitle, the video frame, and the audio content of the target video.
For example, at least one of the initial content of the subtitle, the video frame and the audio content of the target video can be analyzed, and the content type of the subtitle of the video picture in the target video can be determined according to the analysis result.
For example, if the initial content of a subtitle of the target video includes the onomatopoeic sea-wave sound "gurgle", the content type of that "gurgle" subtitle can be determined (as the onomatopoeia type). Alternatively, the video frame of the target video can be analyzed, for example by performing text recognition on the video frame (recognizing the word "gurgle"), and the content type of the subtitle of the video picture can be determined from the text recognition result ("gurgle" being onomatopoeic content, the type is the onomatopoeia type). Alternatively, speech recognition can be performed on the audio content of the target video; for example, if a door-opening sound is recognized in the audio content corresponding to a video picture, the content type of the subtitle of that video picture (for example, "creak~") can be determined as the onomatopoeia type. It should be understood that several of the initial content of the subtitle, the video frame and the audio content of the target video can be combined to determine the content type of the subtitle of the video picture; the present disclosure does not restrict the manner of determining the content type from at least one of these sources.
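As a minimal sketch of the text-based branch of this step — and only under the assumption that onomatopoeic words can be matched against a lexicon, which the patent does not mandate — a subtitle string (whether taken from the initial subtitle file, from OCR on a frame, or from ASR on the audio) could be classified like this; the lexicon and function names are illustrative:

```python
# Illustrative onomatopoeia lexicon; a real system might use a trained
# classifier instead of a fixed word list.
ONOMATOPOEIA_LEXICON = {"gurgle", "creak", "vroom", "drip", "rustle"}

def classify_subtitle(text: str) -> str:
    """Return 'onomatopoeia' if any word of the subtitle text is onomatopoeic."""
    words = text.lower().replace("~", "").split()
    if any(w.strip(".,!?") in ONOMATOPOEIA_LEXICON for w in words):
        return "onomatopoeia"
    return "non_onomatopoeia"

print(classify_subtitle("creak~"))      # onomatopoeia
print(classify_subtitle("I like you"))  # non_onomatopoeia
```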
Fig. 3 is a flowchart of a caption presentation method according to an exemplary embodiment. In one possible implementation, as shown in Fig. 3, step S13 may include:
In step S131, performing onomatopoeic-content recognition on at least one of the initial content of the subtitle, the video frame and the audio content, to obtain a recognition result;
In step S132, when the recognition result includes onomatopoeic content, determining that the content type of the subtitle is the onomatopoeia type.
Here, onomatopoeic-content recognition can refer to identifying onomatopoeic content through various recognition techniques (for example, one or more of text recognition, image recognition and speech recognition).
For example, speech recognition can be performed on the audio content, for example through automatic speech recognition (ASR) technology. Alternatively, a neural network usable for speech recognition can be obtained by training with a deep learning algorithm, and the trained neural network can be used to perform speech recognition on the audio content, for example to obtain the text result corresponding to the audio content. Text recognition can also be performed on at least one of the initial content of the subtitle and the video frame, for example using optical character recognition (OCR) technology. Furthermore, image recognition can be performed on the video frame; for example, face recognition and object recognition can be performed on the video frame through a face recognition model or an object recognition model trained with a neural network, to obtain corresponding recognition results. Various recognition techniques can also be combined to perform onomatopoeic-content recognition on at least one of the initial content of the subtitle, the video frame and the audio content, to obtain a recognition result.
For example, multiple recognition results (for example, one per subtitle) can be obtained for each video picture of the target video. When a recognition result includes onomatopoeic content, the content type of the subtitle is determined to be the onomatopoeia type; for example, if a certain recognition result is "gurgle", the content type of that subtitle is the onomatopoeia type.
In this way, the content type of the subtitle of a video picture can be determined quickly and accurately. The present disclosure does not restrict the manner of performing onomatopoeic-content recognition.
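The combination of sources in steps S131–S132 can be sketched as follows. The patent names the techniques (OCR, ASR, trained networks) but no concrete engine, so the recognizers are injected here as callables; every name in this sketch is an assumption:

```python
def recognize_content_type(initial_text, frame, audio, ocr, asr, is_onomatopoeia):
    """Run onomatopoeic-content recognition over the subtitle's initial text,
    the video frame (via OCR) and the audio (via ASR); the subtitle is of the
    onomatopoeia type if any source yields onomatopoeic content."""
    candidates = [initial_text]
    if frame is not None:
        candidates.append(ocr(frame))   # text rendered in the picture
    if audio is not None:
        candidates.append(asr(audio))   # transcript of the sound
    found = [c for c in candidates if c and is_onomatopoeia(c)]
    return ("onomatopoeia", found) if found else ("non_onomatopoeia", [])

# Toy stand-ins for the real recognition engines:
ctype, hits = recognize_content_type(
    "gurgle", None, b"...",
    ocr=lambda f: "",
    asr=lambda a: "gurgle",
    is_onomatopoeia=lambda t: t in {"gurgle", "creak", "vroom"})
print(ctype)  # onomatopoeia
```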
Fig. 4 is a flowchart of a caption presentation method according to an exemplary embodiment. In one possible implementation, as shown in Fig. 4, step S13 may further include:
In step S133, when the recognition result does not include onomatopoeic content, determining that the content type of the subtitle is the non-onomatopoeia type.
For example, as noted above, the content types can also include a non-onomatopoeia type. Among the multiple recognition results obtained by onomatopoeic-content recognition, some may not include onomatopoeic content; for example, a certain recognized content may be a line of dialogue spoken by a character (for example, "I like you"). Since that recognition result includes no onomatopoeic content, the content type of that subtitle ("I like you") can be determined as the non-onomatopoeia type.
In this way, the content type of the subtitle of a video picture can be determined quickly and accurately.
Fig. 5 is a flowchart of a caption presentation method according to an exemplary embodiment. In one possible implementation, as shown in Fig. 5, the method further includes:
In step S14, determining the display content of the subtitle of the video picture according to the onomatopoeic content in the recognition result.
For example, the recognition result obtained by onomatopoeic-content recognition may include onomatopoeic content, and the display content of the subtitle of the video picture can be determined according to that content. For example, onomatopoeic-content recognition is performed on the audio content of the target video: the target video has a clip of a car starting, the audio content includes the sound of the car starting, and the obtained recognition result includes the onomatopoeic content "vroom". If the initial content of the subtitle of the target video does not include this onomatopoeic content, the display content of the subtitle of the video picture can be determined from it — for example, the onomatopoeic content "vroom" is added to the subtitle of the video picture as display content of the subtitle.
In this way, the display content of the subtitle of a video picture can be enriched. The present disclosure does not restrict the manner of determining the display content of the subtitle from the onomatopoeic content in the recognition result.
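Step S14 amounts to merging recognized onomatopoeic content into the picture's display contents when the original subtitle lacks it. A minimal sketch, with illustrative names throughout:

```python
def merge_onomatopoeia(original_contents, recognized):
    """Add each recognized onomatopoeic word (e.g. a car's 'vroom' found by
    ASR) to the picture's display contents if it is not already present."""
    merged = list(original_contents)
    for onomatopoeia in recognized:
        if onomatopoeia not in merged:
            merged.append(onomatopoeia)
    return merged

print(merge_onomatopoeia(["I like you"], ["vroom"]))
# ['I like you', 'vroom']
```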
As shown in Figure 1, in step s 11, according to the content type of the subtitle of video pictures in target video, determine described in
The display content of the subtitle of video pictures and display area, the content type include onomatopoeia type, wherein belong to onomatopoeia type
Subtitle in have onomatopoeia content.
For example, in target video the part subtitle of video pictures content be non-onomatopoeia type, other subtitles it is interior
Holding is onomatopoeia type.The display content that can determine the subtitle of non-onomatopoeia type and display area are (for example, display area is view
The lower section of frequency picture).The content type that can determine subtitle in video pictures respectively is in the display of each subtitle of onomatopoeia type
Appearance and corresponding display area.
For example, a video picture may include two subtitles whose content type is the onomatopoeia type. One is a subtitle among the initial subtitles of the target video (for example, the car engine sound "vroom vroom", where the display content of the subtitle is "vroom vroom"). The other is a subtitle whose type is determined to be the onomatopoeia type according to the onomatopoeia content in a recognition result obtained by performing onomatopoeia content recognition processing on the audio content of the target video (for example, the door-opening sound "creak", so that the display content of the subtitle is determined to be "creak"). The display area corresponding to the display content of each subtitle can be determined separately. For example, the display area corresponding to the subtitle whose display content is "vroom vroom" is determined to be on the engine hood of the car, and the display area corresponding to the subtitle whose display content is "creak" is determined to be on the door frame.
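The type-dependent choice of display area described above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the `Subtitle` model, the region encoding, and the `locate_object` helper are all hypothetical, since the disclosure does not prescribe any particular data model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical region encoding for illustration: x, y, width, height.
Region = Tuple[int, int, int, int]

@dataclass
class Subtitle:
    text: str                            # display content
    is_onomatopoeia: bool                # content type
    source_object: Optional[str] = None  # sounding object, e.g. "car hood"

def assign_display_area(sub: Subtitle, frame_w: int, frame_h: int,
                        locate_object=None) -> Region:
    """Pick a display area according to the subtitle's content type."""
    if sub.is_onomatopoeia and locate_object is not None:
        region = locate_object(sub.source_object)  # region of the sounding object
        if region is not None:
            return region
    # Non-onomatopoeia subtitles (or unlocated objects) go to the
    # conventional strip at the bottom of the picture.
    return (0, int(frame_h * 0.9), frame_w, int(frame_h * 0.1))

subs = [Subtitle("Vroom vroom", True, "car hood"),
        Subtitle("I hope I'm not late!", False)]
areas = [assign_display_area(s, 1280, 720,
                             locate_object=lambda name:
                                 (400, 300, 200, 80) if name else None)
         for s in subs]
```

Here the onomatopoeia subtitle lands on the (stubbed) object region, while the spoken line falls back to the bottom strip.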
Fig. 6 is a flowchart of a subtitle display method according to an exemplary embodiment. In one possible implementation, as shown in Fig. 6, step S11 may include:
In step S111, when the content type is the onomatopoeia type, determining a target object in the video picture corresponding to the onomatopoeia content of the subtitle;
In step S112, determining the display area according to the region of the target object in the video picture.
Here, the target object may be an object in the video picture corresponding to the onomatopoeia content of the subtitle, for example, the object producing the onomatopoeic sound. The region of the target object in the video picture may be the area occupied by the target object itself, or the area surrounding the target object; the present disclosure places no restriction on this.
For example, when the content type is the onomatopoeia type, the target object in the video picture corresponding to the onomatopoeia content of the subtitle can be determined. If a certain subtitle is "rustle" (for example, the sound of leaves stirred by a breeze at the seashore), the content type of this subtitle is the onomatopoeia type, and the target object in the video picture corresponding to its onomatopoeia content can be determined to be the leaves at the seashore. The display area of the subtitle can then be determined according to the region of the target object in the video picture, for example, on or beside the leaves at the seashore.
In some alternative embodiments, a certain subtitle is "gurgle", the sound of a wave. The target object in the video picture corresponding to the onomatopoeia content of this subtitle can be determined to be the wave, and the display area can be determined according to the region of the wave, for example, the area around the wave. The spray thrown up by the breaking wave may also be determined as the display area of the subtitle, following the drifting motion of the seawater.
In this way, subtitles of the onomatopoeia type can be blended into the video scene, enriching the manner of subtitle display. The present disclosure places no restriction on the manner of determining the target object in the video picture corresponding to the onomatopoeia content of the subtitle, or of determining the display area according to the region of the target object in the video picture.
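One way to realize step S112 — turning the region of the target object into a display area on or beside the object — is to expand the object's bounding box by a margin and clamp it to the picture. This is an assumed approach for illustration only; the disclosure leaves the derivation of the display area unrestricted.

```python
def surrounding_region(obj, frame_w, frame_h, margin=0.25):
    """Expand an object's bounding box (x, y, w, h) so an onomatopoeia
    subtitle can sit on or beside the object, clamped to picture bounds."""
    x, y, w, h = obj
    dx, dy = int(w * margin), int(h * margin)
    nx, ny = max(0, x - dx), max(0, y - dy)
    nw = min(frame_w, x + w + dx) - nx
    nh = min(frame_h, y + h + dy) - ny
    return (nx, ny, nw, nh)

# Leaves detected near the top-right of a 1280x720 picture:
area = surrounding_region((1000, 100, 120, 80), 1280, 720)
```

A renderer could then center the caption text inside `area`, which covers both the object and a strip beside it.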
As shown in Figure 1, in step S12, the terminal is controlled to show the display content of the subtitle in the display area of the video picture during playback of the target video.
For example, while the user plays the target video through the terminal, as noted above, a certain subtitle is the sound of a wave; its determined display content is "gurgle" and its display area is on the spray thrown up by the wave. During playback of the target video, "gurgle" can be shown on the spray following the drifting motion of the seawater (for example, following the wave and shown from far to near).
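The "subtitle follows the wave" behavior amounts to re-anchoring the caption to the target object's region in each frame. A sketch, assuming per-frame object regions are already available from some tracker (the tracker itself is outside the disclosure's scope):

```python
def subtitle_track(positions, text):
    """Given per-frame regions of the sounding object, produce per-frame
    subtitle placements so the caption follows the object as it moves."""
    return [{"frame": t, "text": text, "region": r}
            for t, r in enumerate(positions) if r is not None]

# A wave drifting from far (small region) to near (large region):
wave = [(600, 300, 40, 20), (560, 340, 80, 40), (500, 400, 160, 80)]
placed = subtitle_track(wave, "Gurgle")
```

Frames in which the object is not detected (`None`) simply get no onomatopoeia overlay, which matches the idea that the caption is tied to the visible object rather than to a fixed screen position.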
Application example
An application example according to an embodiment of the present disclosure is given below, taking the scenario "a user plays a video" as an example, to facilitate understanding of the flow of the subtitle display method. Those skilled in the art will understand that the following application example is only for the purpose of easing understanding of the embodiments of the present disclosure and should not be construed as limiting them.
Fig. 7 is a schematic diagram of an application scenario of a subtitle display method according to an exemplary embodiment. In this application example, a user wishes to play a video and initiates a playing request for the video through a mobile phone. Upon receiving the playing request, the server determines the video the user's phone wishes to play. The server can return the video file of the video and, for example, control the user's phone to play the video.
In this application example, the server can determine, in advance or in real time, the display content and display area of the subtitles of video pictures according to the content type of the subtitles in the video. Taking a certain video picture of the video as an example, the picture shows the heroine starting her car, preparing to drive to work, and saying to herself "I hope I'm not late!", while a strong wind blows the leaves by the roadside.
In this application example, the server can determine the content type of the subtitles of the video picture in advance or in real time. For example, the initial content of the subtitles of the video picture is "I hope I'm not late!" and "rustle". The server can also perform onomatopoeia content recognition processing on the audio content corresponding to the video picture to obtain a recognition result. For example, the recognition result includes "vroom vroom" (for example, the sound made when starting the car). The server can determine "vroom vroom" in the recognition result as a subtitle of the video picture.
In this application example, the server can determine that the content type of the subtitle "I hope I'm not late!" is the non-onomatopoeia type, and that the content type of the subtitles "vroom vroom" and "rustle" is the onomatopoeia type. The server can determine the display content and display area of the subtitles of the video picture. For example, the subtitle "I hope I'm not late!" of the non-onomatopoeia type is shown below the video picture; the subtitle "vroom vroom" of the onomatopoeia type corresponds to the car in the video picture, so it is determined, according to the region of the car in the video picture, that the subtitle "vroom vroom" is shown on the car; and the subtitle "rustle" of the onomatopoeia type corresponds to the leaves in the video picture, so it is determined, according to the region of the leaves in the video picture, that the subtitle "rustle" is shown on the leaves.
In this application example, the server can control the user's phone so that, when the video picture of the video is played, the display content of each subtitle is shown in its display area of the video picture. For example, as shown in Fig. 7, the subtitle "I hope I'm not late!" is shown below the video picture, the subtitle "vroom vroom" is shown on the car, and the subtitle "rustle" is shown on the leaves.
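The server-side flow of this application example — merging the initial subtitles with onomatopoeia recognized from the audio, then attaching a display area to each subtitle — could be sketched as below. The `recognize_onomatopoeia` and `locate` callables are placeholders for the recognition and object-location steps, which the disclosure deliberately does not specify.

```python
def prepare_subtitles(initial_subs, recognize_onomatopoeia, locate):
    """Server-side sketch: merge initial (text, is_onomatopoeia) subtitles
    with onomatopoeia recognized from the audio, then attach a display
    area to each according to its content type."""
    subs = list(initial_subs) + [(t, True) for t in recognize_onomatopoeia()]
    placed = []
    for text, is_ono in subs:
        # Onomatopoeia subtitles are anchored to their sounding object;
        # all others use the conventional bottom strip.
        area = locate(text) if is_ono else "bottom"
        placed.append({"text": text,
                       "type": "onomatopoeia" if is_ono else "plain",
                       "area": area})
    return placed

result = prepare_subtitles(
    [("I hope I'm not late!", False), ("Rustle", True)],
    recognize_onomatopoeia=lambda: ["Vroom"],  # e.g. engine sound from audio
    locate={"Rustle": "leaves", "Vroom": "car hood"}.get)
```

With these stubs, the spoken line goes to the bottom strip while "Rustle" and "Vroom" are placed on the leaves and the car hood, mirroring Fig. 7.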
Fig. 8 is a block diagram of a subtitle display device according to an exemplary embodiment. As shown in Fig. 8, the device includes:
a determining module 21, configured to determine the display content and display area of the subtitle of a video picture in a target video according to the content type of the subtitle, the content types including an onomatopoeia type, where a subtitle belonging to the onomatopoeia type contains onomatopoeia content;
a control module 22, configured to control a terminal to show the display content of the subtitle in the display area of the video picture during playback of the target video.
Fig. 9 is a block diagram of a subtitle display device according to an exemplary embodiment. As shown in Fig. 9, in one possible implementation, the device further includes:
a content type determining module 23, configured to determine the content type of the subtitle of a video picture in the target video according to at least one of the initial content of the subtitles, the video frames, and the audio content of the target video.
As shown in Fig. 9, in one possible implementation, the determining module 21 includes:
a first determining submodule 211, configured to determine, when the content type is the onomatopoeia type, a target object in the video picture corresponding to the onomatopoeia content of the subtitle;
a second determining submodule 212, configured to determine the display area according to the region of the target object in the video picture.
As shown in Fig. 9, in one possible implementation, the content type determining module 23 includes:
a result obtaining submodule 231, configured to perform onomatopoeia content recognition processing on at least one of the initial content of the subtitles, the video frames, and the audio content, to obtain a recognition result;
a third determining submodule 232, configured to determine that the content type of the subtitle is the onomatopoeia type when the recognition result includes onomatopoeia content.
As shown in Fig. 9, in one possible implementation, the device further includes:
a display content determining module 24, configured to determine the display content of the subtitle of the video picture according to the onomatopoeia content in the recognition result.
As shown in Fig. 9, in one possible implementation, the content types further include a non-onomatopoeia type, and the content type determining module 23 further includes:
a fourth determining submodule 233, configured to determine that the content type of the subtitle is the non-onomatopoeia type when the recognition result does not include onomatopoeia content.
Figure 10 is a block diagram of a subtitle display device according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
Referring to Fig. 10, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, pictures, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 806 provides power to the various components of the device 800. It may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which are executable by the processor 820 of the device 800 to perform the above methods.
Figure 11 is a block diagram of a subtitle display device according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 11, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. An application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above methods.
The device 1900 may also include a power supply component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or similar.
In exemplary embodiments, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which are executable by the processing component 1922 of the device 1900 to perform the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to carry out aspects of the present disclosure.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer-readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to carry out aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagram. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagram.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowchart and/or block diagram.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (14)
1. A subtitle display method, characterized in that the method comprises:
determining the display content and the display area of the subtitle of a video picture in a target video according to the content type of the subtitle, the content types comprising an onomatopoeia type, wherein a subtitle belonging to the onomatopoeia type contains onomatopoeia content;
controlling a terminal to show the display content of the subtitle in the display area of the video picture during playback of the target video.
2. The method according to claim 1, characterized in that the method further comprises:
determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitles, the video frames, and the audio content of the target video.
3. The method according to claim 1, characterized in that determining the display content and the display area of the subtitle of the video picture according to the content type of the subtitle of the video picture in the target video comprises:
when the content type is the onomatopoeia type, determining a target object in the video picture corresponding to the onomatopoeia content of the subtitle;
determining the display area according to the region of the target object in the video picture.
4. The method according to claim 2, characterized in that determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitles, the video frames, and the audio content of the target video comprises:
performing onomatopoeia content recognition processing on at least one of the initial content of the subtitles, the video frames, and the audio content, to obtain a recognition result;
when the recognition result includes onomatopoeia content, determining that the content type of the subtitle is the onomatopoeia type.
5. The method according to claim 4, characterized in that the method further comprises:
determining the display content of the subtitle of the video picture according to the onomatopoeia content in the recognition result.
6. The method according to claim 4, characterized in that the content types further comprise a non-onomatopoeia type, and determining the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitles, the video frames, and the audio content of the target video further comprises:
when the recognition result does not include onomatopoeia content, determining that the content type of the subtitle is the non-onomatopoeia type.
7. A subtitle display device, characterized in that the device comprises:
a determining module, configured to determine the display content and the display area of the subtitle of a video picture in a target video according to the content type of the subtitle, the content types comprising an onomatopoeia type, wherein a subtitle belonging to the onomatopoeia type contains onomatopoeia content;
a control module, configured to control a terminal to show the display content of the subtitle in the display area of the video picture during playback of the target video.
8. The device according to claim 7, characterized in that the device further comprises:
a content type determining module, configured to determine the content type of the subtitle of the video picture in the target video according to at least one of the initial content of the subtitles, the video frames, and the audio content of the target video.
9. The device according to claim 7, characterized in that the determining module comprises:
a first determining submodule, configured to determine, when the content type is the onomatopoeia type, a target object in the video picture corresponding to the onomatopoeia content of the subtitle;
a second determining submodule, configured to determine the display area according to the region of the target object in the video picture.
10. The device according to claim 8, characterized in that the content type determining module comprises:
a result obtaining submodule, configured to perform onomatopoeia content recognition processing on at least one of the initial content of the subtitles, the video frames, and the audio content, to obtain a recognition result;
a third determining submodule, configured to determine that the content type of the subtitle is the onomatopoeia type when the recognition result includes onomatopoeia content.
11. The device according to claim 10, characterized in that the device further comprises:
a display content determining module, configured to determine the display content of the subtitle of the video picture according to the onomatopoeia content in the recognition result.
12. The device according to claim 10, characterized in that the content types further comprise a non-onomatopoeia type, and the content type determining module further comprises:
a fourth determining submodule, configured to determine that the content type of the subtitle is the non-onomatopoeia type when the recognition result does not include onomatopoeia content.
13. A subtitle display device, characterized in that it comprises:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 6.
14. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810700364.9A CN108833992A (en) | 2018-06-29 | 2018-06-29 | Caption presentation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108833992A true CN108833992A (en) | 2018-11-16 |
Family
ID=64134937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810700364.9A Pending CN108833992A (en) | 2018-06-29 | 2018-06-29 | Caption presentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108833992A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111491212A (en) * | 2020-04-17 | 2020-08-04 | Vivo Mobile Communication Co., Ltd. | Video processing method and electronic device
WO2023071349A1 (en) * | 2021-10-27 | 2023-05-04 | Hisense Visual Technology Co., Ltd. | Display device
CN118785590A (en) * | 2024-09-02 | 2024-10-15 | Shenzhen Zhiyan Technology Co., Ltd. | Lighting effect display method and apparatus, device and medium
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100182501A1 (en) * | 2009-01-20 | 2010-07-22 | Koji Sato | Information processing apparatus, information processing method, and program |
CN103139375A (en) * | 2011-12-02 | 2013-06-05 | LG Electronics Inc. | Mobile terminal and control method thereof
CN103680492A (en) * | 2012-09-24 | 2014-03-26 | LG Electronics Inc. | Mobile terminal and controlling method thereof
CN103959802A (en) * | 2012-08-10 | 2014-07-30 | Panasonic Corporation | Video provision method, transmission device, and reception device
CN105247879A (en) * | 2013-05-30 | 2016-01-13 | Sony Corporation | Client device, control method, system and program
CN106170986A (en) * | 2014-09-26 | 2016-11-30 | Astem Co., Ltd. | Program output device, program server, auxiliary information management server, program and auxiliary information output method, and storage medium
CN108055592A (en) * | 2017-11-21 | 2018-05-18 | Guangzhou Shiyuan Electronic Technology Co., Ltd. | Subtitle display method and device, mobile terminal and storage medium
2018-06-29: CN patent application CN201810700364.9A filed; status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109618184A (en) | Video processing method and device, electronic equipment and storage medium | |
CN109089170A (en) | Barrage display method and device | |
CN110210310B (en) | Video processing method and device for video processing | |
CN108540845A (en) | Barrage information display method and device | |
CN108363706A (en) | Human-computer dialogue interaction method and apparatus, and device for human-computer dialogue interaction | |
US11848029B2 (en) | Method and device for detecting audio signal, and storage medium | |
CN108985176A (en) | Image generation method and device | |
CN109257645A (en) | Video cover generation method and device | |
CN109151356A (en) | Video recording method and device | |
CN109257659A (en) | Subtitle adding method and device, electronic equipment and computer-readable storage medium | |
CN108762494A (en) | Method, apparatus and storage medium for displaying information | |
CN108924644A (en) | Video clip extraction method and device | |
CN109729435A (en) | Video clip extraction method and device | |
CN110121083A (en) | Barrage generation method and device | |
CN108260020A (en) | Method and apparatus for displaying interactive information in panoramic video | |
CN108596093A (en) | Facial feature point localization method and device | |
CN110148406B (en) | Data processing method and device for data processing | |
CN108650543A (en) | Video caption editing method and device | |
CN108833992A (en) | Caption presentation method and device | |
CN110209877A (en) | Video analysis method and device | |
CN110519655A (en) | Video clipping method and device | |
CN113113040A (en) | Audio processing method and device, terminal and storage medium | |
CN110990534A (en) | Data processing method and device and data processing device | |
CN109407944A (en) | Multimedia resource playback adjustment method and device | |
CN110121106A (en) | Video playback method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2020-04-22
Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052
Applicant after: Alibaba (China) Co., Ltd.
Address before: Room 26, Building 9, Wangjing East Garden Four, Chaoyang District, Beijing, 100000
Applicant before: BEIJING YOUKU TECHNOLOGY Co., Ltd.
|
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-11-16