
CN107180058B - Method and device for querying based on subtitle information


Info

Publication number
CN107180058B
Authority
CN
China
Prior art keywords: information, query, candidate query, search, subtitle
Prior art date
Legal status
Active
Application number
CN201610140826.7A
Other languages
Chinese (zh)
Other versions
CN107180058A (en)
Inventor
刘俊启
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201610140826.7A priority Critical patent/CN107180058B/en
Publication of CN107180058A publication Critical patent/CN107180058A/en
Application granted granted Critical
Publication of CN107180058B publication Critical patent/CN107180058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention aims to provide a method and a device for querying based on subtitle information. The method according to the invention comprises the following steps: providing one or more items of candidate query information corresponding to the current subtitle information while presenting the subtitle information corresponding to the played audio/video; and, when the user selects one of the provided items of candidate query information on the current playing interface, performing a search operation based on the selected candidate query information. The invention has the advantage that, by providing candidate query information corresponding to the current subtitle information, the user can directly select an item of interest from the displayed subtitles and search for it while watching the video. This offers a brand-new way to initiate a search quickly, reduces user operations, and improves efficiency.

Description

Method and device for querying based on subtitle information
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for querying based on subtitle information.
Background
In the prior art, a user generally cannot interact with the subtitles while watching audio or video. If the user notices interesting information in the subtitles presented with the audio or video and wants to search based on it, the user has to switch to a search interface and type the information in, so it may take a long time before the corresponding search results appear. Under prior art schemes, users therefore cannot quickly initiate a search based on audio or video subtitles.
Disclosure of Invention
The invention aims to provide a method and a device for querying based on subtitle information.
According to an aspect of the present invention, there is provided a method for querying based on subtitle information, wherein the method includes the steps of:
-providing one or more candidate query information items corresponding to the current subtitle information while presenting the subtitle information corresponding to the played audio/video;
-when the user selects one of the provided items of candidate query information on the current playing interface, performing a search operation based on the selected candidate query information.
According to an aspect of the present invention, there is also provided a query apparatus for performing a query based on subtitle information, wherein the query apparatus includes:
Means for providing one or more candidate query information items corresponding to current subtitle information while presenting subtitle information corresponding to the played audio/video;
means for performing a search operation based on the selected candidate query information when the user selects one of the provided items of candidate query information on the current playing interface.
Compared with the prior art, the invention has the following advantages: by providing candidate query information corresponding to the current subtitle information, the user can directly select an item of interest from the displayed subtitles and search for it while watching the video, which offers a brand-new way to initiate a search quickly, reduces user operations, and improves efficiency.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
fig. 1 illustrates a flow chart of a method for querying based on caption information in accordance with the present invention;
Fig. 2 is a schematic diagram illustrating a structure of a query apparatus for performing a query based on subtitle information according to the present invention.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 illustrates a flow chart of a method for querying based on caption information according to the present invention. The method according to the invention comprises a step S1 and a step S2.
The method according to the invention is implemented by a query device included in a computer device. The computer device is an electronic device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The computer device comprises a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual supercomputer composed of a set of loosely coupled computers. The user equipment includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example a personal computer, a tablet computer, a smart phone, a PDA, a game console, an IPTV device, or the like. The network connecting the user equipment and the network equipment includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, and the like.
It should be noted that the user equipment, network equipment, and networks above are merely examples; other user equipment, network equipment, and networks, whether existing now or appearing in the future, are equally applicable to the present invention, fall within its scope, and are incorporated herein by reference.
Prior to the steps of fig. 1, the query device determines one or more items of candidate query information by performing step S3 (not shown) and step S4 (not shown).
In step S3, the query device acquires subtitle information for audio/video.
The subtitle information includes a subtitle file corresponding to the audio/video.
In step S4, the query device determines, based on the subtitle information, one or more items of candidate query information available for querying.
Specifically, the query device performs semantic analysis on the content information in the subtitle information to determine one or more items of candidate query information available for querying.
For example, the query device performs semantic analysis on the content information in the subtitle information and takes one or more keywords in the content information as candidate query information.
Preferably, the query device performs semantic analysis on the content information in the subtitle information and screens out, from the subtitle information, one or more items of candidate query information available for querying based on a predetermined screening rule.
For example, uncommon words contained in the subtitle information are used as candidate query information.
As another example, popular query terms contained in the subtitle information, or query terms the user has previously searched for, are used as candidate query information.
Preferably, when the query means is included in the network device, the query means transmits the determined one or more candidate query information items corresponding to the subtitle information to the corresponding user device after performing the operations of step S3 and step S4.
According to a first example of the present invention, the query device is included in a player application of a mobile device. The query device obtains the subtitle file of a video video_1 to be played in the player application and performs semantic analysis on the content information in the subtitle file of video_1, obtaining three items of candidate query information, query_1, query_2 and query_3, that can be used for a query.
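The patent does not fix a subtitle format or a concrete extraction algorithm for steps S3 and S4. Purely as an illustration, the Python sketch below (all names, including parse_srt, Cue and candidate_queries, are hypothetical) reads an SRT-style subtitle file into timed cues and picks candidate query terms with a simple keyword filter standing in for the semantic analysis described above.

import re
from dataclasses import dataclass
from typing import List

@dataclass
class Cue:
    start: float   # seconds from the start of playback
    end: float
    text: str

def parse_srt(path: str) -> List[Cue]:
    """Parse a minimal SRT-style subtitle file into timed cues (step S3)."""
    ts = re.compile(r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})\s*-->\s*"
                    r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})")
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    cues, i = [], 0
    while i < len(lines):
        m = ts.search(lines[i])
        if m:
            h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
            start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000.0
            end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000.0
            i += 1
            text = []
            while i < len(lines) and lines[i].strip():
                text.append(lines[i].strip())
                i += 1
            cues.append(Cue(start, end, " ".join(text)))
        i += 1
    return cues

def candidate_queries(cue: Cue, stopwords: set) -> List[str]:
    """Crude stand-in for the 'semantic analysis' of step S4: keep tokens that
    are not stopwords and look informative (capitalized or fairly long)."""
    tokens = re.findall(r"[\w']+", cue.text)
    return [t for t in tokens
            if t.lower() not in stopwords and (t[0].isupper() or len(t) > 6)]

For real subtitles the keyword filter would be replaced by proper semantic analysis and the screening rules of step S4; the shape of the data flow stays the same.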
Preferably, before the step of fig. 1, the method further comprises a step S5 (not shown).
In step S5, the query means generates search link information corresponding to the one or more candidate query information, respectively.
The search link information includes the link information used to obtain the search results corresponding to a given item of candidate query information.
Specifically, the query device performs a search with each of the one or more items of candidate query information and, based on the search results obtained for each item, generates the corresponding search link information.
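A minimal sketch of step S5, assuming a hypothetical search endpoint and parameter name (the patent does not name a search engine or URL scheme):

from urllib.parse import urlencode

# Hypothetical search endpoint; any real deployment would substitute its own.
SEARCH_ENDPOINT = "https://search.example.com/s"

def build_search_links(candidates):
    """Map each item of candidate query information to a search link (step S5)."""
    return {q: SEARCH_ENDPOINT + "?" + urlencode({"q": q}) for q in candidates}

# Example:
# build_search_links(["query_1", "query_2", "query_3"])
# -> {"query_1": "https://search.example.com/s?q=query_1", ...}

In a fuller implementation the link could instead point to a cached result page produced by the search described in the preceding paragraph.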
Next, referring to fig. 1, in step S1, upon presenting subtitle information corresponding to a played audio/video, a query device provides one or more candidate query information items corresponding to current subtitle information.
Preferably, the step S1 further includes a step S101 (not shown) and a step S102 (not shown).
In step S101, the query device generates layer information for the candidate query information corresponding to the subtitle information.
Preferably, the background of the layer is transparent.
Specifically, the query device generates a layer of the same size as the current playing interface and adds to it a search trigger element used to trigger a search operation based on candidate query information, for example a tag or a link corresponding to each item of candidate query information.
Preferably, the query device adds to the layer the search link information corresponding to each item of candidate query information determined in step S4.
Preferably, the query device determines the display position of each of the one or more items of candidate query information on the current playing interface and adds the search trigger element or search link information for each item in the region of the layer that corresponds to its display position.
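The patent does not prescribe a data format for the layer. The sketch below uses hypothetical Layer and TriggerElement structures, reuses the Cue objects from the earlier sketch, and assumes the display positions are supplied by whatever component renders the subtitle text.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TriggerElement:
    query: str            # candidate query text
    link: Optional[str]   # search link from step S5, if already generated
    x: int                # region of the layer matching the term's display position
    y: int
    width: int
    height: int

@dataclass
class Layer:
    start: float          # show while the matching subtitle cue is on screen
    end: float
    width: int            # same size as the current playing interface
    height: int
    transparent_background: bool = True
    elements: List[TriggerElement] = field(default_factory=list)

def build_layer(cue, interface_size, positions, links):
    """Build one overlay layer per subtitle cue (step S101).

    positions maps each candidate term to the (x, y, width, height) rectangle
    where the subtitle renderer draws it; links comes from step S5."""
    w, h = interface_size
    layer = Layer(start=cue.start, end=cue.end, width=w, height=h)
    for query, rect in positions.items():
        layer.elements.append(TriggerElement(query, links.get(query), *rect))
    return layer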
Next, in step S102, when presenting subtitle information corresponding to the played audio/video, the query device outputs the layer information accordingly to provide the user with one or more candidate query information corresponding to the current subtitle information.
Preferably, the query means provides one or more candidate query information corresponding to the current subtitle information in a predetermined display style when the subtitle information corresponding to the played audio/video is presented.
For example, the items of candidate query information in the subtitle are shown in bold or in a different font color.
Continuing with the foregoing first example, query_1 is included in subtitle information sub_t1 corresponding to time information time_1, and query_2 and query_3 are included in subtitle information sub_t2 corresponding to time information time_2.
The query device generates, in step S101, a layer layer_1 for the candidate query information query_1 corresponding to the subtitle information sub_t1, and adds a tag label_1, used to trigger a search operation based on query_1, in the area of layer_1 corresponding to the display position of query_1. Similarly, the query device generates a layer layer_2 corresponding to query_2 and query_3 and adds tags label_2 and label_3 to layer_2 for triggering search operations based on query_2 and query_3.
When presenting the subtitle information sub_t1, the query device outputs the layer layer_1 so that it covers the current playing interface, thereby providing the user with the candidate query information query_1 corresponding to the current subtitle information sub_t1. Similarly, when presenting the subtitle information sub_t2, the query device outputs the corresponding layer layer_2 to provide the user with the candidate query information query_2 and query_3 corresponding to the current subtitle information sub_t2.
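Step S102 only requires that each layer be output while its subtitle is on screen. A minimal sketch of that synchronization, reusing the Layer objects from the previous sketch; player.position_seconds() and player.draw_overlay() are placeholders for whatever API the host player actually exposes.

def active_layer(layers, playback_time):
    """Return the layer whose subtitle cue covers the current playback time, or None."""
    for layer in layers:
        if layer.start <= playback_time < layer.end:
            return layer
    return None

def on_frame(player, layers):
    """Called on every rendered frame: overlay the layer that matches the subtitle
    currently shown, e.g. layer_1 while sub_t1 is on screen and layer_2 for sub_t2."""
    layer = active_layer(layers, player.position_seconds())  # hypothetical player hook
    if layer is not None:
        player.draw_overlay(layer)                           # hypothetical renderer hook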
With continued reference to fig. 1, in step S2, when the user selects one of the provided items of candidate query information on the current playing interface, the query device performs a search operation based on the selected candidate query information.
Specifically, the query means determines candidate query information selected by the user based on the selection operation of the user, and performs a search operation based on the selected candidate query information.
The selection operation includes any operation that can be used to select candidate query information, such as a click, or a long press on a touch-screen device.
Preferably, the method further comprises step S6 (not shown).
In step S6, the query device triggers a predetermined search engine to search based on the selected candidate query information, so as to obtain a corresponding search result.
Continuing the first example, if the user clicks the tag label_1 corresponding to query_1 on the playing interface, the query device triggers the search engine to search with the selected candidate query information query_1 and obtains the search results corresponding to query_1.
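As a sketch of how a selection could be dispatched (steps S2 and S6), assuming the Layer and TriggerElement structures above and two hypothetical callbacks supplied by the host player:

def on_click(layer, x, y, open_url, search):
    """Handle a selection on the playing interface (steps S2 and S6).

    open_url and search are hypothetical callbacks: open_url(link) loads a
    pre-generated search link from step S5, while search(query) asks a
    predetermined search engine for results."""
    for element in layer.elements:
        hit = (element.x <= x < element.x + element.width and
               element.y <= y < element.y + element.height)
        if hit:
            if element.link:              # search link already available
                return open_url(element.link)
            return search(element.query)  # otherwise trigger the search engine
    return None                           # the click was not on a candidate term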
Preferably, the method further comprises step S7 (not shown).
In step S7, the query means presents search results obtained after performing a search operation based on the selected candidate query information to the user.
In particular, the querying device may present the obtained search results to the user in a new page.
Alternatively, the query device generates a window in the current playing interface to present the obtained search results to the user.
Preferably, the step S7 further includes a step S701 (not shown) and a step S702 (not shown).
In step S701, the query device prompts the user to confirm whether the search results should be viewed.
In step S702, when the user confirms that viewing is required, the query device presents the search results to the user.
Continuing the first example, the query device displays a pop-up window asking the user whether to view the search results; if the user chooses to view them, the query device presents the search results corresponding to the candidate query information query_1 in a new page.
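A compact sketch of the result-presentation flow of steps S7, S701 and S702; prompt, open_new_page and open_inline_window are hypothetical UI hooks, not part of the patent.

def present_results(results, prompt, open_new_page, open_inline_window,
                    prefer_new_page=True):
    """Present search results after the search completes (steps S7, S701, S702).

    prompt(text) -> bool asks the user via a pop-up window; the other two hooks
    show the results in a new page or in a window over the playing interface."""
    if not prompt("View the search results now?"):  # step S701
        return                                      # user declined; playback continues
    if prefer_new_page:                             # step S702, new-page variant
        open_new_page(results)
    else:                                           # or a window over the player
        open_inline_window(results)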
According to the method of the invention, by providing candidate query information corresponding to the current subtitle information, the user can directly select an item of interest from the displayed subtitles and search for it while watching a video, which offers a brand-new way to initiate a search quickly, reduces user operations, and improves efficiency.
Fig. 2 is a schematic diagram illustrating a structure of a query apparatus for performing a query based on subtitle information according to the present invention.
The query apparatus according to the present invention includes: means for providing one or more items of candidate query information corresponding to the current subtitle information when subtitle information corresponding to the played audio/video is presented (hereinafter referred to as the "providing means 1"), and means for performing a search operation based on the selected candidate query information when the user selects one of the provided items of candidate query information on the current playing interface (hereinafter referred to as the "searching means 2").
The query apparatus first determines one or more items of candidate query information. To this end, it includes means for acquiring subtitle information for the audio/video (not shown in the drawings, hereinafter referred to as the "subtitle acquisition means") and means for determining, based on the subtitle information, one or more items of candidate query information available for querying (not shown in the drawings, hereinafter referred to as the "candidate determining means").
The subtitle acquisition means acquires the subtitle information of the audio/video.
The subtitle information includes a subtitle file corresponding to the audio/video.
The candidate determining means determines, based on the subtitle information, one or more items of candidate query information available for querying.
Specifically, the candidate determining means performs semantic analysis on the content information in the subtitle information to determine one or more items of candidate query information available for querying.
For example, the candidate determining means performs semantic analysis on the content information in the subtitle information and takes one or more keywords in the content information as candidate query information.
Preferably, the candidate determining means performs semantic analysis on the content information in the subtitle information and screens out, from the subtitle information, one or more items of candidate query information available for querying based on a predetermined screening rule.
For example, uncommon words contained in the subtitle information are used as candidate query information.
As another example, popular query terms contained in the subtitle information, or query terms the user has previously searched for, are used as candidate query information.
Preferably, when the query apparatus is included in the network device, the candidate determining means transmits the determined one or more items of candidate query information corresponding to the subtitle information to the corresponding user device after the candidate query information has been determined.
According to a first example of the present invention, the query apparatus is included in a player application of a mobile device. The subtitle acquisition means obtains the subtitle file of a video video_1 to be played in the player application, and semantic analysis is performed on the content information in the subtitle file of video_1 to obtain three items of candidate query information, query_1, query_2 and query_3, that can be used for a query.
Preferably, the query means further includes means for generating search link information (not shown, hereinafter referred to as "link generating means") corresponding to the one or more candidate query information, respectively.
The link generating means generates search link information corresponding to each of the one or more items of candidate query information.
The search link information includes the link information used to obtain the search results corresponding to a given item of candidate query information.
Specifically, the link generating means performs a search with each of the one or more items of candidate query information and, based on the search results obtained for each item, generates the corresponding search link information.
Next, referring to fig. 2, when subtitle information corresponding to a played audio/video is presented, the providing apparatus 1 provides one or more items of candidate query information corresponding to current subtitle information.
Preferably, the providing apparatus 1 further includes means (not shown in the drawings, hereinafter referred to as "layer generating means") for generating layer information of the candidate query information corresponding to the subtitle information, and means (not shown in the drawings, hereinafter referred to as "layer outputting means") for outputting the layer information accordingly when the subtitle information corresponding to the played audio/video is presented, to provide the user with one or more candidate query information items corresponding to the current subtitle information.
The layer generating means generates layer information for the candidate query information corresponding to the subtitle information.
Preferably, the background of the layer is transparent.
Specifically, the layer generating means generates a layer of the same size as the current playing interface and adds to it a search trigger element used to trigger a search operation based on candidate query information, for example a tag or a link corresponding to each item of candidate query information.
Preferably, the layer generating means adds to the layer the search link information generated by the link generating means for each item of candidate query information.
Preferably, the layer generating means determines the display position of each of the one or more items of candidate query information on the current playing interface and adds the search trigger element or search link information for each item in the region of the layer that corresponds to its display position.
Then, when presenting the subtitle information corresponding to the played audio/video, the layer outputting means outputs the layer information accordingly to provide the user with the one or more items of candidate query information corresponding to the current subtitle information.
Preferably, when presenting subtitle information corresponding to the played audio/video, the providing apparatus 1 provides one or more items of candidate query information corresponding to the current subtitle information in a predetermined display style.
For example, the items of candidate query information in the subtitle are shown in bold or in a different font color.
Continuing with the foregoing first example, query_1 is included in subtitle information sub_t1 corresponding to time information time_1, and query_2 and query_3 are included in subtitle information sub_t2 corresponding to time information time_2.
The layer generating means generates a layer layer_1 for the candidate query information query_1 corresponding to the subtitle information sub_t1, and adds a tag label_1, used to trigger a search operation based on query_1, in the area of layer_1 corresponding to the display position of query_1. Similarly, the layer generating means generates a layer layer_2 corresponding to query_2 and query_3 and adds tags label_2 and label_3 to layer_2 for triggering search operations based on query_2 and query_3.
When the subtitle information sub_t1 is presented, the layer outputting means outputs the layer layer_1 so that it covers the current playing interface, thereby providing the user with the candidate query information query_1 corresponding to the current subtitle information sub_t1. Similarly, when the subtitle information sub_t2 is presented, the layer outputting means outputs the corresponding layer layer_2 to provide the user with the candidate query information query_2 and query_3 corresponding to the current subtitle information sub_t2.
With continued reference to fig. 2, when the user selects one of the provided items of candidate query information on the current playing interface, the searching means 2 performs a search operation based on the selected candidate query information.
Specifically, the searching means 2 determines the candidate query information selected by the user based on the user's selection operation and performs the search operation based on the selected candidate query information.
The selection operation includes any operation that can be used to select candidate query information, such as a click, or a long press on a touch-screen device.
Preferably, the query means further comprises means (not shown, hereinafter referred to as "search triggering means") for triggering a predetermined search engine to search for a corresponding search result based on the selected candidate query information.
The search triggering means triggers a predetermined search engine to search based on the selected candidate query information so as to obtain the corresponding search results.
Continuing the first example, if the user clicks the tag label_1 corresponding to query_1 on the playing interface, the search triggering means triggers the search engine to search with the selected candidate query information query_1 and obtains the search results corresponding to query_1.
Preferably, the query means further includes means for presenting search results obtained after performing a search operation based on the selected candidate query information to the user (not shown, hereinafter referred to as "result presenting means").
The result presentation means presents search results obtained after performing a search operation based on the selected candidate query information to the user.
In particular, the result presentation means may present the obtained search results to the user in a new page.
Alternatively, the result presentation means generates a window in the current playing interface to present the obtained search results to the user.
Preferably, the result presentation means further includes means for prompting a user whether viewing of the search result is required (not shown in the figure, hereinafter referred to as "prompting means"), and means for presenting the search result to the user when the user determines that viewing is required (not shown in the figure, hereinafter referred to as "sub-presentation means").
The prompting means prompts the user whether the search results need to be viewed.
When the user confirms that viewing is required, the sub-presentation means presents the search results to the user.
Continuing the first example, the prompting means displays a pop-up window asking the user whether to view the search results; if the user chooses to view them, the sub-presentation means presents the search results corresponding to the candidate query information query_1 in a new page.
According to the solution of the invention, by providing candidate query information corresponding to the current subtitle information, the user can directly select an item of interest from the displayed subtitles and search for it while watching a video, which offers a brand-new way to initiate a search quickly, reduces user operations, and improves efficiency.
The software program of the present invention may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present invention (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various functions or steps.
Furthermore, portions of the present invention may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present invention by way of operation of the computer. Program instructions for invoking the inventive methods may be stored in fixed or removable recording media and/or transmitted via a data stream in a broadcast or other signal bearing medium and/or stored within a working memory of a computer device operating according to the program instructions. An embodiment according to the invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the invention as described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims can also be implemented by means of software or hardware by means of one unit or means. The terms first, second, etc. are used to denote a name, but not any particular order.
While the foregoing particularly illustrates and describes exemplary embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the claims. The protection sought herein is as set forth in the claims below. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. a method for querying based on caption information, wherein the method comprises the steps of:
-providing one or more candidate query information items corresponding to the current subtitle information while presenting the subtitle information corresponding to the played audio/video;
-when the user selects one of the provided items of candidate query information on the current playing interface, performing a search operation based on the selected candidate query information.
2. The method of clause 1, wherein the step of providing one or more candidate query information corresponding to the current subtitle information when presenting the subtitle information corresponding to the played audio/video further comprises the steps of:
-generating layer information of the candidate query information corresponding to the subtitle information;
-upon presentation of subtitle information corresponding to the played audio/video, outputting said layer information accordingly to provide the user with one or more candidate query information corresponding to the current subtitle information.
3. The method of clause 1 or 2, wherein the step of providing one or more candidate query information corresponding to the current subtitle information when presenting the subtitle information corresponding to the played audio/video further comprises the steps of:
-providing one or more candidate query information items corresponding to the current subtitle information in a predetermined display style when presenting subtitle information corresponding to the played audio/video.
4. The method according to any one of clauses 1 to 3, wherein the method further comprises the steps of:
-triggering a predetermined search engine to search based on the selected candidate query information to obtain corresponding search results.
5. The method according to any one of clauses 1 to 4, wherein the method further comprises the steps of:
-presenting search results obtained after performing a search operation based on the selected candidate query information to the user.
6. The method of clause 4 or 5, wherein the step of presenting search results obtained after performing a search operation based on the selected candidate query information to the user further comprises the steps of:
-prompting the user if the search results need to be viewed;
-presenting the search results to the user when the user determines that viewing is required.
7. The method according to any one of clauses 1 to 6, wherein the method further comprises the steps of:
-acquiring subtitle information for audio/video;
-determining one or more candidate query information available for making a query based on the subtitle information.
8. The method of clause 7, wherein the step of determining one or more candidate query information available for querying based on the caption information further comprises the steps of:
-semantically processing the content information in the subtitle information to determine one or more candidate query information available for making a query.
9. The method of clause 7 or 8, wherein the method further comprises the steps of:
-generating search link information corresponding to the one or more candidate query information, respectively.
10. A query apparatus for performing a query based on subtitle information, wherein the query apparatus comprises:
Means for providing one or more candidate query information items corresponding to current subtitle information while presenting subtitle information corresponding to the played audio/video;
means for performing a search operation based on the selected candidate query information when the user selects one of the provided items of candidate query information on the current playing interface.
11. The query device of clause 10, wherein the means for providing one or more candidate query information corresponding to the current subtitle information when presenting the subtitle information corresponding to the played audio/video further comprises:
means for generating layer information corresponding to the subtitle information including a search link corresponding to the candidate query information;
And means for outputting the layer information accordingly when presenting subtitle information corresponding to the played audio/video, to provide the user with one or more candidate query information items corresponding to the current subtitle information.
12. The query means of clause 10 or 11, wherein the means for providing one or more candidate query information corresponding to the current subtitle information when presenting the subtitle information corresponding to the played audio/video is further for:
-providing one or more candidate query information items corresponding to the current subtitle information in a predetermined display style when presenting subtitle information corresponding to the played audio/video.
13. The query device according to any one of clauses 10 to 12, wherein the query device further comprises:
and means for triggering a predetermined search engine to search based on the selected candidate query information to obtain corresponding search results.
14. The query device according to any one of clauses 10 to 13, wherein the query device further comprises:
Means for presenting search results obtained after performing a search operation based on the selected candidate query information to the user.
15. The query apparatus of clause 13 or 14, wherein the means for presenting search results obtained after performing a search operation based on the selected candidate query information to the user further comprises:
means for prompting a user if the search results need to be viewed;
Means for presenting the search results to a user when the user determines that viewing is required.
16. The query device according to any one of clauses 8 to 12, wherein the query device further comprises:
means for acquiring subtitle information for audio/video;
Means for determining one or more candidate query information available for conducting a query based on the caption information.
17. The query apparatus of clause 16, wherein the means for determining one or more candidate query information available for querying based on the caption information further comprises:
Means for semantically processing content information in the caption information to determine one or more candidate query information items that are available for query.
18. The query device according to clause 16 or 17, wherein the query device further comprises:
means for generating search link information corresponding to the one or more candidate query information, respectively.

Claims (8)

1. A method for querying based on caption information, wherein the method comprises the steps of:
-generating search link information corresponding to one or more candidate query information, respectively, the one or more candidate query information corresponding to subtitle information of a current interface of the played audio/video;
-determining a display position of each of the one or more candidate query information items at the current interface, and adding a search trigger element or search link information corresponding to each candidate query information item, respectively, in a region of the layer information corresponding to the display position;
-upon presentation of the subtitle information for the current interface of the played audio/video, outputting said layer information corresponding to the subtitle information for the current interface to provide the user with one or more candidate query information corresponding to the subtitle information for the current interface, said candidate query information being displayed in a predetermined pattern, and said search link information;
-when the user selects one of the provided items of candidate query information on the current playing interface, performing a search operation based on the search link information corresponding to the selected candidate query information;
-prompting the user, via a pop-up window, whether to view the search results obtained after performing the search operation based on the selected candidate query information;
-presenting the search results to the user when the user determines that viewing is required.
2. The method of claim 1, wherein the method further comprises the steps of:
-triggering a predetermined search engine to search based on the selected candidate query information to obtain corresponding search results.
3. The method of claim 1, wherein the method further comprises the steps of:
-acquiring subtitle information for audio/video;
-determining one or more candidate query information available for making a query based on the subtitle information.
4. The method of claim 3, wherein the step of determining one or more candidate query information available for querying based on the caption information further comprises the steps of:
-semantically processing the content information in the subtitle information to determine one or more candidate query information available for making a query.
5. A query apparatus for performing a query based on subtitle information, wherein the query apparatus comprises:
Means for generating search link information corresponding to one or more candidate query information corresponding to subtitle information of a current interface of the played audio/video, respectively;
means for determining a display position of each of the one or more candidate query information items on the current interface, and adding a search trigger element or search link information corresponding to each candidate query information item, respectively, to an area corresponding to the display position in the layer information item;
means for outputting the layer information corresponding to the subtitle information of the current interface when the subtitle information of the played audio/video current interface is presented, so as to provide one or more items of candidate query information corresponding to the subtitle information of the current interface and the search link information to the user, the candidate query information being displayed in a predetermined style;
means for performing a search operation based on search link information corresponding to the selected candidate query information when a user selects one of the provided candidate query information on the current playback interface;
means for prompting the user, via a pop-up window, whether to view the search results obtained after performing a search operation based on the selected candidate query information;
Means for presenting the search results to a user when the user determines that viewing is required.
6. The query device of claim 5, wherein the query device further comprises:
and means for triggering a predetermined search engine to search based on the selected candidate query information to obtain corresponding search results.
7. The query device of claim 5, wherein the query device further comprises:
means for acquiring subtitle information for audio/video;
Means for determining one or more candidate query information available for conducting a query based on the caption information.
8. The query device of claim 7, wherein the means for determining one or more candidate query information items available for querying based on the caption information further comprises:
Means for semantically processing content information in the caption information to determine one or more candidate query information items that are available for query.
CN201610140826.7A 2016-03-11 2016-03-11 Method and device for inquiring based on subtitle information Active CN107180058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610140826.7A CN107180058B (en) 2016-03-11 2016-03-11 Method and device for inquiring based on subtitle information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610140826.7A CN107180058B (en) 2016-03-11 2016-03-11 Method and device for inquiring based on subtitle information

Publications (2)

Publication Number Publication Date
CN107180058A CN107180058A (en) 2017-09-19
CN107180058B (en) 2024-06-18

Family

ID=59830803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610140826.7A Active CN107180058B (en) 2016-03-11 2016-03-11 Method and device for inquiring based on subtitle information

Country Status (1)

Country Link
CN (1) CN107180058B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110620960B (en) * 2018-06-20 2022-01-25 阿里巴巴(中国)有限公司 Video subtitle processing method and device
CN113068077B (en) * 2020-01-02 2023-08-25 腾讯科技(深圳)有限公司 Subtitle file processing method and device
CN111753135B (en) * 2020-05-21 2024-02-06 北京达佳互联信息技术有限公司 Video display method, device, terminal, server, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101296362A (en) * 2007-04-25 2008-10-29 三星电子株式会社 Method and system for providing users with access to information of potential interest
CN101595481A (en) * 2007-01-29 2009-12-02 三星电子株式会社 Method and system for facilitating information search on electronic device
CN104102683A (en) * 2013-04-05 2014-10-15 联想(新加坡)私人有限公司 Contextual queries for augmenting video display

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8115869B2 (en) * 2007-02-28 2012-02-14 Samsung Electronics Co., Ltd. Method and system for extracting relevant information from content metadata
CN100423004C (en) * 2006-10-10 2008-10-01 北京新岸线网络技术有限公司 Video search dispatching system based on content
CN101262494A (en) * 2008-01-23 2008-09-10 华为技术有限公司 Method, client, server and system for processing published information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101595481A (en) * 2007-01-29 2009-12-02 三星电子株式会社 Method and system for facilitating information search on electronic device
CN101296362A (en) * 2007-04-25 2008-10-29 三星电子株式会社 Method and system for providing users with access to information of potential interest
CN104102683A (en) * 2013-04-05 2014-10-15 联想(新加坡)私人有限公司 Contextual queries for augmenting video display

Also Published As

Publication number Publication date
CN107180058A (en) 2017-09-19

Similar Documents

Publication Publication Date Title
US10031649B2 (en) Automated content detection, analysis, visual synthesis and repurposing
KR102436734B1 (en) method for confirming a position of video playback node, apparatus, electronic equipment, computer readable storage medium and computer program
JP6015568B2 (en) Method, apparatus, and program for generating content link
KR102028198B1 (en) Device for authoring video scene and metadata
US20200322684A1 (en) Video recommendation method and apparatus
KR101967036B1 (en) Methods, systems, and media for searching for video content
US20090150784A1 (en) User interface for previewing video items
US10977317B2 (en) Search result displaying method and apparatus
US20130308922A1 (en) Enhanced video discovery and productivity through accessibility
CN109558513B (en) Content recommendation method, device, terminal and storage medium
CN111309200B (en) Method, device, equipment and storage medium for determining extended reading content
KR20150052123A (en) Aiding discovery of program content by providing deeplinks into most interesting moments via social media
US20180276298A1 (en) Analyzing user searches of verbal media content
US9794638B2 (en) Caption replacement service system and method for interactive service in video on demand
KR102786461B1 (en) Video timed anchors
JP2015204105A (en) Method and apparatus for providing recommendation information
CN104822078B (en) The occlusion method and device of a kind of video caption
US20170242554A1 (en) Method and apparatus for providing summary information of a video
CN113747230B (en) Audio and video processing method and device, electronic equipment and readable storage medium
CN113553466A (en) Page display method, apparatus, medium and computing device
CN107180058B (en) Method and device for inquiring based on subtitle information
CN116049490A (en) Material searching method and device and electronic equipment
US20100281046A1 (en) Method and web server of processing a dynamic picture for searching purpose
US20140181672A1 (en) Information processing method and electronic apparatus
JP6811811B1 (en) Metadata generation system, video content management system and programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant