CN111601142B - Subtitle display method and display equipment - Google Patents
- Publication number: CN111601142B (application CN202010383784.6A)
- Authority: CN (China)
- Prior art keywords: subtitle, stream data, caption, elementary stream, pipeline
- Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
- H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4302 - Content synchronisation processes, e.g. decoder synchronisation
- H04N21/435 - Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4884 - Data services, e.g. news ticker, for displaying subtitles
- H04N21/8586 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot, by using a URL
Abstract
The present application discloses a subtitle display method and a display device for dynamically loading subtitles. The method comprises: in response to a user-input control instruction indicating that subtitles are to be displayed, displaying image content and subtitle content on a display upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, wherein the audio/video elementary stream data and the subtitle elementary stream data are carried separately.
Description
Technical Field
The present application relates to the field of display technologies, and in particular to a subtitle display method and a display device.
Background
Media files provided by an external server include audio data, video data, and subtitle data. Generally, when the subtitle data and the audio and video data belong to the same URL, the subtitles are called embedded subtitles of the media file; when the subtitle data and the audio and video data belong to different URLs, the subtitles are called plug-in (external) subtitles of the media file. Based on the subtitle data, the user can freely select a subtitle language.
However, for plug-in subtitles, the subtitle data must be imported before the audio/video file is played in order to display subtitles, so the need of a user to dynamically load subtitles while the audio/video file is playing cannot be met.
Disclosure of Invention
The embodiments of the present application provide a subtitle display method and a display device for dynamically loading subtitles, thereby meeting this user need.
In a first aspect, there is provided a display device comprising:
a display;
a user interface;
a network module for browsing and/or downloading media files from a server;
a decoder for decoding a media file;
a controller for performing:
in response to a user-input control instruction indicating that subtitles are to be displayed, displaying image content and subtitle content on the display upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group;
wherein the audio/video elementary stream data and the subtitle elementary stream data are carried separately.
In some embodiments, the controller specifically performs:
in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, disconnecting the existing subtitle parsing element in the player pipeline from the universal subtitle rendering element, releasing the existing subtitle parsing element, and creating a new subtitle parsing element in the player pipeline, wherein the universal subtitle rendering element is used for rendering subtitle elementary stream data in a text format or a picture format.
In some embodiments, the controller is further configured to perform:
in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that no subtitle elementary stream data exists in the current play group, directly creating a subtitle parsing element in the player pipeline.
In some embodiments, the controller is further configured to perform:
in response to a user-input control instruction indicating that subtitles are to be displayed, waiting for the audio/video elementary stream data to be activated upon determining that the audio/video elementary stream data in the current player pipeline is not activated;
creating a subtitle parsing element in the player pipeline upon determining that the audio/video elementary stream data has been activated.
In some embodiments, the subtitle parsing element includes at least a subtitle downloading module, a subtitle parsing module, and a subtitle synchronization module;
the subtitle downloading module is used for downloading subtitle data according to the set subtitle path;
the subtitle parsing module is used for parsing the subtitle data downloaded by the subtitle downloading module to obtain subtitle elementary stream data;
the subtitle synchronization module is configured to determine, from the subtitle elementary stream data obtained by the subtitle parsing module, the subtitle elementary stream data that matches the audio/video elementary stream data currently played by the player pipeline, and to output the matched subtitle elementary stream data to the universal subtitle rendering module in the player pipeline, so that the universal subtitle rendering module renders it.
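For illustration only, the interfaces of these three modules can be sketched in C++ as follows; every type and method name here (SubtitleEsPacket, SubtitleDownloader, and so on) is a hypothetical assumption for the sketch, not taken from the patent:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical subtitle elementary stream packet produced by the parser.
struct SubtitleEsPacket {
    int64_t ptsStart;           // start of the subtitle's display timestamp range
    int64_t ptsEnd;             // end of the subtitle's display timestamp range
    std::vector<uint8_t> data;  // text- or picture-format payload
};

class SubtitleDownloader {
public:
    // Downloads raw subtitle data (local or network media) from the set path.
    std::vector<uint8_t> download(const std::string& subtitleUrl);
};

class SubtitleParser {
public:
    // Parses downloaded data (e.g. TTML/WEBVTT/SRT) into ES packets.
    std::vector<SubtitleEsPacket> parse(const std::vector<uint8_t>& raw);
};

class SubtitleSynchronizer {
public:
    // Picks the packet matching the pipeline's current A/V timestamp; the
    // caller pushes it to the universal subtitle rendering module.
    std::optional<SubtitleEsPacket> match(
        const std::vector<SubtitleEsPacket>& packets, int64_t currentAvPts);
};
```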
In a second aspect, a method for displaying subtitles is provided, including:
in response to a user-input control instruction indicating that subtitles are to be displayed, displaying image content and subtitle content on a display upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group;
wherein the audio/video elementary stream data and the subtitle elementary stream data are carried separately.
In some embodiments, the method further comprises:
in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, disconnecting the existing subtitle parsing element in the player pipeline from the universal subtitle rendering element, releasing the existing subtitle parsing element, and creating a new subtitle parsing element in the player pipeline, wherein the universal subtitle rendering element is used for rendering subtitle elementary stream data in a text format or a picture format.
In some embodiments, the method further comprises:
in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that no subtitle elementary stream data exists in the current play group, directly creating a subtitle parsing element in the player pipeline.
In some embodiments, the method further comprises:
in response to a user-input control instruction indicating that subtitles are to be displayed, waiting for the audio/video elementary stream data to be activated upon determining that the audio/video elementary stream data in the current player pipeline is not activated;
creating a subtitle parsing element in the player pipeline upon determining that the audio/video elementary stream data has been activated.
In some embodiments, the subtitle parsing element includes at least a subtitle downloading module, a subtitle parsing module, and a subtitle synchronization module;
the subtitle downloading module is used for downloading subtitle data according to the set subtitle path;
the subtitle parsing module is used for parsing the subtitle data downloaded by the subtitle downloading module to obtain subtitle elementary stream data;
the subtitle synchronization module is configured to determine, from the subtitle elementary stream data obtained by the subtitle parsing module, the subtitle elementary stream data that matches the audio/video elementary stream data currently played by the player pipeline, and to output the matched subtitle elementary stream data to the universal subtitle rendering module in the player pipeline, so that the universal subtitle rendering module renders it.
In the above embodiments, in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, the existing subtitle parsing element is disconnected from the universal subtitle rendering element in the player pipeline and released, and a new subtitle parsing element is created in the player pipeline, so that subtitles can be loaded dynamically during audio/video playback.
Drawings
Fig. 1A schematically illustrates an operation scenario between the display device 200 and the control apparatus 100;
fig. 1B is a block diagram schematically illustrating a configuration of the control apparatus 100 in fig. 1A;
fig. 1C is a block diagram schematically illustrating a configuration of the display device 200 in fig. 1A;
FIG. 1D is a block diagram illustrating an architectural configuration of an operating system in memory of display device 200;
FIG. 2 is a schematic diagram of a player pipeline in the related art;
fig. 3 is a schematic structural diagram of a player pipeline provided in an embodiment of the present application;
FIG. 4A is a schematic diagram of a web player setup menu;
FIG. 4B is a diagram illustrating a display interface 400 of the web page player in a subtitle-free state;
FIG. 4C is a diagram illustrating options set on a display interface of the web player;
FIG. 4D is a schematic diagram of a caption selection interface;
FIG. 4E is a diagram of a display interface 400 of a web player with subtitles;
fig. 5 is a flowchart illustrating a subtitle display method according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some of the embodiments of the present application, not all of them.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments shown in the present application without inventive effort shall fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure may be utilized independently and separately from the other aspects of the disclosure.
The terms "comprises" and "comprising," and any variations thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1A is a schematic diagram illustrating an operation scenario between the display device 200 and the control apparatus 100. As shown in fig. 1A, the control apparatus 100 and the display device 200 may communicate with each other in a wired or wireless manner.
The control apparatus 100 is configured to control the display device 200: it receives operation instructions input by the user and converts them into instructions that the display device 200 can recognize and respond to, serving as an intermediary between the user and the display device 200. For example, when the user operates the channel up/down keys on the control apparatus 100, the display device 200 responds with the channel up/down operation.
The control apparatus 100 may be a remote controller 100A, which communicates with the display device 200 via infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods, and controls the display device 200 wirelessly or by other wired means. The user may input user instructions through keys on the remote controller, voice input, control panel input, etc. to control the display device 200. For example, the user can input the corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key, etc. on the remote controller to control the display device 200.
The control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, or a notebook computer. For example, the display device 200 may be controlled by an application program running on the smart device. Through configuration, the application program may provide the user with various controls via an intuitive user interface (UI) on a screen associated with the smart device.
For example, the mobile terminal 100B may install a software application that connects and communicates with the display device 200 via a network communication protocol for one-to-one control operation and data communication. For instance, the mobile terminal 100B may establish a control instruction protocol with the display device 200, so that operating the various function keys or virtual buttons of the user interface provided on the mobile terminal 100B implements the functions of the physical keys arranged on the remote controller 100A. Audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200 to implement a synchronous display function.
The display device 200 may be implemented as a television and may provide a smart network television function that combines a broadcast-receiving television function with computer support functions. Examples of the display device include a digital television, a web television, a smart television, an Internet Protocol television (IPTV), and the like.
The display device 200 may be a liquid crystal display, an organic light-emitting display, or a projection display device. The specific display device type, size, resolution, etc. are not limited.
The display device 200 also performs data communication with the server 300 through various communication means. The display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 300 may provide various content and interactions to the display device 200. For example, the display device 200 may send and receive information, such as receiving Electronic Program Guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library. The servers 300 may be one or more groups of servers, of one or more types. The server 300 may also provide other web service content, such as video on demand and advertising services.
Fig. 1B is a block diagram illustrating the configuration of the control device 100. As shown in fig. 1B, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160.
The controller 110 includes a Random Access Memory (RAM) 111, a Read-Only Memory (ROM) 112, a processor 113, a communication interface, and a communication bus. The controller 110 controls the operation of the control apparatus 100, the communication and cooperation among its internal components, and external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the display device 200.
The memory 120 stores various operation programs, data, and applications for driving and controlling the control apparatus 100 under the control of the controller 110, and may store various control signal commands input by the user.
The communicator 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 110. For example, the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the display device 200 via the communicator 130, and may receive signals transmitted by the display device 200 via the communicator 130. The communicator 130 may include an infrared signal interface 131 and a radio-frequency signal interface 132. For example, when the infrared signal interface is used, a user input instruction needs to be converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. For another example, when the radio-frequency signal interface is used, a user input instruction needs to be converted into a digital signal, modulated according to the radio-frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio-frequency transmitting terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding controlling the display apparatus 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the display apparatus 200, or outputs an image or voice signal received by the display apparatus 200. Here, the output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like. For example, the remote controller 100A may receive an output signal such as audio, video, or data from the output interface 150, and display the output signal in the form of an image on the display 154, in the form of audio on the sound output interface 153, or in the form of vibration on the vibration interface 152.
The power supply 160 provides operational power support for the elements of the control apparatus 100 under the control of the controller 110, and may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the display device 200 is exemplarily illustrated in fig. 1C. As shown in fig. 1C, the display apparatus 200 may further include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio input interface 285, and a power supply 290.
The tuner demodulator 210 receives broadcast television signals in a wired or wireless manner, may perform processing such as amplification, frequency mixing, and resonance, and demodulates, from among multiple wireless or wired broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the user, together with additional information (e.g., EPG data).
The tuner demodulator 210 responds to the television channel frequency selected by the user and the television signal carried by that frequency, under the control of the controller 250.
The tuner demodulator 210 can receive television signals in various ways depending on the broadcasting system, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or Internet broadcasting; depending on the modulation type, a digital or analog modulation mode may be adopted; and it can demodulate analog or digital signals according to the kind of television signal received.
In other exemplary embodiments, the tuner demodulator 210 may also be located in an external device, such as an external set-top box. In this case, the set-top box outputs a television signal after modulation and demodulation and inputs it into the display device 200 through the external device interface 240.
The communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example, the display device 200 may transmit content data to an external device connected via the communicator 220, or browse and download content data from an external device connected via the communicator 220. The communicator 220 may include network communication protocol modules or near-field communication protocol modules, such as a WiFi module 221, a Bluetooth communication protocol module 222, and a wired Ethernet communication protocol module 223, so that the communicator 220 may, under the control of the controller 250, receive control signals from the control apparatus 100 in the form of WiFi signals, Bluetooth signals, radio-frequency signals, and the like.
The detector 230 is a component of the display device 200 for collecting signals from the external environment or from interaction with the outside. The detector 230 may include an image collector 231, such as a camera or video camera, which may be used to collect the external environment scene so as to adaptively change the display parameters of the display device 200, and to acquire user attributes or user gestures so as to implement interaction between the display device and the user. A light receiver 232 may also be included to collect the ambient light intensity so as to adapt the display parameters of the display device 200, and so on.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor; by sensing the ambient temperature, the display device 200 may adaptively adjust the display color temperature of the image. For example, when the ambient temperature is high, the display device 200 may adjust the image toward a cooler color temperature; when it is low, a warmer color temperature may be used.
In some other exemplary embodiments, the detector 230 may further include a sound collector, such as a microphone, which may be used to receive the user's voice, e.g., a voice signal carrying a control instruction for controlling the display device 200; or it may collect ambient sounds to identify the type of the surrounding scene, enabling the display device 200 to adapt to ambient noise.
The external device interface 240 is a component that enables data transmission between the display device 200 and external devices under the control of the controller 250. The external device interface 240 may be connected to external devices such as a set-top box, a game device, or a notebook computer in a wired/wireless manner, and may receive data from the external device such as a video signal (e.g., moving images), an audio signal (e.g., music), and additional information (e.g., EPG data).
The external device interface 240 may include: a High Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260.
As shown in fig. 1C, the controller 250 includes a Random Access Memory (RAM) 251, a Read-Only Memory (ROM) 252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM 251, the ROM 252, the graphics processor 253, and the CPU processor 254 are connected to one another by the communication bus 256 via the communication interface 255.
The ROM 252 stores various system boot instructions. When the display device 200 is powered on upon receiving a power-on signal, the CPU processor 254 executes the system boot instructions in the ROM 252, copies the operating system stored in the memory 260 into the RAM 251, and begins booting the operating system. After the operating system has started, the CPU processor 254 copies the various application programs in the memory 260 into the RAM 251 and then starts and runs them.
The graphics processor 253 generates screen images of various graphic objects, such as icons, images, and operation menus. It may include an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which generates the various objects produced by the arithmetic unit and displays the rendered result on the display 275.
The CPU processor 254 executes the operating system and application program instructions stored in the memory 260 and, according to received user input instructions, executes the processing of various applications, data, and content, so as to finally display and play various audio/video content.
In some exemplary embodiments, the CPU processor 254 may comprise a plurality of processors: one main processor and one or more sub-processors. The main processor performs some initialization operations of the display device 200 in a display device preload mode and/or displays the screen in normal mode. The sub-processor(s) perform operations in states such as standby mode.
The communication interface 255 may include a first interface to an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user input command for selecting a GUI object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user input command.
The object may be any selectable object, such as a hyperlink or an icon. The operation related to the selected object is, for example, displaying the linked hyperlink page, document, or image, or running the program corresponding to the icon. The user input command for selecting the GUI object may be a command input through an input device (e.g., a mouse, keyboard, or touch pad) connected to the display device 200, or a voice command corresponding to speech uttered by the user.
The memory 260 stores various types of data, software programs, or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or nonvolatile memory. The term "memory" includes the memory 260, the RAM 251 and the ROM 252 of the controller 250, and memory cards in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting GUI objects are stored.
In some embodiments, the memory 260 is specifically configured to store drivers and related data for the tuner demodulator 210, the communicator 220, the detector 230, the external device interface 240, the video processor 270, the display 275, the audio processor 280, and the like, external data (e.g., audio-visual data) received from the external device interface, or user data (e.g., key information, voice information, touch information, and the like) received from the user interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
A block diagram of the architectural configuration of the operating system in the memory of the display device 200 is illustrated in fig. 1D. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
Applications built into the system and non-system-level applications belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a NETFLIX application, a setup application, and a media center application. These applications may be implemented as Web applications that execute on a WebKit engine, and in particular may be developed and executed based on HTML, Cascading Style Sheets (CSS), and JavaScript.
HTML (HyperText Markup Language) is the standard markup language for creating web pages; it describes web pages with markup tags, which can describe text, graphics, animation, sound, tables, links, etc. A browser reads an HTML document, interprets the content of the tags in the document, and displays it in the form of a web page.
CSS (Cascading Style Sheets) is a computer language for expressing the style of HTML documents and may be used to define style structures such as fonts, colors, and positions. CSS styles can be stored directly in the HTML web page or in a separate style file, enabling control over the styles in the web page.
JavaScript is a language for web page programming that can be inserted into an HTML page and is interpreted and executed by the browser. The interaction logic of a Web application is implemented in JavaScript, which can encapsulate JavaScript extension interfaces through the browser to communicate with the kernel layer.
the middleware layer may provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may be implemented as multimedia and hypermedia information coding experts group (MHEG) middleware related to data broadcasting, DLNA middleware which is middleware related to communication with an external device, middleware which provides a browser environment in which each application program in the display device operates, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, providing device driver services for various hardware, for example: a display driver for the display, a camera driver for the camera, a key driver for the remote controller, a WiFi driver for the WiFi module, an audio driver for the audio output interface, a power management driver for the power management (PM) module, and so on.
A user interface 265 receives various user interactions. Specifically, it is used to transmit an input signal of a user to the controller 250 or transmit an output signal from the controller 250 to the user. For example, the remote controller 100A may transmit an input signal, such as a power switch signal, a channel selection signal, a volume adjustment signal, etc., input by the user to the user interface 265, and then the input signal is transferred to the controller 250 through the user interface 265; alternatively, the remote controller 100A may receive an output signal such as audio, video, or data output from the user interface 265 via the controller 250, and display the received output signal or output the received output signal in audio or vibration form.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on the display 275, and the user interface 265 receives the user input commands through the GUI. Specifically, the user interface 265 may receive user input commands for controlling the position of a selector in the GUI to select different objects or items.
Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user interface 265 receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 270 is configured to receive an external video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 275.
Illustratively, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes the input audio/video data stream; for example, an input MPEG-2 stream (based on the compression standard for digital storage media moving images and audio) is demultiplexed into a video signal and an audio signal.
The video decoding module processes the demultiplexed video signal, including decoding, scaling, and the like.
The image synthesis module superimposes and mixes the GUI signal, generated by the graphics generator according to user input, with the scaled video image to generate an image signal for display.
The frame rate conversion module converts the frame rate of the input video, for example converting an input 60 Hz video frame rate to 120 Hz or 240 Hz, commonly implemented using frame interpolation.
The display formatting module converts the signal output by the frame rate conversion module into a signal conforming to the display format of the display, for example converting its format to output an RGB data signal.
The display 275 receives the image signal output by the video processor 270 and displays video, images, and the menu manipulation interface. For example, the display may show video from the broadcast signal received by the tuner demodulator 210, video input from the communicator 220 or the external device interface 240, or images stored in the memory 260. The display 275 also displays the user manipulation interface (UI) generated in the display device 200 for controlling the display device 200.
The display 275 may include a display screen assembly for presenting the picture and a driving assembly for driving the display of images. Alternatively, if the display 275 is a projection display, it may include a projection device and a projection screen.
The audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played by the speaker 286.
Illustratively, audio processor 280 may support various audio formats. Such as MPEG-2, MPEG-4, Advanced Audio Coding (AAC), high efficiency AAC (HE-AAC), and the like.
In other exemplary embodiments, video processor 270 may comprise one or more chips. Audio processor 280 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 270 and the audio processor 280 may be separate chips or may be integrated with the controller 250 in one or more chips.
The power supply 290 supplies power to the display device 200 from an external power source under the control of the controller 250. The power supply 290 may be a power supply circuit built into the display device 200, or a power supply installed outside the display device 200.
Plug-in subtitles are subtitles whose data is separate from the audio data and video data (hereinafter, audio/video data) of the media file; that is, the subtitle data and the audio/video data belong to different URLs. At present, when a media file containing plug-in subtitles is played, in order to ensure that the audio/video data and the subtitle data can be rendered synchronously, the subtitle data and the audio/video data to be played are placed into the same play group, and the path of the subtitle data, i.e., its Uniform Resource Locator (URL), must be set on the player in advance; the player then creates a subtitle parsing element in the player pipeline during initialization to implement subtitle display. Fig. 2 is a schematic diagram of the structure of a player pipeline in the related art.
The player pipeline illustrated in fig. 2 includes an audio/video parsing element, a subtitle parsing element, a stream synchronization element, and a pipeline output element.
The audio/video parsing element parses and buffers audio/video data and includes a video downloading module, a video format detection module, a media buffering module, an audio/video de-encapsulation module, and multi-buffer queues. The video downloading module downloads audio/video data according to the set audio/video path. The video format detection module identifies the format of the audio/video data. The media buffering module buffers the data downloaded by the video downloading module so that enough data is buffered for subsequent modules to process. The audio/video de-encapsulation module de-encapsulates the audio/video data to obtain audio elementary stream data and video elementary stream data, and may provide different functions for different media formats: for example, it provides both protocol de-encapsulation and format de-encapsulation for media in formats such as HLS, MSS, and DASH, while providing only format de-encapsulation for general network media. The multi-buffer queue module buffers elementary stream data in different coding formats output by the audio/video de-encapsulation module.
The subtitle parsing element parses subtitle data and includes a subtitle downloading module, a subtitle format detection module, a media buffering module, and a subtitle parsing module. The subtitle downloading module downloads subtitle data according to the set subtitle path and supports downloading of both local media and network media. The subtitle format detection module identifies the format of the subtitle data. The media buffering module buffers the entire external subtitle file. The subtitle parsing module parses subtitle data to obtain subtitle elementary stream data, and supports parsing subtitle data in different formats, for example text-format subtitle data such as TTML, WEBVTT, SRT, and ASS, and picture-format subtitle data such as DVB-Subtitle.
The stream synchronization element synchronizes the subtitle elementary stream data with the audio elementary stream data and video elementary stream data (hereinafter, audio/video elementary stream data). Note that, at present, the stream synchronization element is loaded only when both the audio/video parsing element and the subtitle parsing element have parsed out elementary stream data.
The pipeline output element is the output end of the whole player pipeline; it provides synchronized injection of the different types of streams, such as the video elementary stream, the audio elementary stream, and the subtitle elementary stream, into the platform chip, and includes a video elementary stream injection module, an audio elementary stream injection module, and a subtitle elementary stream injection module.
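To make the initialization-time constraint of this related-art design concrete, here is a hedged C++ sketch; the class and its methods are illustrative assumptions, not an actual player API:

```cpp
#include <optional>
#include <string>

// Hedged sketch of why the related-art pipeline (fig. 2) cannot load
// subtitles dynamically; all names here are illustrative.
class RelatedArtPlayer {
public:
    // The subtitle URL must already be known here, before the pipeline exists.
    void initialize(const std::string& avUrl,
                    const std::optional<std::string>& subtitleUrl) {
        createAvParsingElement(avUrl);        // download/detect/buffer/demux
        if (subtitleUrl) {
            createSubtitleParsingElement(*subtitleUrl);
        }
        // The stream synchronization element is loaded only once BOTH parsing
        // elements have produced elementary stream data.
        createStreamSyncElement();
        createPipelineOutputElement();        // video/audio/subtitle ES injection
    }
    // No entry point exists for attaching a subtitle parsing element after
    // initialize() has run, hence subtitles cannot be loaded mid-playback.
private:
    void createAvParsingElement(const std::string&) {}
    void createSubtitleParsingElement(const std::string&) {}
    void createStreamSyncElement() {}
    void createPipelineOutputElement() {}
};
```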
As can be seen from the above description, subtitle display can currently be achieved only if the path of the subtitle data is set on the player before the player is initialized. That is, if the path of the subtitle data is not set before initialization, the player cannot display subtitles when playing the video; in other words, dynamically loading subtitles while a video is playing is not supported.
Based on the above, the present application provides a subtitle display method for dynamically loading subtitles, thereby meeting this user need. The method is described below.
to facilitate understanding of the present application, the following first describes a player pipeline provided in an embodiment of the present application:
please refer to fig. 3, which is a schematic structural diagram of a player pipeline according to an embodiment of the present application. As shown in fig. 3, the player pipeline provided in the embodiment of the present application includes an audio/video parsing element, a subtitle parsing element, and a pipeline output element.
The structure and function of the audio/video parsing element are the same as those of the audio/video parsing element in fig. 2 and are not described again here.
The subtitle parsing element differs from that of fig. 2 in that the subtitle parsing element illustrated in fig. 3 further includes a subtitle synchronization module. The subtitle synchronization module determines, from the subtitle elementary stream data parsed by the subtitle parsing module, the subtitle elementary stream data that matches the audio/video elementary stream data currently played by the player pipeline, and outputs the matched subtitle elementary stream data to the universal subtitle rendering module in the pipeline output element. As used herein, "match" means that the display timestamp of the audio/video elementary stream data falls within the display time range of the subtitle elementary stream data, where the display time range is obtained by subtracting the subtitle rendering time from the display timestamp range of the subtitle elementary stream data.
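Expressed as code, this matching test might look like the following minimal sketch (the struct fields and the function are assumptions for illustration; the patent does not define an API):

```cpp
#include <cstdint>

// Minimal stand-in for a parsed subtitle elementary stream packet.
struct SubtitleEsPacket { int64_t ptsStart; int64_t ptsEnd; };

// A subtitle packet "matches" when the current audio/video display timestamp
// falls inside the subtitle's display range shifted earlier by the subtitle
// rendering time, so the rendered subtitle appears on screen exactly in time.
bool matches(int64_t avPts, const SubtitleEsPacket& sub, int64_t renderTime) {
    return avPts >= sub.ptsStart - renderTime &&
           avPts <= sub.ptsEnd   - renderTime;
}
```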
The pipeline output element differs from that of fig. 2 in that a universal subtitle rendering module replaces the subtitle elementary stream injection module of fig. 2. The universal subtitle rendering module includes a universal subtitle elementary stream injection module and a subtitle rendering module. The subtitle interface provided by the universal subtitle elementary stream injection module supports subtitles in both text format and picture format and injects subtitle elementary stream data into the subtitle rendering module. The subtitle rendering module renders the subtitle elementary streams injected by the universal subtitle elementary stream injection module. Therefore, the universal subtitle rendering module provided by the present application can render subtitle elementary stream data in either a text format or a picture format.
In some embodiments, whether or not a subtitle data path has been set on the player, the player creates the audio/video parsing element and the pipeline output element in the player pipeline upon initialization and loads the universal subtitle rendering module in the pipeline output element. With this processing, if the user requests subtitles while the player is playing audio/video, the player does not need to rebuild the pipeline output element in the player pipeline, which avoids disturbing the injection of the audio/video elementary streams and the smoothness of audio/video playback.
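A minimal sketch of this initialization policy follows, under the same caveat that every name here is an illustrative assumption:

```cpp
#include <memory>
#include <string>

// Illustrative element types; real implementations are platform-specific.
struct AvParsingElement { std::string url; };
struct PipelineOutputElement {
    bool universalRendererLoaded = false;  // renders text- and picture-format subtitles
};

class Player {
public:
    void initializePipeline(const std::string& avUrl) {
        av_  = std::make_unique<AvParsingElement>(AvParsingElement{avUrl});
        out_ = std::make_unique<PipelineOutputElement>();
        // Load the universal subtitle renderer up front, whether or not a
        // subtitle path has been set: a later dynamic subtitle load then never
        // rebuilds the output element, so A/V elementary stream injection and
        // playback smoothness are unaffected.
        out_->universalRendererLoaded = true;
        // ... connecting av_ to out_ is what "activates" the A/V ES data ...
    }
private:
    std::unique_ptr<AvParsingElement>      av_;
    std::unique_ptr<PipelineOutputElement> out_;
};
```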
In some embodiments, the user can choose whether the player supports dynamic subtitle loading according to actual needs. For example, fig. 4A is a schematic diagram of a setting menu of a web player. Before watching a video with the web player, the user may first configure the web player via the setting menu illustrated in fig. 4A; for example, checking the "allow dynamic subtitle loading" option box indicates that the web player supports dynamic subtitle loading. On this basis, the user can choose to load subtitles at any time while the web player plays a video.
Illustratively, fig. 4B shows the display interface 400 of the web player in the subtitle-free state. While the web player plays the video, the user may call up the setting options on the web player's display interface through a control apparatus such as a remote controller or a mouse; fig. 4C is a schematic diagram of the setting options on the display interface of the web player.
Based on fig. 4C, the user may trigger the icon 410 of fig. 4C through the control apparatus; after the icon 410 is triggered, the subtitle selection interface illustrated in fig. 4D may be displayed on top of the web player's display interface. As shown in fig. 4D, by triggering the icon 420 the user may choose to load either local subtitle data or network subtitle data.
After the user selects the subtitle data to be loaded, as shown in the flowchart of fig. 5, the application first sets the path of the subtitle data selected by the user on the player. When the player receives the user-input control instruction indicating that subtitles are to be displayed and determines that subtitles need to be displayed, it first checks whether the audio/video elementary stream data in the current player pipeline is activated, where "activated" means that the audio/video parsing element is connected to the pipeline output element. If so, it further checks whether subtitle elementary stream data exists in the current play group; if it does, the player stops subtitle rendering, disconnects the subtitle parsing element of fig. 3 from the universal subtitle rendering module in the pipeline output element, and releases the subtitle parsing element. Through this processing, if the player is currently playing subtitles and the user requests that new subtitles be loaded, playback of the current subtitles is stopped.
The subtitles to be loaded are then added to the current play group according to the newly set subtitle path, a new subtitle parsing element is created, and the plug-in subtitles are downloaded and parsed by the newly created subtitle parsing element.
After the subtitle parsing element obtains subtitle elementary stream data through parsing by its subtitle parsing module, the subtitle synchronization module performs frame dropping on the parsed subtitle elementary stream data according to the display timestamp of the audio/video elementary stream data currently played by the player pipeline, and determines the subtitle elementary stream data that matches the currently played audio/video elementary stream data. Next, a connection between the subtitle parsing element and the universal subtitle rendering module is established, and the matched subtitle elementary stream data is sent to the universal subtitle rendering module, which injects and renders it to display the subtitles. Fig. 4E is a schematic diagram of the display interface 400 of the web player with subtitles displayed.
In the flow shown in fig. 5, if the audio/video elementary stream data in the current player pipeline is found not to be activated, the player may wait for the audio/video elementary stream data to be activated. Here, once the application has placed the audio/video data to be played and the subtitle data to be loaded into the same play group, created the pipeline output element in the player pipeline, and connected the audio/video parsing element to the pipeline output element, the audio/video elementary stream data is determined to be activated. At that point a subtitle parsing element may be created in the player pipeline, and after it is created, subtitle display may be implemented using it.
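The decision flow of fig. 5 can be summarized in the following hedged C++ sketch; every type and helper here is an illustrative assumption rather than the actual implementation:

```cpp
#include <memory>
#include <string>

// Illustrative stand-ins for the pipeline state and its subtitle parser.
struct SubtitleParsingElement { std::string url; };

struct PlayerPipeline {
    bool avStreamsActive = false;  // A/V parsing element connected to output
    bool hasSubtitleEs   = false;  // subtitle ES data exists in the play group
    std::unique_ptr<SubtitleParsingElement> subParser;
};

// Illustrative stubs standing in for the real pipeline operations.
void waitForAvActivation(PlayerPipeline& p)           { p.avStreamsActive = true; }
void disconnectFromUniversalRenderer(PlayerPipeline&) {}

std::unique_ptr<SubtitleParsingElement>
createSubtitleParsingElement(const std::string& url) {
    return std::make_unique<SubtitleParsingElement>(SubtitleParsingElement{url});
}

// Entry point when the user issues a control instruction to display subtitles.
void onShowSubtitles(PlayerPipeline& p, const std::string& newSubtitleUrl) {
    if (!p.avStreamsActive) {
        // A/V ES not yet activated: wait for activation, then create a parser.
        waitForAvActivation(p);
    } else if (p.hasSubtitleEs) {
        // Subtitles already playing: stop rendering, detach the old parser
        // from the universal subtitle rendering module, and release it.
        disconnectFromUniversalRenderer(p);
        p.subParser.reset();
    }
    // Create a fresh parsing element for the newly selected subtitle path; it
    // downloads, parses, frame-drops to the current A/V timestamp, and then
    // reconnects to the universal subtitle rendering module.
    p.subParser = createSubtitleParsingElement(newSubtitleUrl);
    p.hasSubtitleEs = true;
}
```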
As can be seen from the above embodiments, in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, the existing subtitle parsing element is disconnected from the universal subtitle rendering element in the player pipeline and released, and a new subtitle parsing element is created in the player pipeline, so that subtitles can be loaded dynamically during audio/video playback.
While the preferred embodiments of the present application have been described, additional variations and modifications to these embodiments may occur to those skilled in the art once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all alterations and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made to the present application without departing from its spirit and scope. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include them as well.
Claims (8)
1. A display device, comprising:
a display;
a user interface;
a network module for browsing and/or downloading media files from a server;
a decoder for decoding a media file;
a controller for performing:
in response to a user-input control instruction indicating that subtitles are to be displayed, upon determining that the audio/video elementary stream data in the current player pipeline is activated and that subtitle elementary stream data exists in the current play group, disconnecting the existing subtitle parsing element in the player pipeline from the universal subtitle rendering element, releasing the existing subtitle parsing element, and creating a new subtitle parsing element in the player pipeline, wherein the universal subtitle rendering element is used for rendering subtitle elementary stream data in a text format or a picture format;
and displaying the image content and the subtitle content on the display, wherein the audio/video elementary stream data and the subtitle elementary stream data are carried separately.
2. The display device according to claim 1, wherein the controller is further configured to perform:
in response to a control instruction, input by a user, indicating that subtitles are to be displayed, when it is determined that the audio and video elementary stream data in the current player pipeline has been activated and no subtitle elementary stream data exists in the currently playing packet, creating a subtitle parsing element directly in the player pipeline.
3. The display device according to claim 1, wherein the controller is further configured to perform:
in response to a control instruction, input by a user, indicating that subtitles are to be displayed, when it is determined that the audio and video elementary stream data in the current player pipeline has not been activated, waiting for the audio and video elementary stream data to be activated;
and upon determining that the audio and video elementary stream data in the current player pipeline has been activated, creating a subtitle parsing element in the player pipeline.
4. The display device according to claim 2 or 3, wherein the subtitle parsing element comprises at least: a subtitle download module, a subtitle parsing module, and a subtitle synchronization module;
the subtitle download module is configured to download subtitle data according to a set subtitle path;
the subtitle parsing module is configured to parse the subtitle data downloaded by the subtitle download module to obtain subtitle elementary stream data;
the subtitle synchronization module is configured to determine, from the subtitle elementary stream data obtained by the subtitle parsing module, the subtitle elementary stream data that matches the audio and video elementary stream data currently played by the player pipeline, and to output the matched subtitle elementary stream data to the general subtitle rendering element in the player pipeline, so that the general subtitle rendering element renders the matched subtitle elementary stream data.
5. A method for displaying subtitles, the method comprising:
in response to a control instruction, input by a user, indicating that subtitles are to be displayed: when it is determined that the audio and video elementary stream data in the current player pipeline has been activated and subtitle elementary stream data exists in the currently playing packet, disconnecting the connection between the existing subtitle parsing element and the general subtitle rendering element in the player pipeline, releasing the existing subtitle parsing element, and creating a new subtitle parsing element in the player pipeline, wherein the general subtitle rendering element is configured to render subtitle elementary stream data in a text format or a picture format;
and displaying the image content and the subtitle content on a display, wherein the audio and video elementary stream data is separate from the subtitle elementary stream data.
6. The method of claim 5, further comprising:
in response to a control instruction, input by a user, indicating that subtitles are to be displayed, when it is determined that the audio and video elementary stream data in the current player pipeline has been activated and no subtitle elementary stream data exists in the currently playing packet, creating a subtitle parsing element directly in the player pipeline.
7. The method of claim 5, further comprising:
in response to a control instruction, input by a user, indicating that subtitles are to be displayed, when it is determined that the audio and video elementary stream data in the current player pipeline has not been activated, waiting for the audio and video elementary stream data to be activated;
and upon determining that the audio and video elementary stream data in the current player pipeline has been activated, creating a subtitle parsing element in the player pipeline.
8. The method according to claim 6 or 7, wherein the subtitle parsing element comprises at least: a subtitle download module, a subtitle parsing module, and a subtitle synchronization module;
the subtitle download module is configured to download subtitle data according to a set subtitle path;
the subtitle parsing module is configured to parse the subtitle data downloaded by the subtitle download module to obtain subtitle elementary stream data;
the subtitle synchronization module is configured to determine, from the subtitle elementary stream data obtained by the subtitle parsing module, the subtitle elementary stream data that matches the audio and video elementary stream data currently played by the player pipeline, and to output the matched subtitle elementary stream data to the general subtitle rendering element in the player pipeline, so that the general subtitle rendering element renders the matched subtitle elementary stream data.
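To make the three-module structure recited in claims 4 and 8 concrete, the subtitle parsing element can be modeled as a C struct of three hypothetical module callbacks; the types and signatures below are illustrative assumptions, not the patented implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* One unit of subtitle elementary stream data. */
typedef struct {
    uint64_t pts;
    uint64_t duration;
    const char *payload;
} SubtitleEsUnit;

/* The subtitle parsing element of claims 4 and 8, modeled as three modules. */
typedef struct {
    /* Subtitle download module: fetch raw subtitle data from the set path. */
    size_t (*download)(const char *subtitle_path, uint8_t *buf, size_t cap);

    /* Subtitle parsing module: parse raw data into elementary stream units. */
    size_t (*parse)(const uint8_t *raw, size_t len,
                    SubtitleEsUnit *out, size_t max_units);

    /* Subtitle synchronization module: select the unit matching the display
     * time stamp of the currently played audio/video elementary stream and
     * hand it to the general subtitle rendering element. */
    const SubtitleEsUnit *(*synchronize)(const SubtitleEsUnit *units, size_t n,
                                         uint64_t current_av_pts);
} SubtitleParsingElement;
```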
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010383784.6A | 2020-05-08 | 2020-05-08 | Subtitle display method and display equipment
Publications (2)

Publication Number | Publication Date
---|---
CN111601142A (en) | 2020-08-28
CN111601142B (en) | 2022-03-01
Family ID: 72191090
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
2022-10-18 | TR01 | Transfer of patent right | Effective date of registration: 2022-10-18. Patentee after: VIDAA (Netherlands) International Holdings Ltd., 83 Intekte Street, Devon, Netherlands. Patentee before: QINGDAO HISENSE MEDIA NETWORKS Ltd., Room 131, 248 Hong Kong East Road, Laoshan District, Qingdao City, Shandong Province, 266061.
TR01 | Transfer of patent right |