
CN104104900A - Data playing method - Google Patents

Data playing method

Info

Publication number
CN104104900A
CN104104900A (application CN201410354486.9A; granted as CN104104900B)
Authority
CN
China
Prior art keywords
data
fragment
video
time
handwriting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410354486.9A
Other languages
Chinese (zh)
Other versions
CN104104900B (en)
Inventor
严杰 (Yan Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TVM Beijing Education Technology Co Ltd
Original Assignee
TVM Beijing Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TVM Beijing Education Technology Co Ltd filed Critical TVM Beijing Education Technology Co Ltd
Priority to CN201410354486.9A
Publication of CN104104900A
Application granted
Publication of CN104104900B
Legal status: Expired - Fee Related

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a data playing method. The method includes: receiving a play request for a piece of picture data; obtaining that piece of picture data together with the fragmented multimedia data between its occurrence time and the occurrence time of the next piece of picture data, wherein the fragmented multimedia data comprises at least one of: fragmented video data, divided according to a preset minimum unit, and its occurrence times; audio data and its occurrence times; picture data and its occurrence times; and note data and its occurrence times; and playing the piece of picture data and the fragmented multimedia data in chronological order. In this way, a particular "knowledge point" in a meeting can be located accurately and viewed precisely, users can conveniently replay or search meeting content in a targeted manner, and the experience of reviewing meeting records is improved.

Description

Data playing method
Technical field
The present invention relates to the field of multimedia conferencing, and in particular to a data playing method.
Background technology
In a multimedia conference, a speaker typically structures the talk around a slide deck (PPT), explaining it page by page. The conference as a whole produces many kinds of data, such as PPT page turns (single PPT pages), the audio and video commentary for the current page, and notes. In the related art, the entire meeting is recorded as one video. Although the meeting is recorded completely, a user who later wants to review a particular part of it can only play the recording from the beginning and search through it; a specific piece of content in the meeting cannot be located and viewed precisely.
Summary of the invention
Embodiments of the present invention provide a data playing method for playing back multimedia data that has been processed into fragments.
A data playing method comprises:
receiving a play request for a piece of picture data, wherein the piece of picture data comprises a page of a PPT document and the occurrence time of that page;
obtaining the piece of picture data and the fragmented multimedia data between the occurrence time of the piece of picture data and the occurrence time of the next piece of picture data, wherein the fragmented multimedia data comprises at least one of: fragmented video data, divided according to a preset minimum unit, and its occurrence times; audio data and its occurrence times; picture data and its occurrence times; and note data and its occurrence times; and
playing the piece of picture data and the fragmented multimedia data in chronological order.
Preferably, the fragmented video data comprises a video fragment, the video type of the fragment, and a fragment identifier, or additionally comprises a video fragment address;
the video occurrence time comprises a video fragment timestamp, or additionally comprises a video fragment duration.
Preferably, the fragmented audio data comprises an audio fragment, the audio type of the fragment, and a fragment identifier, or additionally comprises an audio fragment address;
the audio occurrence time comprises an audio fragment timestamp, or additionally comprises an audio fragment duration.
Preferably, the fragmented picture data comprises a picture, a picture type, and a picture identifier, or additionally comprises at least one of a picture size and a picture address;
the picture occurrence time comprises a picture timestamp.
Preferably, the fragmented handwriting data comprises a handwriting fragment, the handwriting type of the fragment, and a fragment identifier, or additionally comprises at least one of a stroke order, stroke coordinates, a base-map size, and a writing-container size;
the handwriting occurrence time comprises a handwriting fragment timestamp.
Preferably, the preset minimum unit comprises a preset minimum time unit.
Preferably, the preset minimum unit comprises a preset minimum stroke unit.
Preferably, the preset minimum unit ranges from 1 second to 1 hour.
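The preferred fragment contents enumerated above can be sketched as simple record types. This is a minimal illustration only; the class and field names are assumptions chosen to mirror the terms in the text, not the patent's actual storage layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoFragment:
    data: bytes                       # the video fragment itself
    video_type: str                   # video type of the fragment
    fragment_id: str                  # fragment identification mark
    timestamp: float                  # occurrence time (video fragment timestamp)
    address: Optional[str] = None     # optional video fragment address
    duration: Optional[float] = None  # optional video fragment duration

@dataclass
class AudioFragment:
    data: bytes
    audio_type: str
    fragment_id: str
    timestamp: float
    address: Optional[str] = None
    duration: Optional[float] = None

@dataclass
class PictureFragment:
    picture: bytes
    picture_type: str
    picture_id: str
    timestamp: float                  # picture timestamp
    size: Optional[int] = None        # optional picture size
    address: Optional[str] = None     # optional picture address

@dataclass
class HandwritingFragment:
    stroke: bytes                     # one fragment of handwriting data
    handwriting_type: str
    fragment_id: str
    timestamp: float                  # handwriting fragment timestamp
    stroke_order: Optional[int] = None
    stroke_coords: Optional[list] = None
```

With records of this shape, the optional fields (fragment address, duration, picture size, stroke order, and so on) are simply left unset when a fragment does not carry them.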
Beneficial effects of embodiments of the present invention include: a particular "knowledge point" in a meeting can be located accurately and viewed precisely, users can conveniently replay or search meeting content in a targeted manner, and the experience of reviewing meeting records is improved.
Other features and advantages of the invention are set forth in the following description, in part become apparent from the specification, or are learned by practicing the invention. The objects and other advantages of the invention may be realized and obtained by the structures particularly pointed out in the written specification, the claims, and the accompanying drawings.
The technical solution of the invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The accompanying drawings provide a further understanding of the invention and form a part of the specification. Together with the embodiments, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a data playing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of fragmented video data and video occurrence times in an embodiment of the present invention;
Fig. 3 is a schematic diagram of fragmented audio data and audio occurrence times in an embodiment of the present invention;
Fig. 4 is a schematic diagram of fragmented picture data and picture occurrence times in an embodiment of the present invention;
Fig. 5 is a schematic diagram of fragmented handwriting data and handwriting occurrence times in an embodiment of the present invention.
Embodiment
Preferred embodiments of the invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here only illustrate and explain the invention and are not intended to limit it.
In the present invention, each piece of new data produced during a PPT-centered conference, such as a PPT page turn, notes, circled annotations, or audio and video, is referred to as a "knowledge point".
The technical solution of the invention targets the playback of PPT data stored automatically during a meeting. Data stored as a single video recording can only be played from the beginning at playback time, whereas conference data that has been processed and stored as fragments lets the user select, at playback time, the "knowledge point" to watch.
During storage, the picture data of the PPT (comprising the current picture and its occurrence time) is first stored on a cloud server; the multimedia data received between the current picture and the next page's picture is then processed into fragments and also stored on the cloud server.
The storage of each type of multimedia data is described below:
(1) PPT data
When the speaker turns a PPT page during the talk, the currently displayed page is downloaded and converted to picture format, and the occurrence time of the page-turn instruction is recorded.
(2) Note data
When the speaker writes notes during the talk, the writing can be divided by stroke: each record contains the data of one stroke together with the time at which that stroke was written. Alternatively, the writing can be divided by time; for example, with a unit of one second, each record contains the strokes made within that second, and the current time is recorded to the second.
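The two ways of dividing note data described above, per stroke or per second, can be sketched as follows. The `StrokeEvent` record and the function names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class StrokeEvent:
    points: list       # (x, y) coordinates making up one stroke
    written_at: float  # time the stroke was written, in seconds

def fragment_notes_by_stroke(strokes):
    """One fragment per stroke: the stroke data plus its writing time."""
    return [{"stroke": s.points, "time": s.written_at} for s in strokes]

def fragment_notes_by_second(strokes):
    """Alternative division by time: group all strokes written within the
    same whole second, recording the current time to the second."""
    fragments = {}
    for s in strokes:
        second = int(s.written_at)
        fragments.setdefault(second, []).append(s.points)
    return [{"time": t, "strokes": pts} for t, pts in sorted(fragments.items())]
```

Three strokes written at 0.2 s, 0.7 s, and 1.4 s thus yield three per-stroke fragments, or two per-second fragments stamped at seconds 0 and 1.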
(3) Audio and video data
The speaker's audio or video commentary during the conference is divided by time; for example, with a unit of one second, each record contains one second of audio or video data, and the current time is recorded to the second.
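Division of the commentary by a fixed time unit can be sketched as below, assuming the recording is available as a continuous byte stream with a known fixed rate; both assumptions are simplifications for illustration.

```python
def fragment_stream(stream: bytes, start_time: int, bytes_per_second: int,
                    unit_seconds: int = 1):
    """Split a recorded audio/video stream into fragments of `unit_seconds`
    each, tagging every fragment with its occurrence time."""
    step = bytes_per_second * unit_seconds
    fragments = []
    for i, offset in enumerate(range(0, len(stream), step)):
        fragments.append({
            "data": stream[offset:offset + step],
            "time": start_time + i * unit_seconds,  # recorded to the second
        })
    return fragments
```

For instance, a 10-byte stream at 4 bytes per second yields three fragments stamped at consecutive seconds, the last one shorter than the others.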
Given this storage scheme for the multimedia data described above, the invention discloses a data playing method which, as shown in Fig. 1, comprises the following steps:
Step 101: receive a play request for a piece of picture data. The piece of picture data comprises a page of a PPT document and the occurrence time of that page.
Step 102: obtain the piece of picture data and the fragmented multimedia data between its occurrence time and the occurrence time of the next piece of picture data. The fragmented multimedia data comprises at least one of: fragmented video data, divided according to a preset minimum unit, and its occurrence times; audio data and its occurrence times; picture data and its occurrence times; and note data and its occurrence times.
Step 103: play the piece of picture data and the fragmented multimedia data in chronological order.
The preset minimum unit comprises a preset minimum time unit and/or a preset minimum stroke unit, and ranges from 1 second to 1 hour.
The preset minimum unit may be set to a preset minimum time unit, with a range of 1 second to 1 hour. For example, with a unit of 1 second, each second's worth of audio or video data obtained, together with the time of that second, forms one fragment of multimedia data. The preset minimum unit may also be set to a time or to a stroke; when set to a stroke, for example, each note stroke obtained, together with its writing time, forms one fragment of multimedia data.
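Steps 101 to 103 can be sketched as one function. The fragment store and the `render` callback are hypothetical placeholders; the essential logic is selecting the fragments whose occurrence times fall between the requested page and the next page, then replaying everything in timestamp order.

```python
def play_knowledge_point(picture, next_picture_time, fragment_store, render):
    """Play one PPT page (a "knowledge point") and the multimedia fragments
    that occurred between this page and the next one.

    picture: dict with "data" and "time" (occurrence time of the page)
    next_picture_time: occurrence time of the next piece of picture data
    fragment_store: iterable of fragments, each a dict with a "time" key
    render: callback that displays or plays one item
    """
    # Step 102: gather fragments in [picture time, next picture time)
    window = [f for f in fragment_store
              if picture["time"] <= f["time"] < next_picture_time]
    # Step 103: play the page, then its fragments in chronological order
    render(picture)
    for fragment in sorted(window, key=lambda f: f["time"]):
        render(fragment)
```

Because fragments outside the requested window are never rendered, the user jumps directly to the requested knowledge point instead of scanning a full recording.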
With the technical solution of the invention, a particular "knowledge point" in a meeting can be located accurately and viewed precisely; users can conveniently replay or search meeting content in a targeted manner, and the experience of reviewing meeting records is improved.
Referring to Fig. 2, a schematic diagram of fragmented video data and video occurrence times in an embodiment of the present invention: the fragmented video data comprises a video fragment, the video type of the fragment, and a fragment identifier, or additionally comprises a video fragment address; the video occurrence time comprises a video fragment timestamp, or additionally comprises a video fragment duration.
In this embodiment, by obtaining fragmented video data and its occurrence times, such as the video fragment, video type, fragment identifier, fragment timestamp, fragment address, fragment title, and fragment duration, video data can be played quickly, accurately, and completely, which makes it convenient for users to replay or search meeting content in a targeted manner.
Referring to Fig. 3, a schematic diagram of fragmented audio data and audio occurrence times in an embodiment of the present invention: the fragmented audio data comprises an audio fragment, the audio type of the fragment, and a fragment identifier, or additionally comprises an audio fragment address; the audio occurrence time comprises an audio fragment timestamp, or additionally comprises an audio fragment duration.
In this embodiment, by obtaining fragmented audio data and its occurrence times, such as the audio fragment, audio type, fragment identifier, fragment timestamp, fragment address, fragment title, and fragment duration, audio data can be played quickly, accurately, and completely, which makes it convenient for users to replay or search meeting content in a targeted manner.
Referring to Fig. 4, a schematic diagram of fragmented picture data and picture occurrence times in an embodiment of the present invention: the fragmented picture data comprises a picture, a picture type, and a picture identifier, or additionally comprises at least one of a picture size and a picture address; the picture occurrence time comprises a picture timestamp.
In this embodiment, by obtaining fragmented picture data and its occurrence times, such as the picture, picture type, picture identifier, picture timestamp, picture size, and picture address, picture data can be played quickly, accurately, and completely, which makes it convenient for users to replay or search meeting content in a targeted manner.
Referring to Fig. 5, a schematic diagram of fragmented handwriting data and handwriting occurrence times in an embodiment of the present invention: the fragmented handwriting data comprises a handwriting fragment, the handwriting type of the fragment, and a fragment identifier, or additionally comprises at least one of a stroke order, stroke coordinates, a base-map size, and a writing-container size; the handwriting occurrence time comprises a handwriting fragment timestamp. The preset minimum unit may comprise a preset minimum stroke unit.
In this embodiment, by obtaining fragmented handwriting data and its occurrence times, such as the handwriting fragment, handwriting type, fragment identifier, fragment timestamp, stroke order, stroke coordinates, base-map size, and writing-container size, handwriting data can be played quickly, accurately, and completely, which makes it convenient for users to replay or search meeting content in a targeted manner.
Those skilled in the art will appreciate that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. The invention is intended to cover such changes and modifications provided they fall within the scope of the claims of the invention and their technical equivalents.

Claims (8)

1. A data playing method, characterized by comprising:
receiving a play request for a piece of picture data, wherein the piece of picture data comprises a page of a PPT document and the occurrence time of that page;
obtaining the piece of picture data and the fragmented multimedia data between the occurrence time of the piece of picture data and the occurrence time of the next piece of picture data, wherein the fragmented multimedia data comprises at least one of: fragmented video data, divided according to a preset minimum unit, and its occurrence times; audio data and its occurrence times; picture data and its occurrence times; and note data and its occurrence times; and
playing the piece of picture data and the fragmented multimedia data in chronological order.
2. The method of claim 1, characterized in that
the fragmented video data comprises a video fragment, the video type of the fragment, and a fragment identifier, or additionally comprises a video fragment address; and
the video occurrence time comprises a video fragment timestamp, or additionally comprises a video fragment duration.
3. The method of claim 1, characterized in that
the fragmented audio data comprises an audio fragment, the audio type of the fragment, and a fragment identifier, or additionally comprises an audio fragment address; and
the audio occurrence time comprises an audio fragment timestamp, or additionally comprises an audio fragment duration.
4. The method of claim 1, characterized in that
the fragmented picture data comprises a picture, a picture type, and a picture identifier, or additionally comprises at least one of a picture size and a picture address; and
the picture occurrence time comprises a picture timestamp.
5. The method of claim 1, characterized in that
the fragmented handwriting data comprises a handwriting fragment, the handwriting type of the fragment, and a fragment identifier, or additionally comprises at least one of a stroke order, stroke coordinates, a base-map size, and a writing-container size; and
the handwriting occurrence time comprises a handwriting fragment timestamp.
6. The method of any one of claims 1 to 5, characterized in that the preset minimum unit comprises a preset minimum time unit.
7. The method of claim 5, characterized in that the preset minimum unit comprises a preset minimum stroke unit.
8. The method of claim 1, characterized in that the preset minimum unit ranges from 1 second to 1 hour.
CN201410354486.9A 2014-07-23 2014-07-23 Data playing method Expired - Fee Related CN104104900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410354486.9A CN104104900B (en) 2014-07-23 2014-07-23 Data playing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410354486.9A CN104104900B (en) 2014-07-23 2014-07-23 Data playing method

Publications (2)

Publication Number Publication Date
CN104104900A true CN104104900A (en) 2014-10-15
CN104104900B CN104104900B (en) 2018-03-06

Family

ID=51672666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410354486.9A Expired - Fee Related CN104104900B (en) 2014-07-23 2014-07-23 Data playing method

Country Status (1)

Country Link
CN (1) CN104104900B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050078868A1 (en) * 2003-09-26 2005-04-14 William Chen Method and apparatus for summarizing and indexing the contents of an audio-visual presentation
CN101272469A (en) * 2007-03-23 2008-09-24 杨子江 Recording system and method used for teaching and meeting place
CN102509249A (en) * 2011-10-14 2012-06-20 郭华 Micro-lesson system based on knowledge points and location and construction method thereof
CN103136332A (en) * 2013-01-28 2013-06-05 福州新锐同创电子科技有限公司 Method for achieving making, management and retrieval of knowledge points

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘小晶 (Liu Xiaojing) et al., "教学视频微型化改造与应用的新探索" [New exploration of adapting teaching videos into micro-videos and their application], 《中国电化教育》 [China Educational Technology] *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504628A (en) * 2014-12-17 2015-04-08 天脉聚源(北京)教育科技有限公司 Data structure for teaching data information of intelligent teaching system
CN109672940A (en) * 2018-12-11 2019-04-23 北京新鼎峰软件科技有限公司 Video playback method and video playback system based on note contents
CN110928478A (en) * 2019-11-21 2020-03-27 广州摩翼信息科技有限公司 Handwriting reproduction system, method and device applied to teaching
CN111914103A (en) * 2020-04-24 2020-11-10 南京航空航天大学 A conference recording method, computer-readable storage medium and device
CN112087656A (en) * 2020-09-08 2020-12-15 远光软件股份有限公司 Online note generation method and device and electronic equipment
CN113259619A (en) * 2021-05-07 2021-08-13 北京字跳网络技术有限公司 Information sending and displaying method, device, storage medium and conference system
CN114020939A (en) * 2021-09-17 2022-02-08 联想(北京)有限公司 Multimedia file control method and device
CN114020939B (en) * 2021-09-17 2025-04-22 联想(北京)有限公司 Multimedia file control method and device

Also Published As

Publication number Publication date
CN104104900B (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN104104900A (en) Data playing method
CN105228050B (en) The method of adjustment and device of earphone sound quality in terminal
WO2017092280A1 (en) Multimedia photo generation method, apparatus and device, and mobile phone
WO2014161282A1 (en) Method and device for adjusting playback progress of video file
WO2019051938A1 (en) Live video preservation method and device, and server, anchor terminal and medium
CN102801942A (en) Method and device for recording video and generating GIF (Graphic Interchange Format) dynamic graph
US20130222526A1 (en) System and Method of a Remote Conference
CN111527746B (en) Method for controlling electronic equipment and electronic equipment
CN111835985B (en) Video editing method, device, apparatus and storage medium
US20150088513A1 (en) Sound processing system and related method
CN105763925A (en) Video recording method and device for presentation files
US20150097658A1 (en) Data processing apparatus and data processing program
WO2016197708A1 (en) Recording method and terminal
JP2020514936A (en) Method and device for quick insertion of voice carrier text
WO2017157135A1 (en) Media information processing method, media information processing device and storage medium
CN108174133A (en) A court trial video display method, device, electronic equipment and storage medium
CN104104901A (en) Method and device for playing data
CN105142018A (en) Programme identification method and programme identification device based on audio fingerprints
CN104253943B (en) Use the video capture method and apparatus of mobile terminal
CN114341866A (en) Simultaneous interpretation method, device, server and storage medium
US20150363157A1 (en) Electrical device and associated operating method for displaying user interface related to a sound track
CN103594086A (en) Voice processing system, device and method
CN104092553A (en) Data processing method and device and conference system
CN104185032A (en) Video identification method and system
US20160133243A1 (en) Musical performance system, musical performance method and musical performance program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180306

Termination date: 20210723
