CN120111165B - A method for marking the progress of surgical video playback and editing - Google Patents
- Publication number: CN120111165B (application CN202510305202.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- video segment
- marked
- time
- user
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- H04N5/91—Television signal processing for television signal recording (under H04N5/76, Television signal recording)
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/47205—End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H04N21/8455—Structuring of content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
- H04N21/8456—Structuring of content by decomposing the content in the time domain, e.g. in time segments
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention discloses a method for marking the playback and editing progress of surgical video, in the technical field of medical surgical video editing. The method first obtains each candidate surgical video according to the operation name entered by the user, sorts the videos, obtains the surgical site in each video, and feeds the results back to the user interface. It then parses the target video, obtains each key video segment, builds a virtual progress bar, and marks each key video segment on it. The start time and end time for video-segment interception are set on the virtual progress bar, the set times are checked for correctness, and the intercepted video segments are marked. Finally, the method analyzes whether each marked video segment meets the cutting requirement and cuts according to the analysis result. This reduces resource usage, lets the user quickly select the required surgical video, and ensures editing efficiency, the correctness of user-marked times, and the validity of video-segment marking.
Description
Technical Field
The invention relates to the technical field of medical surgical video editing, and in particular to a method for marking the playback and editing progress of a surgical video.
Background
During a patient's operation, a digital operating-room system records video. The resulting video files are stored on the digital system's server and are large. They can be processed into a rich-media surgical medical-record document, also called a mixed text-and-image surgical electronic medical record, which is returned to and stored in the conventional electronic medical-record system.
The traditional method for marking surgical-video playback and editing progress has the following drawbacks: 1. Marking is performed on the actual playback progress bar of the surgical video. Marking requires previewing and dragging the progress bar, which in turn requires buffering, loading, and round-trips to the server; the resulting waiting means editing efficiency cannot be guaranteed.
2. After the user enters an operation name, the traditional method searches for surgical videos whose names match the entered name and feeds them all back to the user interface; the user then selects the video to be clipped from this feedback, and it cannot be ensured that the user quickly finds the required surgical video.
3. The traditional method does not check the times the user marks, so it cannot determine whether the marked times are correct or whether the segment marking is valid.
4. When cutting the marked video segments, the traditional method does not judge whether each marked segment actually needs to be cut; all marked segments are clipped, which increases resource usage.
Disclosure of Invention
In view of the above technical defects, the invention aims to provide a method for marking the playback and editing progress of surgical video.
Step one, video acquisition: according to the operation name entered by the user, obtain each candidate surgical video, analyze the difficulty coefficient of the operation in each video, sort the videos by difficulty coefficient, and feed the results back to the user interface; the user selects a target surgical video according to the feedback.
Step two, video parsing: parse the target surgical video using the native HTML5 <video> tag.
Step three, segment marking: create a virtual progress bar; obtain each key video segment in the target surgical video together with its start time and end time; obtain the positions of these start and end times on the virtual progress bar; mark the start and end time of each key video segment at its position, and mark the stretch of the virtual progress bar between each segment's start and end positions in yellow. Set a start-time input box and an end-time input box for video-segment interception. When the user enters the start and end times of a segment to intercept, analyze whether the entered times are correct; if not, prompt the user; if correct, obtain the positions of the entered start and end times on the virtual progress bar and mark the stretch between them in green. Take each color-marked video segment as a marked video segment, obtain the operation data of each operation in each marked video segment, and analyze the standard coefficient of each marked video segment from these operation data.
Step four, video editing: obtain each marked video segment, analyze whether it meets the cutting requirement, cut according to the analysis result, and download each marked video segment after cutting.
The invention has the following beneficial effects: 1. It provides a method for marking the playback and editing progress of surgical video. The candidate surgical videos are obtained and sorted, the surgical site in each is obtained and fed back to the user interface, and analysis is performed; the key video segments are then obtained, a virtual progress bar is built, and each key segment is marked on it. The start and end times of video-segment interception are set on the virtual progress bar, the set times are checked for correctness, and the intercepted segments are marked. Finally, whether each marked segment meets the cutting requirement is analyzed and cutting is performed accordingly. This reduces resource usage, lets the user quickly select the required surgical video, and ensures editing efficiency, the correctness of user-marked times, and the validity of segment marking.
2. Each required surgical video is obtained according to the operation name entered by the user; the operation difficulty coefficient of each is calculated, the videos are arranged in ascending order of difficulty, the surgical site in each is obtained, and the results are fed back to the user interface. The user selects the target surgical video from this feedback, which ensures the required video can be selected quickly.
3. A virtual progress bar is built, and the key video segments and intercepted segments of the target surgical video are marked on it, which ensures editing efficiency.
4. After the user enters the start and end times for video-segment interception, the method compares the entered times with the start and end times of the target surgical video and of each key video segment to analyze whether the input is correct; if not, the user is prompted; if so, the segment is marked. This ensures the correctness of user-marked times and the validity of segment marking.
5. When cutting the marked video segments, the method judges whether each segment meets the cutting requirement; if so, it is cut, and if not, it is skipped and the next segment is processed. After cutting completes, the segments that did not meet the requirement, called secondary marked video segments, are fed back to the user interface. The user then decides whether each secondary segment still needs to be cut; if not, the already-cut segments are downloaded, and if so, the required secondary segments are cut and the cut segments are downloaded together. This reduces resource usage.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of the method of the present invention.
Fig. 2 is a schematic diagram of a basic framework for implementing the present invention.
Fig. 3 shows the basic operating steps of surgical-video editing according to the present invention.
FIG. 4 is a schematic view of the surgical-video editing progress according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIG. 1, the invention provides a method for marking the playback and editing progress of surgical video, comprising the following steps. Step one, video acquisition: obtain the candidate surgical videos according to the operation name entered by the user, analyze the difficulty coefficient of the operation in each, sort the videos by difficulty coefficient, and feed back to the user interface; the user selects a target surgical video according to the feedback.
S11: obtain the operation name entered by the user and compare it with the surgical-video names in the database. If the name of a surgical video in the database matches the entered operation name, that video is called a required surgical video; in this way every required surgical video in the database is obtained.
S12: obtain the surgical information of each required surgical video; from it, analyze the operation difficulty coefficient of each video; sort the required surgical videos in ascending order of difficulty coefficient; at the same time obtain the surgical site in each video and feed the results back to the user interface. The user selects one required surgical video from the feedback, which is called the target surgical video.
The surgical information includes the operation duration, the surgical site, the total number of surgical steps, and the time spent on the key surgical steps.
The operation duration, surgical site, total number of surgical steps, and time spent on the key surgical steps of each required surgical video are obtained from the database; the duration, step count, and key-step time are normalized; and the difficulty coefficient α_a of the operation in the a-th required surgical video is then obtained from an analysis formula (given in the original as an image), where A_a denotes the operation duration in the a-th required surgical video, B_a the total number of surgical steps, C_a the time spent on the key surgical steps, η the weight coefficient of the surgical site, e the natural constant, and a the index of each required surgical video, a = 1, 2, 3, …
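The difficulty-coefficient formula itself appears only as an image in the original, so the sketch below assumes one plausible form built from the named quantities: normalized duration A_a, step count B_a, key-step time C_a, the site weight η, and the natural constant e. The function and variable names and the exact combination are assumptions, not the patent's formula.

```python
import math

def difficulty_coefficient(duration_s, total_steps, key_step_s, site_weight,
                           max_duration_s, max_steps, max_key_step_s):
    # Hypothetical alpha_a: the site weight eta scales an exponential of the
    # mean of the three normalized factors (A_a, B_a, C_a). Assumed form.
    a_hat = duration_s / max_duration_s
    b_hat = total_steps / max_steps
    c_hat = key_step_s / max_key_step_s
    return site_weight * (math.e ** ((a_hat + b_hat + c_hat) / 3) - 1)

# Illustrative required surgical videos: (name, duration s, steps, key-step s, eta)
videos = [("video-1", 5400, 12, 1800, 1.2),
          ("video-2", 3600, 8, 900, 0.9)]
max_d = max(v[1] for v in videos)
max_s = max(v[2] for v in videos)
max_k = max(v[3] for v in videos)
# Sort ascending by difficulty coefficient, as in step S12
ranked = sorted(videos, key=lambda v: difficulty_coefficient(
    v[1], v[2], v[3], v[4], max_d, max_s, max_k))
```

Sorting ascending matches the patent's "from small to large" ordering of the required surgical videos.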
Taking each surgical site as a center, each surgical-site area is delimited with a preset distance threshold as radius, and the number of blood vessels and the number of tissues in each area are obtained. The weight coefficient η_m of the m-th surgical site is then obtained from an analysis formula (given in the original as an image), where μ_m denotes the number of blood vessels in the m-th surgical-site area, θ_m the number of tissues in that area, ρ_m the surgical difficulty index of the m-th surgical site, e the natural constant, and m the index of each surgical site, m = 1, 2, 3, …, n, where n is the total number of surgical sites; m and n are positive integers.
The surgical difficulty index of each surgical site is set by a medical expert.
It should also be noted that the preset distance threshold is set by a medical expert.
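The site-weight formula is likewise an image in the original. The sketch below assumes one plausible behaviour consistent with the named variables: the weight grows with the vessel count μ_m and tissue count θ_m in the site area and is capped by the expert-set difficulty index ρ_m. The saturating form is an assumption.

```python
import math

def site_weight(num_vessels, num_tissues, difficulty_index):
    # Hypothetical eta_m: approaches the expert-set difficulty index rho_m
    # as the vessel count mu_m and tissue count theta_m in the area grow.
    return difficulty_index * (1 - math.e ** -(num_vessels + num_tissues))
```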
Step two, video parsing: parse the target surgical video using the native HTML5 <video> tag.
It should be noted that < video > is a native tag introduced in HTML5 for embedding and playing video content on a web page.
Step three, segment marking: create a virtual progress bar; obtain each key video segment in the target surgical video together with its start time and end time; obtain the positions of these start and end times on the virtual progress bar; mark the start and end time of each key video segment at its position, and mark the stretch of the virtual progress bar between each segment's start and end positions in yellow. Set a start-time input box and an end-time input box for video-segment interception. When the user enters the start and end times of a segment to intercept, analyze whether the entered times are correct; if not, prompt the user; if correct, obtain the positions of the entered start and end times on the virtual progress bar and mark the stretch between them in green. Take each color-marked video segment as a marked video segment, obtain the operation data of each operation in each marked video segment, and analyze the standard coefficient of each marked video segment from these operation data.
It should be noted that the process of creating a virtual progress bar based on HTML5 <video> parsing is as follows: create a <video> tag for playing the video; create a <div> element as the progress-bar container; create a child <div> inside the container as the progress-bar fill; listen for the video's timeupdate event and update the fill width according to the current playback time and the total duration; and listen for click events on the progress-bar container, changing the video's playback position according to the click position.
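The steps above are HTML/JavaScript DOM work, but the two mappings at their core (fill width from playback time on timeupdate, and seek time from click position) can be expressed as pure functions. This is an illustrative Python sketch of that logic, not the patent's implementation:

```python
def fill_width_px(current_s, total_s, bar_width_px):
    # Width of the progress-bar fill, recomputed on each 'timeupdate' event.
    return bar_width_px * current_s / total_s

def seek_time_s(click_x_px, bar_width_px, total_s):
    # Playback time to jump to when the bar is clicked at click_x_px.
    return total_s * click_x_px / bar_width_px
```

For example, 30 s into a 120 s video fills a quarter of a 600 px bar, and clicking at 150 px seeks back to 30 s.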
It should be noted that the start-time and end-time input boxes for video-segment interception are set up by first adding the basic page elements to the HTML file and then adding <input> elements for entering the start time and end time.
It should also be noted that the < input > element is an important element in HTML for creating various form input fields, and can be used to collect data entered by a user.
It should be noted that the virtual progress bar is initially uncolored, and "each color-marked video segment" refers to each video segment marked in yellow or green.
The target surgical video is divided into video segments according to a preset duration threshold; the surgical operation in each segment is obtained through a deep learning model, and the surgical operations of the target surgical video are obtained from the database. Each segment's operation is compared with the operations of the target surgical video: if a segment's operation matches one of them, that segment is called a key video segment of the target surgical video; otherwise it is called a non-key video segment. In this way each video segment is judged, and the key video segments of the target surgical video are obtained.
It should be noted that the preset duration threshold is set by the designer.
The process of obtaining the operation in each video segment through the deep learning model is as follows: each video segment first undergoes operations such as segmentation, frame extraction, and normalization, converting it into a format suitable for model input; each processed segment is then fed into the trained deep learning model, which predicts and outputs the operation in each segment.
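The split-and-compare logic above can be sketched as follows. The trained deep learning model is replaced by a stand-in predict_op callback, and all names, the window layout, and the return shape are illustrative assumptions:

```python
def key_segments(total_s, window_s, predict_op, target_ops):
    """Split [0, total_s) into window_s-long segments; a segment is 'key'
    when its predicted operation appears in the target video's operation
    list. predict_op(start, end) stands in for the trained model."""
    segs = []
    t = 0
    while t < total_s:
        end = min(t + window_s, total_s)
        if predict_op(t, end) in target_ops:
            segs.append((t, end))
        t = end
    return segs
```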
In the above, the specific process of acquiring the positions of the start time and end time of each key video segment on the virtual progress bar is as follows. S31: obtain the total duration and the start time of the target surgical video.
It should be noted that, the total duration of the target surgical video is obtained from the database, and the start time of the target surgical video is set to 00:00.
S32: arrange the key video segments in time order and number them; obtain the start and end time of each; then, from an analysis formula (given in the original as an image), obtain the play ratio β_c at the start time and the play ratio χ_c at the end time of the c-th key video segment, where D_c′ denotes the start time of the c-th key video segment, D_c″ its end time, D the start time of the target surgical video, E the total duration of the target surgical video, and c the index of each key video segment, c = 1, 2, 3, …, d, where d is the total number of key video segments; c and d are positive integers.
S33: taking the starting point of the virtual progress bar as the origin and the bar itself as the abscissa axis, establish a two-dimensional coordinate system; obtain the length of the virtual progress bar, denoted F; then, using the play ratios at the start and end times of each key video segment and an analysis formula (given in the original as an image), obtain the position (δ_c, 0) of the start time and the position (ε_c, 0) of the end time of the c-th key video segment on the virtual progress bar.
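Although the S32 and S33 formulas are images in the original, the surrounding definitions make the mapping recoverable: the play ratio is elapsed time over total duration E, and the position on the bar is that ratio times the bar length F. A sketch under that reading (the function name is illustrative):

```python
def mark_positions(start_s, end_s, video_start_s, total_s, bar_len_px):
    # Reconstructed from the surrounding definitions: play ratios
    # beta_c, chi_c and positions (delta_c, 0), (epsilon_c, 0).
    beta = (start_s - video_start_s) / total_s
    chi = (end_s - video_start_s) / total_s
    return (beta * bar_len_px, 0.0), (chi * bar_len_px, 0.0)
```

With a 120 s video starting at 00:00 and a 600 px bar, a key segment from 30 s to 60 s is marked between x = 150 and x = 300 on the abscissa axis.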
In the above, the specific process of analyzing whether the time entered by the user is correct is as follows. When the user enters the interception start and end times in the input boxes, the entered start and end times are obtained, together with the start and end times of the target surgical video and of each key video segment. The entered times are first compared with the start and end times of the target surgical video: if the entered start time is earlier than the video's start time, or the entered end time is later than the video's end time, the input is incorrect. Otherwise, the difference between the entered start time and the start time of each key video segment is calculated; the key video segment with the smallest difference is selected as the comparison video segment, and its start and end times are obtained, together with the end time of the segment preceding it and the start time of the segment following it. The correctness of the entered times is then judged from these values as follows:
where d′ denotes the start time entered by the user, d″ the end time entered by the user, D_1′ the start time of the comparison video segment, D_1″ its end time, D_0″ the end time of the video segment preceding the comparison segment, D_2′ the start time of the video segment following it, D the start time of the target surgical video, D‴ the end time of the target surgical video, and φ the correctness coefficient of the user-entered times: φ = 0 indicates the entered times are in error, and φ = 1 indicates they are correct.
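A minimal sketch of the correctness coefficient φ described above. The exact formula is an image in the original; this version covers only the target-video bounds and interval ordering, while the refinement involving the comparison segment's neighbours (D_0″, D_2′) belongs to the image-only part and is deliberately not reproduced:

```python
def time_input_correct(d_start, d_end, video_start, video_end):
    # phi = 0: input outside the target video or not a valid interval;
    # phi = 1: input passes the reproduced checks. Assumed simplification.
    if d_start < video_start or d_end > video_end or d_start >= d_end:
        return 0
    return 1
```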
The operation types in each marked video segment and the surgery type of the target surgical video are obtained. The target video's surgery type is compared with the comparison surgery types in the database, and the matching comparison surgery type is selected; each operation type within it is then obtained and compared with each operation type in each marked video segment. If the operation type of a marked video segment matches an operation type in the comparison surgery type, the standard range of each operation datum of that operation is obtained and taken as the standard range for the corresponding operation data in that marked video segment. In this way the standard range of each operation datum in each marked video segment is obtained, and the standard coefficient of each marked video segment is computed as follows:
where the value of the g-th operation datum in the f-th marked video segment, the minimum and maximum of its standard range, and the standard coefficient of the f-th marked video segment are each denoted by symbols given in the original as images; e denotes the natural constant; f denotes the index of each marked video segment, f = 1, 2, 3, …, i, with i the total number of marked video segments; g denotes the index of each operation datum, g = 1, 2, 3, …, h, with h the total number of operation data; f, i, g, and h are positive integers.
It should be noted that the standard range of each operation datum of each operation in each comparison surgery type in the database is set by a medical expert according to experience; for example, the anastomotic site in the digestive-tract reconstruction step of laparoscopic gastric cancer surgery is located at the jejunum 15-20 cm from the ligament of Treitz.
It should be noted that the comparison surgery types include strabismus-correction surgery, laparoscopic gastric cancer surgery, and the like.
It should also be noted that the surgical operations differ between surgery types; for laparoscopic gastric cancer surgery, for example, they include establishing pneumoperitoneum, inserting a trocar, exploring the abdominal cavity, reconstructing the digestive tract, and irrigating the abdominal cavity.
It should be noted that the operation data also differ between operations; for example, the operation data of digestive-tract reconstruction include the anastomotic site, the anastomosis size, the afferent-loop length, and so on.
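The standard-coefficient formula is an image in the original; the sketch below assumes one plausible behaviour, with the coefficient decaying exponentially (via the natural constant e named in the text) with the number of operation data falling outside their expert-set standard ranges. The names and the decay form are assumptions:

```python
import math

def standard_coefficient(values, ranges):
    # values: operation data of one marked video segment;
    # ranges: matching expert-set (min, max) standard ranges.
    # Hypothetical form: e^(-k) where k data lie outside their range.
    out = sum(1 for v, (lo, hi) in zip(values, ranges) if not lo <= v <= hi)
    return math.e ** -out
```

For instance, an anastomotic site recorded at 17 cm against a 15-20 cm standard range leaves the coefficient at its maximum, while an out-of-range 25 cm lowers it.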
Step four, video editing: obtain each marked video segment, analyze whether it meets the cutting requirement, cut according to the analysis result, and download each marked video segment after cutting.
It should be noted that each marked video segment is cut using an ffmpeg-based video-cutting service. The process is as follows: the user sends the marked target surgical video to the server together with a request containing the cutting-mark information; on receiving the request, the server checks the file name, start time, and end time of the target surgical video, then builds the ffmpeg command, and finally executes it in a subprocess.
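A sketch of that server-side step, assuming a straightforward ffmpeg stream-copy invocation run through Python's subprocess module; the file names and timestamp format are illustrative:

```python
import subprocess

def build_cut_cmd(src, start, end, dst):
    # Output-side seek with stream copy: cut [start, end] without re-encoding.
    return ["ffmpeg", "-y", "-i", src, "-ss", start, "-to", end,
            "-c", "copy", dst]

def cut_segment(src, start, end, dst):
    # Server side: after validating the file name and times, run the
    # command in a subprocess, as the patent describes.
    subprocess.run(build_cut_cmd(src, start, end, dst), check=True)

cmd = build_cut_cmd("target_op.mp4", "00:01:00", "00:02:30", "marked_clip.mp4")
```

Stream copy keeps cutting fast but snaps to keyframes; re-encoding would be needed for frame-exact cuts.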
In a specific embodiment, the video editing proceeds as follows: after the target surgical video has been marked, the marked video segments are cut in time order. During cutting, the standard coefficient of each marked video segment is obtained and analyzed against the cutting requirement: if a segment's standard coefficient meets the requirement, the segment is cut; if not, it is skipped and the next marked segment is processed. After cutting completes, the marked segments that did not meet the requirement, called secondary marked video segments, are fed back to the user interface. On receiving them, the user judges whether each secondary segment still needs to be cut; if not, the already-cut marked segments are downloaded; if so, the secondary segments that need cutting are obtained and cut, and the cut marked segments are downloaded together.
In the above, analyzing whether the standard coefficient of each marked video segment meets the cutting requirement comprises the following steps: setting a standard coefficient limit value:
where γ_f represents the standard coefficient limit value of the f-th marked video segment.
The standard coefficient of each marked video segment is compared with its standard coefficient limit value: if the standard coefficient of a marked video segment is higher than its limit value, the segment meets the cutting requirement; if it is lower, the segment does not meet the cutting requirement. In this way it is judged whether each marked video segment meets the cutting requirement.
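The decision rule above — cut a segment whose standard coefficient exceeds its limit value, otherwise hold it back as a secondary marked segment for the user — can be sketched as follows (an illustrative helper; the names are assumptions):

```python
def select_segments(coeffs, limits):
    """Partition marked segments into those meeting the cutting
    requirement (coefficient above its limit value) and secondary
    marked segments that are fed back to the user interface."""
    to_cut, secondary = [], []
    for f, (coeff, limit) in enumerate(zip(coeffs, limits)):
        (to_cut if coeff > limit else secondary).append(f)
    return to_cut, secondary
```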
In the embodiment of the invention, each required surgical video is first acquired and sorted, and the surgical site in each required video is acquired and fed back to the user interface. The target video is then analyzed to obtain the key video segments; a virtual progress bar is established and each key video segment is marked on it. The start time and end time for video segment interception are set on the virtual progress bar, the correctness of the set times is analyzed, and the intercepted video segments are marked. Finally, whether each marked video segment meets the cutting requirement is analyzed, and cutting is performed according to the analysis result. This reduces resource usage, lets the user quickly select the required surgical video, and ensures editing efficiency, the accuracy of the user-marked times, and the effectiveness of video segment marking.
The foregoing merely illustrates and explains the principles of the invention. Various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted, by those skilled in the art without departing from the principles of the invention or exceeding the scope defined in the description.
Claims (9)
1. A method for marking the playback and editing progress of a surgical video, characterized by comprising the following steps:
Step one, video acquisition, namely acquiring each required surgical video according to a surgery name input by a user, analyzing the difficulty coefficient of the surgery in each required surgical video, sorting the required surgical videos by difficulty coefficient, feeding the result back to a user interface, and letting the user select a target surgical video according to the feedback;
Step two, analyzing the target surgical video by using a native video tag;
Step three, segment marking, namely making a virtual progress bar; acquiring each key video segment in the target surgical video together with its start time and end time; acquiring the position of each key video segment's start time and end time on the virtual progress bar and marking those times at the corresponding positions; marking the virtual progress bar in yellow between the start-time position and end-time position of each key video segment; setting a start time input box and an end time input box for video segment interception; when the user enters the start time and end time of a segment to intercept, analyzing whether the input times are correct, prompting the user if they are not, and if they are correct, acquiring the positions of the input start time and end time on the virtual progress bar and marking the virtual progress bar in green between them; acquiring each color-marked video segment and calling it a marked video segment; acquiring the surgical operation and operation data in each marked video segment; and analyzing the standard coefficient of the surgical operation in each marked video segment according to the operation data;
And step four, video editing, namely acquiring each marked video segment, analyzing whether each marked video segment meets the cutting requirement, cutting each marked video segment according to the analysis result, and downloading each cut marked video segment.
2. The method for marking the playback and editing progress of a surgical video according to claim 1, wherein the video acquisition comprises the following steps:
S11, acquiring the surgery name input by the user and comparing it with each surgical video name in a database; if the name of a surgical video in the database is the same as the surgery name input by the user, that video is called a required surgical video, and each required surgical video in the database is thereby acquired;
S12, acquiring the surgery information of each required surgical video, analyzing from that information the difficulty coefficient of the surgery in each required surgical video, sorting the required surgical videos by difficulty coefficient in ascending order, acquiring the surgical site in each required surgical video, feeding the result back to the user interface, and letting the user select one required surgical video, called the target surgical video, according to the feedback.
3. The method for marking the playback and editing progress of a surgical video according to claim 2, wherein analyzing the difficulty coefficient of the surgery in each required surgical video comprises the following steps:
Acquiring from the database the surgery duration, surgical site, total number of surgical steps and time spent on key surgical steps in each required surgical video; normalizing the surgery duration, the total number of surgical steps and the time spent on key steps; and obtaining according to an analysis formula the difficulty coefficient α_a of the surgery in the a-th required surgical video, wherein A_a represents the surgery duration in the a-th required surgical video, B_a represents the total number of surgical steps in the a-th required surgical video, C_a represents the time spent on key surgical steps in the a-th required surgical video, a weight coefficient represents the surgical site in the a-th required surgical video, e represents a natural constant, and a represents the number of each required surgical video, a = 1, 2, 3, …
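The claim's difficulty formula appears only as an image in the source and is not reproduced here. Purely for illustration, the sketch below assumes min-max normalization of the three factors and a site-dependent weight combined as a weighted sum; this matches the variables the claim defines (A_a, B_a, C_a, the site weight) but not necessarily their exact combination.

```python
def normalize(values):
    # Min-max normalization to [0, 1], as the claim prescribes for
    # surgery duration, step count and key-step time.
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def difficulty_coefficients(durations, step_counts, key_step_times, site_weights):
    # Assumed aggregate: weighted sum of the normalized factors.
    # The patent's actual formula (an unreproduced image) may differ.
    A = normalize(durations)
    B = normalize(step_counts)
    C = normalize(key_step_times)
    return [w * (a + b + c)
            for a, b, c, w in zip(A, B, C, site_weights)]
```

The resulting coefficients would then be sorted in ascending order, as step S12 prescribes.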
4. The method for marking the playback and editing progress of a surgical video according to claim 1, wherein each key video segment in the target surgical video is obtained as follows:
Dividing the target surgical video into video segments of a preset duration; acquiring the surgical operation in each video segment through a deep learning model; acquiring the key operations of the surgery in the target surgical video from the database; and comparing the operation of each video segment with those key operations: if the operation of a video segment is identical to a key operation of the surgery in the target surgical video, the segment is called a key video segment of the target surgical video, and if it is not identical to any key operation, the segment is called a non-key video segment; in this way it is judged whether each video segment is a key video segment, and the key video segments of the target surgical video are acquired.
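The comparison step above reduces to matching each segment's detected operation against the surgery's key operations; a minimal sketch follows (the deep-learning detection itself is out of scope here, and the helper name is an assumption):

```python
def find_key_segments(segment_ops, key_ops):
    # A segment is a key video segment iff its detected surgical
    # operation matches one of the key operations from the database.
    key_set = set(key_ops)
    return [i for i, op in enumerate(segment_ops) if op in key_set]
```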
5. The method for marking the playback and editing progress of a surgical video according to claim 4, wherein obtaining the positions of the start time and end time of each key video segment on the virtual progress bar comprises the following steps:
S31, acquiring the total duration and the start time of the target surgical video;
S32, arranging the key video segments in chronological order and numbering them, acquiring the start time and end time of each key video segment, and obtaining according to the analysis formulas β_c = (D_c′ − D) / E and χ_c = (D_c″ − D) / E the play ratio β_c at the start time and the play ratio χ_c at the end time of the c-th key video segment, wherein D_c′ represents the start time of the c-th key video segment, D_c″ represents the end time of the c-th key video segment, D represents the start time of the target surgical video, E represents the total duration of the target surgical video, c represents the number of each key video segment, c = 1, 2, 3, …, d, d represents the total number of key video segments, and c and d are positive integers;
S33, taking the starting point of the virtual progress bar as the origin and the virtual progress bar as the abscissa axis, establishing a two-dimensional coordinate system; acquiring the length of the virtual progress bar, denoted F; and, from the play ratios at the start time and end time of each key video segment, obtaining according to the analysis formulas δ_c = β_c · F and ε_c = χ_c · F the position (δ_c, 0) of the start time and the position (ε_c, 0) of the end time of the c-th key video segment on the virtual progress bar.
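The claim's formulas are shown only as images in the source; from the variable definitions they appear to be play ratio = (segment time − video start) / total duration and position = ratio × bar length, and the sketch below assumes exactly that (an assumption, not the confirmed formulas):

```python
def progress_bar_positions(starts, ends, video_start, total_duration, bar_length):
    """Map each key segment's start/end time to (x, 0) points on the
    virtual progress bar: ratio = (t - video_start) / total_duration,
    position = ratio * bar_length."""
    points = []
    for d_start, d_end in zip(starts, ends):
        beta = (d_start - video_start) / total_duration   # play ratio at start
        chi = (d_end - video_start) / total_duration      # play ratio at end
        points.append(((beta * bar_length, 0.0), (chi * bar_length, 0.0)))
    return points
```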
6. The method for marking the playback and editing progress of a surgical video according to claim 5, wherein analyzing whether the time input by the user is correct comprises the following steps:
When the user inputs the start time and end time of the video segment to intercept in the start time input box and end time input box, acquiring the input start time and end time, the start time and end time of the target surgical video, and the start time and end time of each key video segment in the target surgical video; comparing the start time and end time input by the user: if the end time input by the user is smaller than the start time input by the user, the input time is wrong and the user is prompted; if the end time input by the user is larger than the start time input by the user, the input times are compared with the start time and end time of the target surgical video:
If the start time input by the user is smaller than the start time of the target surgical video, or the end time input by the user is larger than the end time of the target surgical video, the input time is wrong and the user is prompted; if the input start time is larger than the video's start time and the input end time is smaller than the video's end time, the input times are compared with the start time and end time of each key video segment:
Acquiring the start time input by the user and comparing it with the start time of each key video segment; if the start time of a key video segment is larger than the input start time, that key video segment is taken as a reference video segment, and each reference video segment is acquired in this way; calculating the difference between the start time of each reference video segment and the input start time, comparing the differences, and selecting the reference video segment with the smallest difference as the comparison video segment; and acquiring the start time and end time of the comparison video segment, the end time of the video segment preceding it and the start time of the video segment following it, as follows:
where D′ represents the start time input by the user, D″ represents the end time input by the user, D_1′ represents the start time of the comparison video segment, D_1″ represents the end time of the comparison video segment, D_0″ represents the end time of the video segment preceding the comparison video segment, D_2′ represents the start time of the video segment following the comparison video segment, D represents the start time of the target surgical video, D‴ represents the end time of the target surgical video, and φ represents the correctness coefficient of the user input time: when φ = 0, the time input by the user is wrong, and when φ = 1, the time input by the user is correct.
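The ordering checks of claim 6 can be sketched as below. The final comparison against the neighboring key segments uses a formula (the φ coefficient) that the source shows only as an image, so this sketch covers only the recoverable checks:

```python
def validate_times(start, end, video_start, video_end):
    # Check 1: the end time must follow the start time.
    if end <= start:
        return False
    # Check 2: the interval must lie inside the target surgical video.
    if start < video_start or end > video_end:
        return False
    # The claim's further key-segment comparison (the phi formula)
    # is not reproduced in the source and is omitted here.
    return True
```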
7. The method for marking the playback and editing progress of a surgical video according to claim 1, wherein analyzing the standard coefficient of the surgical operation in each marked video segment comprises the following steps:
Acquiring the operation types in each marked video segment and the surgery type of the target surgical video; comparing the surgery type of the target surgical video with each comparison surgery type in the database and selecting the comparison surgery type identical to it; acquiring each operation type under that comparison surgery type and comparing it with each operation type in each marked video segment; if the operation type of a marked video segment is identical to an operation type under the comparison surgery type, acquiring the standard range of each operation datum of that operation type and taking it as the standard range of each operation datum of that operation type in the marked video segment; in this way the standard range of each operation datum of each operation type in each marked video segment is acquired, whereby:
where the formula's symbols denote, respectively: the value of the g-th operation datum in the f-th marked video segment; the minimum of the standard range of the g-th operation datum in the f-th marked video segment; the maximum of that standard range; e, a natural constant; and the standard coefficient of the f-th marked video segment. f represents the number of each marked video segment, f = 1, 2, 3, …, i, where i represents the total number of marked video segments; g represents the number of each operation datum, g = 1, 2, 3, …, h, where h represents the total number of operation data; f, i, g and h are positive integers.
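The standard-coefficient formula itself is an image in the source and is not reproduced. As a loudly-labeled stand-in, the sketch below scores a segment by the fraction of its operation data lying within the standard ranges; the patent's real formula (which involves the natural constant e) may differ substantially.

```python
def standard_coefficient(values, ranges):
    # Assumed stand-in, not the patent's formula: fraction of the
    # f-th segment's operation data inside their standard [min, max].
    h = len(values)
    in_range = sum(1 for v, (lo, hi) in zip(values, ranges) if lo <= v <= hi)
    return in_range / h if h else 0.0
```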
8. The method for marking the playback and editing progress of a surgical video according to claim 7, wherein the video editing comprises the following steps:
After the target surgical video is marked, cutting each marked video segment in chronological order; acquiring the standard coefficient of each marked video segment during cutting and analyzing whether it meets the cutting requirement; if a marked video segment meets the cutting requirement, cutting it; if a marked video segment does not meet the cutting requirement, skipping it and cutting the next marked video segment; after cutting is completed, acquiring the marked video segments that did not meet the cutting requirement, calling them secondary marked video segments, and feeding them back to the user interface; after receiving the fed-back secondary marked video segments, the user judges whether each secondary marked video segment needs to be cut; if not, downloading the already-cut marked video segments; if so, acquiring the secondary marked video segments that need cutting, cutting them, and downloading all cut marked video segments together.
9. The method for marking the playback and editing progress of a surgical video according to claim 8, wherein analyzing whether each marked video segment meets the cutting requirement comprises the following steps:
Setting a standard coefficient limit value:
wherein γ_f represents the standard coefficient limit value of the f-th marked video segment;
comparing the standard coefficient of each marked video segment with its standard coefficient limit value: if the standard coefficient of a marked video segment is higher than its limit value, the segment meets the cutting requirement; if it is lower, the segment does not meet the cutting requirement; in this way it is judged whether each marked video segment meets the cutting requirement.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510305202.5A CN120111165B (en) | 2025-03-14 | 2025-03-14 | A method for marking the progress of surgical video playback and editing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120111165A CN120111165A (en) | 2025-06-06 |
| CN120111165B true CN120111165B (en) | 2025-10-17 |
Family
ID=95887111
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510305202.5A Active CN120111165B (en) | 2025-03-14 | 2025-03-14 | A method for marking the progress of surgical video playback and editing |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120111165B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103531218A (en) * | 2013-04-17 | 2014-01-22 | Tcl集团股份有限公司 | Online multimedia file editing method and system |
| CN111919260A (en) * | 2018-03-19 | 2020-11-10 | 威里利生命科学有限责任公司 | Surgical video retrieval based on preoperative images |
| CN113747230A (en) * | 2021-08-30 | 2021-12-03 | 维沃移动通信(杭州)有限公司 | Audio and video processing method and device, electronic equipment and readable storage medium |
| CN119153075A (en) * | 2024-11-12 | 2024-12-17 | 大连玖柒医疗科技有限公司 | Data analysis-based method and system for evaluating complications after cardiothoracic surgery |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104185077A (en) * | 2014-09-12 | 2014-12-03 | 飞狐信息技术(天津)有限公司 | Video editing method and device |
| CN113742527A (en) * | 2021-11-08 | 2021-12-03 | 成都与睿创新科技有限公司 | Method and system for retrieving and extracting operation video clips based on artificial intelligence |
| US20240273899A1 (en) * | 2023-02-13 | 2024-08-15 | Regents Of The University Of Michigan | AI-Powered Surgical Video Analysis |
| CN119484734A (en) * | 2023-08-11 | 2025-02-18 | 南京迈瑞生物医疗电子有限公司 | A surgical video editing method, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||