
WO2019209027A1 - Method and system for detecting and tracking video copy - Google Patents


Info

Publication number
WO2019209027A1
Authority
WO
WIPO (PCT)
Prior art keywords
identification information
candidate
area
video
average color
Prior art date
Application number
PCT/KR2019/004964
Other languages
French (fr)
Korean (ko)
Inventor
이준영
Original Assignee
리마 주식회사
Priority date
Filing date
Publication date
Application filed by 리마 주식회사
Priority claimed from KR1020190047761A external-priority patent/KR102227370B1/en
Publication of WO2019209027A1 publication Critical patent/WO2019209027A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N 21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/2389 Multiplex stream processing, e.g. multiplex stream encrypting
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream

Definitions

  • The present invention relates to a method and system for detecting and tracking video copies, and more particularly, to a method and system for preventing the distribution of videos through unauthorized copying and for detecting and tracking leaked copies of a video.
  • The original producer requires means to prevent unauthorized copying and distribution of content by third parties and, if content is leaked, to trace the leaker so that compensation can be claimed and punishment sought.
  • Among digital content, video content is one of the most actively leaked illegally.
  • This is because a URL containing the video can easily be obtained, and illegal downloads are possible using various software and programs based on the obtained URL.
  • DRM (Digital Rights Management)
  • One problem to be solved by the present invention is to provide a method and system that can generate a video from which the leaker of an unauthorized copy can be tracked.
  • Another problem to be solved by the present invention is to provide a method and system that can detect whether a distributed video is an unauthorized copy and track the leaker.
  • A computer-implemented method for detecting and tracking video duplication includes a candidate frame extracting step of extracting one or more candidate frames from an original video, each candidate frame comprising a candidate region into which identification information of the user who requested playback of the original video can be inserted; and generating a corrected video in which the identification information is inserted into the candidate region of a specific candidate frame, wherein the candidate region is a space over a specific range within the frame and is composed of colors within a reference-value difference.
  • The step of generating the corrected video includes setting, within the candidate area, an identification information input area having the average color of the candidate area; and inserting the user's identification information, in a color that differs from the average color by a minimum value, into the identification information input area.
  • The method for detecting and tracking video duplication may further include extracting, within the candidate area, an average color area that is composed of the average color of the candidate area and into which the user's identification information can be inserted, wherein the identification information input area is located in the average color area.
  • The method for detecting and tracking video duplication may further include extracting, within the candidate area, an average color area that is composed of the average color of the candidate area and into which identification information can be inserted; and inserting the user's identification information into the average color area, wherein the user's identification information is composed of a color that differs from the average color by a minimum value.
  • When there are a plurality of average color areas, the method for detecting and tracking video duplication may further include extracting a first average color area and a second average color area determined by a specific rule; separating the user's identification information into a first portion and a second portion; and inserting the first portion and the second portion into the first average color area and the second average color area, respectively.
  • A computer-implemented method for detecting and tracking video duplication includes a candidate frame extracting step of extracting one or more candidate frames from a video to be detected, each candidate frame comprising a candidate region into which identification information of the user who requested playback of the original video can be inserted; and searching the candidate region within a specific candidate frame for the identification information, wherein the candidate region is a space over a specific range within the frame and is composed of colors within a reference-value difference.
  • The searching for identification information includes searching for an area whose color differs from that of the candidate area by a minimum value, and the identification information is extracted from the area whose color differs by the minimum value.
  • When a plurality of areas whose color differs by the minimum value are detected, the method may further include extracting identification information from each of the plurality of areas; and combining the extracted identification information.
  • A system for detecting and tracking video duplication includes a candidate frame extractor that extracts one or more candidate frames from an original video, each candidate frame comprising a candidate region into which identification information of the user who requested playback of the original video can be inserted; and a generation unit that generates a corrected video in which the identification information is inserted into the candidate region of a specific candidate frame, wherein the candidate region is a space over a specific range within the frame and is composed of colors within a reference-value difference.
  • A system for detecting and tracking video duplication includes a candidate frame extractor that extracts one or more candidate frames from a duplicate video, each candidate frame comprising a candidate region into which identification information of the user who requested playback of the original video can be inserted; and a searcher that searches the candidate region within a specific candidate frame for the identification information, wherein the candidate region is a space over a specific range within the frame and is composed of colors within a reference-value difference.
  • According to the present invention, the video author can create a corrected video from which a leaker can be tracked, in preparation for unauthorized copying and leakage of the original video. The video author can therefore prepare in advance for damage caused by a video leak, and unauthorized copying and distribution of the video by playback requesters can be deterred.
  • When damage occurs due to a leaked video, the video author can track the leaker and claim compensation from the leaker or request that the leaker be punished.
  • Since the user's identification information is inserted into the corrected video in a way that is difficult to recognize with the naked eye, unauthorized copying of the video can be prevented without causing any inconvenience to viewers.
  • FIG. 1 is a block diagram schematically illustrating a configuration of a system for detecting and tracking a video copy according to an embodiment of the present invention.
  • FIG. 2 is a flowchart schematically showing a method for generating a corrected video from which a leaker can be traced according to an embodiment of the present invention.
  • FIG. 3 is an exemplary view for explaining a step of extracting a candidate frame from a plurality of frames constituting a video according to an embodiment of the present invention.
  • FIG. 4 is a flowchart schematically illustrating a process of inserting user identification information into an original video according to an embodiment of the present invention.
  • FIG. 5 is an exemplary view showing a screen of an original video and a corrected video according to an embodiment of the present invention.
  • FIG. 6 is an exemplary view for explaining how the user's identification information is inserted into the identification information input area according to another embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a method of generating a corrected moving picture that can be traced to a leaker, further comprising specifying an area in which identification information is inserted into an original video as an average color area according to another embodiment of the present invention.
  • FIG. 8 is an exemplary view illustrating a candidate region and an average color region in a candidate frame according to another embodiment of the present invention.
  • FIG. 9 is a flowchart schematically illustrating a method of dividing a user's identification information into a plurality of portions and inserting the identification information into a plurality of average color regions according to another embodiment of the present invention.
  • FIG. 10 is an exemplary view showing a case where there are a plurality of average color areas according to another embodiment of the present invention.
  • FIG. 11 is a flowchart schematically illustrating a video copy detection and tracking method according to an embodiment of the present invention.
  • The 'original video' is the final version, in the form intended for distribution, of a video produced by its original author, and means the video as it is, without any processing, modification, alteration, identification-information insertion, or duplication.
  • The 'corrected video' refers to a video generated by inserting the user's identification information into the original video in a specific way, so that a leaker can be tracked.
  • 'frame' refers to a cut of an image used in a video.
  • A 'key frame' refers to a central frame of a motion, such as the start frame or end frame of a single movement.
  • 'candidate frame' refers to a frame that satisfies a condition for inserting user's identification information by the method proposed in the present invention among a plurality of frames constituting the original video.
  • the "candidate area” means an area into which identification information of a user can be inserted in the candidate frame.
  • the candidate area is a space over a specific range within the frame, and is a region composed of colors within a difference of a reference value.
  • However, the present invention is not limited thereto, and candidate frames may be extracted using any feature that can distinguish frames in the technical field to which the present invention pertains.
  • the 'average color region' means a region composed of an average color of candidate regions in the candidate region.
  • 'identification information' refers to identification information of a user who requested to play a video.
  • The identification information may include, but is not limited to, the user's identification ID, the identification number of the playback device, the network IP address (Internet Protocol Address), the MAC address (Media Access Control Address), and the time of the playback request. Various other information capable of identifying the user may be included.
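The identifying fields listed above could be combined into a single string before being embedded in the video. The sketch below is purely illustrative: the field set, function name, and the '|' join format are assumptions, not anything the patent specifies.

```python
# Hypothetical composition of the identification string from the fields the
# text lists (user ID, playback-device ID, IP address, MAC address, request
# time). All names and the '|' join format are illustrative assumptions.
import datetime

def build_identification(user_id, device_id, ip, mac, when):
    """Join the identifying fields into one string to be embedded in the video."""
    return "|".join([user_id, device_id, ip, mac, when.isoformat()])

info = build_identification("USER01", "DEV-42", "192.0.2.10",
                            "00:1A:2B:3C:4D:5E",
                            datetime.datetime(2019, 4, 25, 12, 0))
print(info)  # USER01|DEV-42|192.0.2.10|00:1A:2B:3C:4D:5E|2019-04-25T12:00:00
```

Any format works as long as the detection side can parse the recovered string back into its fields.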
  • the 'identification information input area' means an area where the user's identification information is located.
  • The present invention includes an invention relating to a method for generating a corrected video from which a leaker can be traced, and an invention relating to a method of detecting a video copy and tracking the leaker.
  • First, the invention relating to the method for generating a corrected video from which a leaker can be tracked will be described.
  • FIG. 1 is a block diagram schematically illustrating a configuration of a system for detecting and tracking a video copy according to an embodiment of the present invention.
  • a system 1000 for detecting and tracking a video copy includes a corrected video generating server 100 and a copy video detecting and tracking server 200.
  • The corrected video generation server 100 and the duplicate video detection and tracking server 200 are not necessarily separate servers and may also be managed as one server.
  • The corrected video generation server 100 receives the original video 10 and inserts the identification information 60 of the user who requested playback into the original video 10, thereby generating a corrected video 20 for tracking a leaker.
  • the corrected video generating server 100 extracts at least one candidate frame 30 into which the identification information 60 of the user can be inserted from a plurality of frames constituting the original video 10.
  • the corrected video generating server 100 extracts the candidate region 40 from the extracted candidate frame 30 and inserts the identification information 60 of the user.
  • The duplicate video detection and tracking server 200 determines whether a distributed video is a duplicate and, if it is an unauthorized copy, tracks the leaker.
  • the duplicate video detection and tracking server 200 extracts at least one candidate frame 30 in which it is estimated that the user's identification information 60 has been inserted.
  • the duplicate video detection and tracking server 200 extracts the candidate region 40 in the candidate frame 30 extracted from the detection target video.
  • the duplicate video detection and tracking server 200 detects whether the identification information 60 of the user is inserted into the extracted candidate region 40, and extracts the detected identification information 60 of the user.
  • Detecting whether the identification information 60 of the user is inserted may be automatically performed by the server as described above, but the present invention is not limited thereto, and the detection requester may directly confirm the operation.
  • FIG. 2 is a flowchart schematically showing a method for generating a corrected video from which a leaker can be traced according to an embodiment of the present invention.
  • A method for generating a corrected video from which a leaker can be tracked includes extracting the candidate frame 30 into which identification information can be inserted from the original video 10 (S100), inserting identification information into the candidate region 40 (S200), and generating a corrected video 20 into which the identification information is inserted (S300).
  • FIG. 3 is an exemplary view for explaining a step (S100) of extracting a candidate frame from a plurality of frames constituting a video according to an embodiment of the present invention.
  • A video appears to the human eye to be moving because it displays a series of images called frames. One video is therefore composed of a plurality of frames, and the higher the frame rate, the smoother and clearer the video. The number of frames constituting one second of video is expressed in frames per second (fps).
  • Extracting the candidate frame 30 into which identification information can be inserted from the original video 10 means selecting and extracting, from the plurality of frames constituting the original video 10, frames that include a candidate region 40 satisfying the requirements for inserting the user's identification information 60.
  • The candidate area 40 is a space over a specific range within the frame into which identification information can be inserted, and means an area composed of colors within a reference-value difference.
  • The reference value may be set differently according to the type of original video, and means a difference at which the color variation is imperceptible, or only very subtle, when the frame is viewed with the naked eye.
  • For example, the frames in the first half include the candidate area 40, an area composed of colors within a reference-value difference at the same or a similar position.
  • the frames sharing the same feature are extracted as the candidate frame 30.
  • the candidate frame 30 may form a plurality of candidate groups, and each candidate group includes a candidate frame 30 into which identification information of the same color may be inserted.
  • the candidate group may be selected based on a key frame, but is not limited thereto.
  • Various methods may be applied to extract frames sharing colors within the reference-value difference.
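One way to realize the candidate-area test described above is to check that, over a rectangular region, every RGB channel stays within the reference-value difference. This is a minimal sketch under assumed names (`is_candidate_region`, `REFERENCE_VALUE`); the patent leaves the exact tolerance and region shape open.

```python
# Hypothetical candidate-region check: a region qualifies when all of its
# pixels' colors lie within the reference-value difference on each channel.
REFERENCE_VALUE = 8  # assumed tolerance; the patent says it depends on the video

def is_candidate_region(frame, top, left, height, width, ref=REFERENCE_VALUE):
    """Return True if the rectangle is composed of colors within `ref` per channel."""
    pixels = [frame[r][c] for r in range(top, top + height)
                          for c in range(left, left + width)]
    for ch in range(3):  # R, G, B
        values = [p[ch] for p in pixels]
        if max(values) - min(values) > ref:
            return False
    return True

# A frame whose top-left 2x2 block is a near-uniform green patch.
frame = [
    [(10, 180, 10), (12, 182, 11), (200, 40, 40)],
    [(11, 179, 12), (10, 181, 10), (205, 45, 42)],
]
print(is_candidate_region(frame, 0, 0, 2, 2))  # True: near-uniform block
print(is_candidate_region(frame, 0, 0, 2, 3))  # False: includes the red pixels
```

Frames for which this test succeeds at the same or a similar position could then be grouped into the candidate groups the text describes.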
  • FIG. 4 is a flowchart schematically illustrating a process of inserting user identification information into an original video according to an embodiment of the present invention.
  • The candidate region 40 refers to a region composed of colors within the reference-value difference shared by the candidate frames 30. In other words, if the color difference between a specific area of the frame and its adjacent areas is within the reference value, that area becomes the candidate area 40. Accordingly, candidate frames 30 sharing the same candidate region 40 can have identification information inserted at the same position with the same color and size.
  • Because the candidate frame and candidate region into which identification information may be inserted are determined in advance, there is no need to search all regions of all frames for identification information in the duplicate video detection and tracking step. Therefore, according to the present invention, duplicate videos can be detected efficiently and quickly.
  • The identification information input area 50 means the area where the user's identification information 60 is actually located. For example, it may serve as the background color behind the user identification information 60 composed of characters.
  • The identification information input area 50 is set to the average color of the candidate area 40 (S220). This makes the identification information input area 50 difficult for a viewer of the corrected video 20 to notice with the naked eye once it is inserted into the video, so that viewing is not inconvenienced.
  • the identification information 60 of the user is inserted to be located in the identification information input area 50 set as the average color of the candidate area 40.
  • the user identification information 60 is composed of a color that differs by the minimum value from the average color.
  • In the RGB system, each color channel value is represented by a number from 0 to 255, and the minimum value means a difference of 1.
  • the color of the user identification information 60 may be higher than the average color by one, or may be lower.
  • For example, if the average color of the candidate area 40 is a greenish color among the three primary colors and its Green value is 180, the color of the user identification information 60 may keep the same Red and Blue values as the average color while its Green value is set to 181 or 179, differing from 180 by 1.
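The average-color rule and the minimum-value offset above can be sketched as follows. The function names are assumptions, and offsetting the green channel simply mirrors the example in the text.

```python
# Illustrative sketch: the input area takes the candidate area's average
# color, and the identification pixels shift one channel (here green, index
# 1) by the minimum value of 1, staying inside the 0-255 range.

def average_color(pixels):
    """Per-channel integer average of a list of (R, G, B) tuples."""
    n = len(pixels)
    return tuple(sum(p[ch] for p in pixels) // n for ch in range(3))

def identification_color(avg, channel=1, delta=1):
    """Offset one channel of the average color by the minimum value."""
    value = avg[channel] + delta
    if value > 255:  # at the top of the range, offset downward instead
        value = avg[channel] - delta
    c = list(avg)
    c[channel] = value
    return tuple(c)

candidate_pixels = [(10, 180, 10), (12, 182, 11), (11, 179, 12), (10, 181, 10)]
avg = average_color(candidate_pixels)
print(avg)                        # (10, 180, 10)
print(identification_color(avg))  # (10, 181, 10): differs by 1 in green only
```

Pixels of the identification characters would be drawn in `identification_color(avg)` on a background of `avg`, which is the one-step difference the detection side later searches for.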
  • FIG 5 is an exemplary view showing a screen of an original video and a corrected video according to an embodiment of the present invention.
  • the original video 10 is a video before inserting identification information, and means a final version that is a form for distributing a video produced by the original author of the video.
  • the corrected video 20 refers to a video generated by the identification information inserted into the candidate area 40 by the embedding method of the present invention.
  • For example, the identification ID 'USER' may be inserted into the corrected video 20 as the identification information 60 of the user.
  • the identification information 60 of the user may include, but is not limited to, an identification number of a playback device, a network IP address (Internet Protocol Address), a MAC address (Media Access Control Address), and a time of a playback request.
  • Since the inserted identification ID 'USER' is effectively invisible to the playback requester, the viewer cannot recognize it during video playback.
  • FIG. 6 is an exemplary view for explaining how the user's identification information is inserted into the identification information input area according to another embodiment of the present invention.
  • the identification information input area 50 includes a space where the user's identification information 60 can be inserted.
  • the color of the identification information input area 50 is set to the average color of the candidate area 40, and the user's identification information 60 is set to a color different from the average color by a minimum value.
  • The identification information input area 50 and the user identification information 60 are shown distinguished from each other for explanation. On the actual playback screen, however, they may not be visually distinguishable, and the inserted identification information may go unrecognized.
  • FIG. 7 is a flowchart illustrating a method of generating a corrected moving picture that can be traced to a leaker, further comprising specifying an area in which identification information is inserted into an original video as an average color area according to another embodiment of the present invention.
  • The step (S200) of inserting identification information into the candidate area 40 will be described, focusing on the differences from FIG. 4.
  • the method further includes extracting the average color region 70 from the candidate region 40 as compared with FIG. 4.
  • the average color area is a specific area within the candidate area 40, which is composed of the average color of the candidate area 40, and means an area including a space into which the user identification information 60 can be inserted.
  • the identification information input area 50 is located in the average color area 70.
  • Since the identification information input area 50 is set to the average color of the candidate area 40, it is substantially the same color as the average color area 70.
  • When the identification information input area 50 is located in the average color area 70, it is even harder to discern visually than when it is located elsewhere in the candidate area 40, which makes viewing more comfortable for the playback requester.
  • FIG. 8 is an exemplary view illustrating a candidate region and an average color region in a candidate frame according to another embodiment of the present invention.
  • the candidate area 40 includes an area composed of colors that do not exactly match the average color of the candidate area 40 and an average color area 70 composed of an average color of the candidate area 40.
  • the candidate region 40 and the average color region 70 are distinguished from each other for explanation, but cannot be distinguished with the naked eye on the actual reproduction screen.
  • The user's identification information 60 may also be inserted on its own into the candidate area 40 of a candidate frame 30, without the identification information input area 50. That is, the identification information input area 50 is a means of facilitating the search for the user's identification information 60 by creating an area whose color differs by the minimum value in the duplicate video detection and tracking step.
  • Alternatively, the user's identification information 60 may be inserted into the average color area 70 rather than into an arbitrary area within the candidate area 40 of the candidate frame 30.
  • In this case, the color of the user identification information 60 differs from the average color by the minimum value. Accordingly, just as when the identification information input area 50 of the average color is used, it can be detected in the detection step as an area whose color differs by the minimum value.
  • FIG. 9 is a flowchart schematically illustrating a method of dividing a user's identification information into a plurality of portions and inserting the identification information into a plurality of average color regions according to another embodiment of the present invention.
  • The methods of inserting the user identification information 60 include a method of selecting one average color area and inserting the entire identification information 60 into it, and a method of separating the user's identification information 60 into a plurality of parts and inserting each part separately.
  • The plurality of average color regions 70 all consist of the average color of the candidate region 40 and are alike in that each includes a space into which the user's identification information 60 can be inserted.
  • However, the width and position of each average color area 70 differ. Therefore, a first average color area 71 and a second average color area 72 are extracted by a specific criterion. For example, the widest average color area may be selected as the first average color area 71 and the next widest as the second average color area 72.
  • the present invention is not limited thereto, and various standards other than the width may be applied.
  • FIG. 10 is an exemplary view showing a case where there are a plurality of average color areas according to another embodiment of the present invention.
  • A plurality of average color areas 70, each composed of the average color of the candidate area 40 and into which the identification information 60 of the user can be inserted, are present in the candidate area 40.
  • each region is referred to as a first average color region 71 and a second average color region 72.
  • the first average color area 71 and the second average color area 72 may correspond to the two widest areas among the plurality of average color areas 70.
  • Into the first average color region 71 and the second average color region 72, a first portion 61 and a second portion 62 obtained by separating the user identification information 60 are inserted, respectively.
  • For example, if the identification information 60 of the user is 'USER', it can be separated into a 'US' part and an 'ER' part, which are inserted into the first and second average color areas 71 and 72, respectively.
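The split-insertion scheme above can be sketched as follows. Selecting regions by area follows the example criterion in the text; the function names and region representation are illustrative assumptions.

```python
# Hypothetical sketch: split the identification string into two parts and
# assign them to the two widest average-color regions.

def split_identification(info):
    """Split the identification string into two halves ('USER' -> 'US', 'ER')."""
    mid = (len(info) + 1) // 2
    return info[:mid], info[mid:]

def pick_two_widest(regions):
    """regions: list of (name, area_in_pixels). Return the two largest by area."""
    ordered = sorted(regions, key=lambda r: r[1], reverse=True)
    return ordered[0], ordered[1]

regions = [("A", 120), ("B", 340), ("C", 260)]
first, second = pick_two_widest(regions)
part1, part2 = split_identification("USER")
print(first[0], second[0])  # B C
print(part1, part2)         # US ER
```

The detection side would apply the same selection rule so that the two recovered parts can be recombined in the correct order.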
  • the following describes an invention related to a method for detecting and tracking video copies.
  • FIG. 11 is a flowchart schematically illustrating a video copy detection and tracking method according to an embodiment of the present invention.
  • According to the present invention, it is not necessary to search the entire area of every frame constituting the video to be detected; instead, a specific candidate frame 30 is extracted and only its candidate area 40 is searched. Therefore, duplicate video detection and leak tracking can be performed more efficiently and economically.
  • Extracting the candidate frame 30 from the detection target video (S400) is the same as step S100 of extracting the candidate frame 30 from the original video 10, described above for the method of generating the corrected video 20.
  • Next, the step (S500) of searching the candidate region 40 in the extracted candidate frame 30 for the user's identification information 60 will be described.
  • Searching for the user identification information 60 targets the candidate area 40 existing in the specific candidate frame 30 extracted in step S400.
  • the identification information input area 50 in the candidate area 40 is set to the average color of the candidate area 40.
  • the inserted user identification information 60 includes colors that differ by the minimum value from the average color.
  • The duplicate video detection and tracking server 200 exploits the fact that the color value of the identification information input area 50 and the color value of the user identification information 60 differ by the minimum value in order to find the identification information inserted into the candidate area 40.
  • the duplicate video detection and tracking server 200 searches for an area in which the difference in color values differs by one, which is the minimum value, in the candidate area 40 to be detected.
  • the duplicate video detection and tracking server 200 extracts and obtains the identification information 60 of the user from the corresponding area when a region where the difference in the color value is different by the minimum value 1 is detected.
  • for example, suppose the color of the identification information input area 50 is the average color of the candidate area 40, and the green channel value of that average color is 180.
  • the color of the identification information 60 of the user may then have the same red and blue channel values as the identification information input area 50, while its green channel value may be 181 (or 179).
  • the duplicate video detection and tracking server 200 searches for an area in which the green channel values differ by the minimum value of 1.
  • the green channel values of the identification information input area 50 and the user identification information 60 are 180 and 181, respectively; since they differ by the minimum value of 1, the area is detected by the duplicate video detection and tracking server 200.
  • the duplicate video detection and tracking server 200 extracts and obtains the user identification information 60 from the detected area.
  • the search continues until such an area is found.
  • the step of extracting the average color region 70 may be further included.
  • in the process of generating the corrected video, the area into which the identification information 60 of the user is inserted is specified as the average color area 70, which exists within the candidate area 40 and is composed of the average color of the candidate area 40.
  • the duplicate video detection and tracking server 200 searches only the extracted average color area 70 instead of the entire candidate area 40. The search target area therefore becomes narrower, which enables a faster and more efficient search.
  • the user's identification ID 'USER' is divided into 'US', which is the first part 61, and 'ER', which is the second part 62; 'US' is inserted into the first average color area 71 and 'ER' into the second average color area 72, respectively.
  • the duplicate video detection and tracking server 200 extracts and acquires the identification information parts 'US' and 'ER' from the first average color area 71 and the second average color area 72, respectively, and combines them to obtain the entire identification information.
  • the order in which the detected parts of identification information are combined follows the same criteria used to select the first average color area 71 and the second average color area 72.
  • for example, if the widest average color area is selected as the first average color area 71 and the next widest as the second average color area 72, the combining order follows the same rule: the identification information detected in the first average color area 71 is taken as the first part 61, the identification information detected in the second average color area 72 as the second part 62, and the first part 61 followed by the second part 62 are combined to complete the entire identification information.
  • the software module in which the method is implemented may reside in RAM (random access memory), ROM (read only memory), EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.
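The detection and reassembly steps in the bullets above can be sketched in code. This is a minimal illustration, not the patented implementation: the helper names (`marked_pixel_mask`, `combine_parts`), the use of NumPy arrays, and the RGB channel layout are assumptions, and a real detector would additionally have to read the glyph shapes out of the mask to recover the text.

```python
import numpy as np

def marked_pixel_mask(candidate_region: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose green channel differs from the
    candidate area's average color by exactly 1 (the minimum step),
    while the red and blue channels match the average color."""
    avg = candidate_region.reshape(-1, 3).mean(axis=0).round().astype(int)
    same_rb = ((candidate_region[:, :, 0] == avg[0]) &
               (candidate_region[:, :, 2] == avg[2]))
    green_off = np.abs(candidate_region[:, :, 1].astype(int) - avg[1]) == 1
    return same_rb & green_off

def combine_parts(detected) -> str:
    """detected: list of (area_in_pixels, text_part) pairs, one per
    average color area. Parts are joined widest-area first, matching
    the selection order of areas 71 and 72 described above."""
    return "".join(part for _, part in sorted(detected, key=lambda t: -t[0]))
```

For instance, `combine_parts([(50, 'ER'), (120, 'US')])` reassembles `'USER'`, because the 120-pixel area is the widest and therefore holds the first part.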

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Processing (AREA)

Abstract

A method and a system for detecting and tracking a video copy are provided. The method performed by a computer comprises: a candidate frame extraction step of extracting one or more candidate frames from an original video, wherein the candidate frames include candidate areas in which identification information of a user having requested playing of an original video can be inputted; and a step of generating a corrected video in which the identification information has been inputted in the candidate area within a specific candidate frame, wherein the candidate area is a space having a specific range or more within a frame, and has a color within a reference value difference.

Description

Method and system for video copy detection and tracking
The present invention relates to a method and system for detecting and tracking video copies, and more particularly, to a method and system for preventing unauthorized copying and distribution of a video and for detecting a copied video and tracking the person who leaked it.
As information and communication technologies and networks have developed, various kinds of content such as education, publishing, music, movies, and games, which used to be provided offline, are now produced in the form of digital data and distributed and provided online.

Unlike offline distribution, content produced in the form of digital data can easily be obtained, copied, transformed, processed, modified, and propagated once the original is acquired, so original content creators frequently suffer damage from illegal leaks.

Therefore, in order to preserve the value of the content they produce and to generate legitimate profit, original creators essentially require a means of preventing their content from being copied and distributed without authorization by third parties, and of tracking the leaker when a leak occurs so that appropriate compensation and punishment can be claimed.

In particular, among digital content, video content is one of the fields in which illegal leakage is most active. This is because the URL at which a video resides can easily be obtained, and the video can then be illegally downloaded using various software tools and programs based on the obtained URL.

Meanwhile, a method currently widely used for copy protection of video content is Digital Rights Management (DRM), which encrypts the video itself. DRM refers to a solution that encrypts digital content so that only customers who have legitimately purchased it can use it. However, many ways exist to defeat and circumvent DRM, such as programs that can capture or record the entire screen.
The problem to be solved by the present invention is to provide a method and system capable of generating a video from which the leaker of an unauthorized copy can be traced.

Another problem to be solved by the present invention is to provide a method and system capable of detecting whether a distributed video is an unauthorized copy and of tracking the leaker.

The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.
According to one aspect of the present invention for solving the above problems, a method for video copy detection and tracking, performed by a computer, comprises: a candidate frame extraction step of extracting one or more candidate frames from an original video, the candidate frames including a candidate area into which identification information of a user who requested playback of the original video can be inserted; and a step of generating a corrected video in which the identification information is inserted into the candidate area within a specific candidate frame, wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors within a reference value difference.

According to another aspect of the present invention for solving the above problems, the generating of the corrected video comprises: setting an identification information input area within the candidate area to the average color of the candidate area; and inserting the user's identification information into the identification information input area in a color that differs from the average color by a minimum value.

According to still another aspect of the present invention for solving the above problems, the method further comprises extracting, within the candidate area, an average color area composed of the average color of the candidate area and into which the user's identification information can be inserted, wherein the identification information input area is located in the average color area.

According to still another aspect of the present invention for solving the above problems, the generating of the corrected video comprises: extracting, within the candidate area, an average color area composed of the average color of the candidate area and into which the user's identification information can be inserted; and inserting the user's identification information into the average color area, wherein the user's identification information is composed of a color that differs from the average color by a minimum value.

According to still another aspect of the present invention for solving the above problems, when there are a plurality of average color areas, the method further comprises: extracting a first average color area and a second average color area according to a specific rule; separating the user's identification information into a first part and a second part; and inserting the first part and the second part into the first average color area and the second average color area, respectively.
According to still another aspect of the present invention for solving the above problems, a method for video copy detection and tracking, performed by a computer, comprises: a candidate frame extraction step of extracting one or more candidate frames from a detection target video, the candidate frames including a candidate area into which identification information of a user who requested playback of the original video can be inserted; and a step of searching the candidate area within a specific candidate frame for the identification information, wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors within a reference value difference.

According to still another aspect of the present invention for solving the above problems, the identification information searching step comprises searching the candidate area for an area whose color differs by the minimum value, and when an area whose color differs by the minimum value is detected, the identification information is extracted from that area.

According to still another aspect of the present invention for solving the above problems, when a plurality of areas whose color differs by the minimum value are detected, the method further comprises: extracting identification information from each of the plurality of areas; and combining the extracted pieces of identification information.

According to still another aspect of the present invention for solving the above problems, a system for video copy detection and tracking comprises: a candidate frame extractor for extracting one or more candidate frames from an original video, the candidate frames including a candidate area into which identification information of a user who requested playback of the original video can be inserted; and a generator for generating a corrected video in which the identification information is inserted into the candidate area within a specific candidate frame, wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors within a reference value difference.

According to still another aspect of the present invention for solving the above problems, a system for video copy detection and tracking comprises: a candidate frame extractor for extracting one or more candidate frames from a copied video, the candidate frames including a candidate area into which identification information of a user who requested playback of the original video can be inserted; and a searcher for searching the candidate area within a specific candidate frame for the identification information, wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors within a reference value difference.
Other specific details of the invention are included in the detailed description and drawings.
According to the present invention described above, a video author can generate a corrected video from which a leaker can be traced, in preparation for the original video being copied and leaked without authorization. Therefore, the video author can prepare in advance for damage caused by a video leak, and users who request playback of the video are deterred from copying and distributing it without authorization.

Also, according to the present invention, it is possible to determine whether a distributed video is an unauthorized copy and, if so, to detect and track the leaker. Therefore, when damage occurs due to a video leak, the video author can trace the leaker and demand compensation or request that the leaker be punished.

Also, according to the present invention, since the user's identification information is inserted into the corrected video in a manner that is difficult to recognize with the naked eye, unauthorized copying of the video can be prevented without inconveniencing the viewers watching it.

The effects of the present invention are not limited to those mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.
FIG. 1 is a block diagram schematically illustrating the configuration of a system for video copy detection and tracking according to an embodiment of the present invention.

FIG. 2 is a flowchart schematically illustrating a method of generating a corrected video from which a leaker can be traced, according to an embodiment of the present invention.

FIG. 3 is an exemplary diagram for explaining the step of extracting candidate frames from among a plurality of frames constituting a video according to an embodiment of the present invention.

FIG. 4 is a flowchart schematically illustrating a process of inserting a user's identification information into an original video according to an embodiment of the present invention.

FIG. 5 is an exemplary diagram showing screens of an original video and a corrected video according to an embodiment of the present invention.

FIG. 6 is an exemplary diagram for explaining how a user's identification information is inserted into an identification information input area according to another embodiment of the present invention.

FIG. 7 is a flowchart illustrating a method of generating a corrected video from which a leaker can be traced, further including a step of specifying the area of the original video into which identification information is inserted as an average color area, according to another embodiment of the present invention.

FIG. 8 is an exemplary diagram for explaining a candidate area and an average color area within a candidate frame according to another embodiment of the present invention.

FIG. 9 is a flowchart schematically illustrating a method of dividing a user's identification information into a plurality of parts and inserting them into a plurality of average color areas, respectively, according to another embodiment of the present invention.

FIG. 10 is an exemplary diagram showing a case in which there are a plurality of average color areas according to another embodiment of the present invention.

FIG. 11 is a flowchart schematically illustrating a video copy detection and tracking method according to an embodiment of the present invention.
The advantages and features of the present invention, and methods of achieving them, will become apparent with reference to the embodiments described below in detail together with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be embodied in various different forms; the present embodiments are provided only to make the disclosure of the present invention complete and to fully inform those of ordinary skill in the art of the scope of the invention, which is defined only by the scope of the claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. In this specification, the singular also includes the plural unless specifically stated otherwise. As used herein, "comprises" and/or "comprising" does not exclude the presence or addition of one or more components other than those mentioned. Like reference numerals refer to like elements throughout the specification, and "and/or" includes each and every combination of one or more of the mentioned components. Although the terms "first", "second", and so on are used to describe various components, these components are of course not limited by these terms; the terms are used only to distinguish one component from another. Therefore, a first component mentioned below may also be a second component within the technical spirit of the present invention.
In this specification, an "original video" means the final version of a video, in the form in which its original author prepared it for distribution, without any processing, modification, alteration, insertion of identification information, or copying.

In this specification, a "corrected video" means a video generated by inserting a user's identification information into the original video in a specific manner, from which a leaker can be traced.

In this specification, a "frame" means a single image used in a video.

In this specification, a "key frame" means the most central frame, such as the start frame and the end frame of a single motion.

In this specification, a "candidate frame" means a frame, among the plurality of frames constituting the original video, that satisfies the conditions for inserting a user's identification information by the method proposed in the present invention.

In this specification, a "candidate area" means an area within a candidate frame into which a user's identification information can be inserted.

For example, according to an embodiment of the present invention, a candidate area is a space of at least a specific extent within a frame, composed of colors within a reference value difference. However, the candidate area is not limited to this, and may be extracted using any frame-distinguishing feature available in the technical field to which the present invention pertains.

In this specification, an "average color area" means an area within a candidate area that is composed of the average color of the candidate area.

In this specification, "identification information" means identification information of the user who requested playback of a video. For example, it may include, but is not limited to, the user's identification ID, the identification number of the playback device, the network IP address (Internet Protocol Address), the MAC address (Media Access Control Address), and the time at which playback was requested; various other information capable of identifying the user may also be included.
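As a small illustration of how such identification information might be packed into a single string before insertion, the sketch below joins the example fields listed above. The field order, the `|` separator, and the timestamp format are assumptions made for this sketch, not part of the specification.

```python
from datetime import datetime, timezone
from typing import Optional

def build_identification_info(user_id: str, device_id: str,
                              ip_address: str, mac_address: str,
                              requested_at: Optional[datetime] = None) -> str:
    """Join the example identifying fields (user ID, playback device ID,
    IP address, MAC address, playback request time) into one string."""
    ts = (requested_at or datetime.now(timezone.utc)).strftime("%Y%m%d%H%M%S")
    return "|".join([user_id, device_id, ip_address, mac_address, ts])
```

A string built this way would then be what gets rendered into the identification information input area of the corrected video.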
In this specification, an "identification information input area" means the area in which the user's identification information is located.

Unless otherwise defined, all terms (including technical and scientific terms) used in this specification have the meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless they are clearly and specifically defined.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

The present invention broadly includes an invention related to a method of generating a corrected video from which a leaker can be traced, and an invention related to a method of detecting a video copy and tracking the leaker. First, the invention related to the method of generating a corrected video from which a leaker can be traced is described.
FIG. 1 is a block diagram schematically illustrating the configuration of a system for video copy detection and tracking according to an embodiment of the present invention.

Referring to FIG. 1, the system 1000 for video copy detection and tracking includes a corrected video generation server 100 and a copied video detection and tracking server 200.

The corrected video generation server 100 and the copied video detection and tracking server 200 are not necessarily separate servers; the case in which they are managed as a single server is also included.

The corrected video generation server 100 receives the original video 10 and inserts into it the identification information 60 of the user who requested playback, thereby generating a corrected video 20 from which the leaker can be traced.

To this end, the corrected video generation server 100 extracts, from the plurality of frames constituting the original video 10, at least one candidate frame 30 into which the user's identification information 60 can be inserted. The corrected video generation server 100 then extracts a candidate area 40 within the extracted candidate frame 30 and inserts the user's identification information 60 into it.
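The insertion performed by server 100 can be sketched as follows. This is a minimal, hedged illustration under several assumptions not stated in the patent: frames are RGB NumPy arrays, the identification text has already been rasterized into a boolean glyph mask, and the green channel carries the minimum-value offset (e.g. 181 against an average of 180).

```python
import numpy as np

def insert_identification(frame: np.ndarray, box, glyph_mask: np.ndarray) -> np.ndarray:
    """Fill the identification information input area 50 with the average
    color of the area, then draw the identification glyphs in a color whose
    green channel is offset from that average by the minimum value of 1.

    frame: H x W x 3 uint8 RGB candidate frame 30.
    box: (top, left, height, width) of the input area inside candidate area 40.
    glyph_mask: (height, width) boolean array, True where the text covers a pixel.
    """
    top, left, h, w = box
    out = frame.copy()
    area = out[top:top + h, left:left + w]  # view into the copy
    avg = area.reshape(-1, 3).mean(axis=0).round().astype(np.uint8)
    area[:] = avg  # background becomes exactly the average color
    ink = avg.astype(int)
    ink[1] = ink[1] + 1 if ink[1] < 255 else ink[1] - 1  # minimum green offset
    area[glyph_mask] = ink
    return out
```

In practice the same insertion would be applied to each extracted candidate frame before the corrected video 20 is re-encoded.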
The copied video detection and tracking server 200 determines whether a distributed video is a copied video and, if it is an unauthorized copy, tracks the leaker.

To this end, the copied video detection and tracking server 200 extracts at least one candidate frame 30 into which the user's identification information 60 is presumed to have been inserted, and extracts a candidate area 40 within the candidate frame 30 extracted from the detection target video. The copied video detection and tracking server 200 then detects whether the user's identification information 60 is inserted in the extracted candidate area 40, and extracts any identification information 60 detected.

Detecting whether the user's identification information 60 is inserted may be performed automatically by the server as described above, but is not limited to this; the person requesting detection may also check by direct operation.
도 2는 본 발명의 일 실시예에 따른 유출자 추적이 가능한 보정 동영상 생성방법을 개략적으로 나타내는 흐름도이다.2 is a flowchart schematically showing a method for generating a corrected moving picture that can be traced out according to an embodiment of the present invention.
도 2를 참조하면, 본 발명의 일 실시예에 따른 유출자 추적이 가능한 보정 동영상 생성방법은, 원본 동영상(10)에서 식별정보를 삽입할 수 있는 후보프레임(30)을 추출하는 단계(S100), 후보영역(40)에 식별정보를 삽입하는 단계(S200), 식별정보가 삽입된 보정 동영상(20)을 생성하는 단계(S300)를 포함한다. 이하, 각 단계에 대한 상세한 설명을 기술한다.Referring to Figure 2, according to an embodiment of the present invention, a method for generating a corrected moving picture tracking, extracting the candidate frame 30 into which identification information can be inserted from the original video (10) (S100), And inserting identification information into the candidate region 40 (S200), and generating a corrected video 20 into which the identification information is inserted (S300). Hereinafter, a detailed description of each step will be described.
Fig. 3 is an exemplary diagram illustrating the step (S100) of extracting candidate frames from the plurality of frames constituting a video, according to an embodiment of the present invention.
A video creates the appearance of motion to the human eye by displaying a rapid sequence of still images called frames. One video therefore consists of a plurality of frames, and the higher the frame rate, the smoother and clearer the video appears. The number of frames making up one second of video is expressed as "x frames per second", in fps, and so on.
The step (S100) of extracting candidate frames 30 into which identification information can be inserted selects, from among the plurality of frames constituting the original video 10, those frames that contain a candidate region 40 satisfying the requirements for inserting the user's identification information 60.
The candidate region 40 is an area within a frame that is at least large enough to hold the identification information and whose colors all lie within a reference-value difference of one another.
According to an embodiment of the present invention, the reference value may be set differently depending on the type of the original video, and denotes a difference at which the color variation is imperceptible, or only barely perceptible, to the naked eye when viewing the frame.
Referring to Fig. 3, some of the frames constituting a video are drawn as an example. Among the illustrated frames, those in the first half contain, at the same or a similar position, a candidate region 40, that is, an area composed of colors within the reference-value difference. Frames sharing this common feature are extracted as candidate frames 30.
According to an embodiment of the present invention, the candidate frames 30 may form a plurality of candidate groups, each consisting of candidate frames 30 into which identification information of the same color can be inserted. The candidate groups may be selected with reference to key frames, but the method is not limited to this; various techniques can be applied to extract frames that share colors within the reference value.
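As an illustration only, the candidate-frame selection described above can be sketched in Python. The fixed block scan, the block size of 32 pixels, and the spread threshold of 8 are assumptions made for this sketch and are not values prescribed by the invention:

```python
import numpy as np

def find_candidate_region(frame, threshold=8, block=32):
    """Scan a frame (H x W x 3 uint8 array) in fixed-size blocks and
    return the top-left corner of the first block whose per-channel
    color spread stays within `threshold`, or None if no block
    qualifies. Block size and threshold are illustrative choices."""
    h, w, _ = frame.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = frame[y:y + block, x:x + block].astype(np.int16)
            spread = patch.max(axis=(0, 1)) - patch.min(axis=(0, 1))
            if (spread <= threshold).all():
                return (y, x)
    return None

def extract_candidate_frames(frames, threshold=8, block=32):
    """Keep the indices of frames that contain at least one candidate region."""
    return [i for i, f in enumerate(frames)
            if find_candidate_region(f, threshold, block) is not None]
```

A frame of nearly uniform color yields a candidate region, while a frame with strong color gradients does not, so only the former is kept as a candidate frame.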
Fig. 4 is a flowchart schematically illustrating the process of inserting the user's identification information into the original video, according to an embodiment of the present invention.
The step (S200) of inserting the identification information into the candidate region 40 comprises: extracting the candidate region 40 from a candidate frame 30 of the original video 10 (S210); setting the identification information input area 50 to the average color of the candidate region (S220); and inserting the user's identification information 60 into the identification information input area 50 (S230).
The candidate region 40 is the area, shared by the candidate frames 30, whose colors lie within the reference-value difference. That is, if the color differences between adjacent parts of a given area of the frame are within the reference value, that area becomes the candidate region 40. Candidate frames 30 that share the same candidate region 40 can therefore have the identification information inserted at the same position, in the same color and size.
Unlike when identification information is inserted at an arbitrary position, the duplicate video detection and tracking stage does not need to scan every area of every frame; only the candidate frames and candidate regions where the identification information could have been inserted need to be examined. The present invention therefore enables efficient and rapid detection of duplicate videos.
The identification information input area 50 is the area in which the user's identification information 60 is actually placed. For example, it may appear as the background color of a user's identification information 60 composed of characters.
The identification information input area 50 is set to the average color of the candidate region 40 (S220). This makes the area difficult for a viewer of the corrected video 20 to notice with the naked eye once it is inserted, so that viewing is not disturbed.
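The average color used for the identification information input area can be computed per channel, for example as follows; the region representation as a (y, x, height, width) tuple is an assumption of this sketch:

```python
import numpy as np

def candidate_region_average_color(frame, region):
    """Compute the average RGB color of a candidate region.
    `region` is (y, x, h, w); each channel mean is rounded to the
    nearest integer so the result can be written back as a pixel color."""
    y, x, h, w = region
    patch = frame[y:y + h, x:x + w].astype(np.float64)
    return tuple(int(round(c)) for c in patch.mean(axis=(0, 1)))
```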
The user's identification information 60 is inserted so as to lie within the identification information input area 50, which is set to the average color of the candidate region 40. The user's identification information 60 itself is rendered in a color that differs from that average color by a minimum value.
According to an embodiment of the present invention, when a color is expressed in the RGB (Red, Green, Blue) system, the intensity of each channel is represented by a number from 0 to 255: the closer to 255, the stronger that channel, and the closer to 0, the weaker. Here, the minimum value means a difference of 1 in the RGB system.
The color of the user's identification information 60 may be either 1 higher or 1 lower than the average color.
For example, if the average color of the candidate region 40 is a greenish color among the three primary colors and its green (Green) value is 180, the color of the user's identification information 60 can be set so that its red (Red) and blue (Blue) values are identical to those of the average color while its green value is 181 or 179, differing from 180 by 1.
Since this color difference is too small to distinguish with the naked eye, it does not disturb the viewer who requested playback of the video. In the detection stage, on the other hand, this minimum-value color difference plays a crucial role, as described in detail below.
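The derivation of the text color from the background's average color can be sketched as follows. The choice of the green channel and the +1 direction as defaults is an illustrative assumption; the invention allows either direction and, implicitly, any channel:

```python
def watermark_color(avg_color, channel=1, direction=1):
    """Derive the identification-text color from the background's
    average color: identical in every channel except one, which is
    offset by the minimum value 1, flipping direction when the offset
    would leave the 0-255 range."""
    color = list(avg_color)
    offset = direction if 0 <= color[channel] + direction <= 255 else -direction
    color[channel] += offset
    return tuple(color)
```

With an average color of (90, 180, 40), the text color becomes (90, 181, 40), a difference invisible to the naked eye but exact enough to locate algorithmically.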
Fig. 5 is an exemplary diagram showing screens of the original video and the corrected video according to an embodiment of the present invention.
The original video 10 is the video before identification information is inserted, that is, the final version prepared by the video's creator for distribution.
The corrected video 20 is the video generated by inserting the identification information within the candidate region 40 according to the insertion method of the present invention.
In the example shown in Fig. 5, the identification ID 'USER' may be inserted into the corrected video 20 as the user's identification information 60. Besides the identification ID, the user's identification information 60 may include, but is not limited to, the identification number of the playback device, the network IP address (Internet Protocol Address), the MAC address (Media Access Control Address), and the time at which playback was requested. The inserted identification ID 'USER' is not actually visible to the person requesting playback, so the viewer cannot perceive it while the video is playing.
Fig. 6 is an exemplary diagram illustrating the user's identification information inserted into the identification information input area, according to another embodiment of the present invention.
The identification information input area 50 includes enough space to hold the user's identification information 60. The color of the identification information input area 50 is set to the average color of the candidate region 40, and the user's identification information 60 is set to a color that differs from that average color by the minimum value.
For the sake of explanation, the diagram depicts the identification information input area 50 and the user's identification information 60 as distinguishable; on an actual playback screen they cannot be distinguished with the naked eye, and the inserted identification information cannot be perceived.
Fig. 7 is a flowchart showing a method of generating a corrected video that enables leaker tracing, further including a step of narrowing the area into which identification information is inserted in the original video to an average color area, according to yet another embodiment of the present invention.
The step (S200) of inserting identification information into the candidate region 40 is described below, focusing on the differences from Fig. 4.
Referring to Fig. 7, compared with Fig. 4, a step of extracting the average color area 70 from the candidate region 40 is additionally included.
The average color area is a particular area within the candidate region 40 that consists of the average color of the candidate region 40 and contains enough space to hold the user's identification information 60.
According to this embodiment of the present invention, the identification information input area 50 is placed within the average color area 70. Because the identification information input area 50 is set to the average color of the candidate region 40, its color is in practice exactly identical to that of the average color area 70.
When the identification information input area 50 is placed within the average color area 70, it is even harder to distinguish with the naked eye than when it is placed elsewhere in the candidate region 40, which makes viewing more comfortable for the person who requested playback of the video.
Fig. 8 is an exemplary diagram illustrating the candidate region and the average color area within a candidate frame according to yet another embodiment of the present invention.
Within a candidate frame 30 there is a candidate region 40 whose color differences are at or below the reference value. The candidate region 40 contains both areas whose colors do not exactly match its average color and an average color area 70 consisting exactly of its average color.
For the sake of explanation, the diagram depicts the candidate region 40 and the average color area 70 as distinguishable; on an actual playback screen they cannot be distinguished with the naked eye.
According to another embodiment of the present invention, the user's identification information 60 may also be inserted on its own into the candidate region 40 of a candidate frame 30 of the video, without an identification information input area 50. In other words, the identification information input area 50 is merely a means of creating an area whose color differs by the minimum value, in order to make it easier to search for the user's identification information 60 during the duplicate video detection and tracking stage.
A further embodiment describes how an area whose color differs by the minimum value can still be created even when the user's identification information 60 is inserted on its own, without an identification information input area 50.
The user's identification information 60 can be inserted into the average color area 70, rather than into an arbitrary part of the candidate region 40 of the candidate frame 30. In this case, the color of the user's identification information 60 differs from the average color by the minimum value. It can therefore be detected in the detection stage as an area whose color differs by the minimum value, just as when it is placed within an identification information input area 50 of the average color as described above.
Next, the case in which there is more than one average color area 70 is described.
Fig. 9 is a flowchart schematically showing a method of splitting the user's identification information into a plurality of parts and inserting each part into one of a plurality of average color areas, according to yet another embodiment of the present invention.
When there are a plurality of average color areas 70, the user's identification information 60 may be inserted in several ways: inserting the entire identification information 60 redundantly into each area; selecting a single average color area and inserting it there; or splitting the identification information 60 into a plurality of parts and inserting one part into each area.
The method of splitting the user's identification information 60 into a plurality of parts and inserting each part into one of a plurality of average color areas 70 comprises: extracting the candidate region 40 from a candidate frame 30 of the original video (S210); extracting the average color areas 70 from the candidate region 40 (S211); extracting a first average color area 71 and a second average color area 72 from the average color areas 70 (S213); splitting the user's identification information into a first part 61 and a second part 62 (S215); and inserting the first part 61 and the second part 62 of the user's identification information into the first average color area 71 and the second average color area 72, respectively (S217).
As an example, a method of extracting the first average color area 71 and the second average color area 72 from the plurality of average color areas 70 is as follows. The plurality of average color areas 70 are alike in that each consists of the average color of the candidate region 40 and contains space into which the user's identification information 60 can be inserted; their sizes and positions, however, all differ. The first average color area 71 and the second average color area 72 are therefore extracted according to a specific criterion. For example, the largest average color area by area can be selected as the first average color area 71 and the next largest as the second average color area 72. The criterion is not limited to area, however, and various other criteria can be applied.
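The area-based selection and the split of the identification string can be sketched together as follows. The (y, x, height, width) region representation and the split-in-half rule for the string are illustrative assumptions of this sketch:

```python
def split_identification(id_text, regions):
    """Pick the two largest average-color regions by area and split
    the identification string into a first and a second part for them.
    Each region is (y, x, h, w); ordering follows the largest-area
    criterion described above."""
    first, second = sorted(regions, key=lambda r: r[2] * r[3],
                           reverse=True)[:2]
    mid = len(id_text) // 2  # e.g. 'USER' -> 'US' + 'ER'
    return (id_text[:mid], first), (id_text[mid:], second)
```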
Fig. 10 is an exemplary diagram showing the case in which there are a plurality of average color areas according to yet another embodiment of the present invention.
In the example shown in Fig. 10, a plurality of average color areas 70, each consisting of the average color of the candidate region 40 and capable of holding the user's identification information 60, exist within the candidate region 40.
In one embodiment, as shown in Fig. 10, when a plurality of average color areas 70 exist, the individual areas are referred to as the first average color area 71 and the second average color area 72. The first average color area 71 and the second average color area 72 may correspond to the two largest of the plurality of average color areas 70.
The first part 61 and the second part 62, obtained by splitting the user's identification information 60, are inserted into the first average color area 71 and the second average color area 72, respectively.
Referring to Fig. 10, when the user's identification information 60 is 'USER', it can be split into a 'US' part and an 'ER' part, which are inserted into the first average color area 71 and the second average color area 72, respectively.
In this way, even when no single average color area 70 has enough space on its own to hold the user's identification information 60, the invention can still be practiced by splitting the identification information 60 and inserting the parts separately.
The invention relating to a method of detecting and tracking video copies is now described.
Fig. 11 is a flowchart schematically showing a method of detecting and tracking video copies according to an embodiment of the present invention.
Referring to Fig. 11, the method of detecting and tracking video copies comprises: extracting candidate frames 30 from the video under inspection (S400); searching the candidate region 40 within each extracted candidate frame 30 for the user's identification information 60 (S500); and extracting the inserted user identification information 60 (S600).
According to the present invention, detecting and tracking a video copy does not require scanning the entire area of every one of the frames constituting the video under inspection; it suffices to extract the specific candidate frames 30 and search only the specific candidate regions 40. This enables more efficient and economical detection of duplicate videos and tracing of leakers.
The step (S400) of extracting candidate frames 30 from the video under inspection is identical in content to the step (S100) of extracting candidate frames 30 from the original video 10 described above for the method of generating the corrected video 20.
The step (S500) of searching the candidate region 40 within each extracted candidate frame 30 for the user's identification information 60 is described next.
The search for the user's identification information 60 (S500) targets the candidate regions 40 present within the specific candidate frames 30 extracted in step S400.
According to an embodiment of the present invention, the identification information input area 50 within the candidate region 40 is set to the average color of the candidate region 40, while the inserted user identification information 60 consists of a color that differs from that average color by the minimum value.
The duplicate video detection and tracking server 200 searches for the user's identification information 60 inserted in the candidate region 40 by exploiting the fact that the color value of the identification information input area 50 and that of the user's identification information 60 differ by the minimum value. That is, the step (S500) of searching for the user's identification information 60 may comprise detecting an area whose color differs by the minimum value and extracting the user's identification information 60 from the detected area.
When colors are expressed in the RGB system, the minimum value means a difference of 1 in a color value represented on the scale from 0 to 255. The duplicate video detection and tracking server 200 therefore searches the candidate region 40 under inspection for areas whose color values differ by the minimum value of 1. When such an area is detected, the server extracts and obtains the user's identification information 60 from it.
For example, suppose the color of the identification information input area 50 is the average color of the candidate region 40 and the green color value of that average color is 180. The red and blue color values of the user's identification information 60 are then identical to those of the identification information input area 50, while its green value may be 181 (or 179). The duplicate video detection and tracking server 200 searches for areas whose green color values differ by the minimum value of 1. Since the green values of the identification information input area 50 and the user's identification information 60 are 180 and 181 respectively, differing by the minimum value of 1, the area is detected by the duplicate video detection and tracking server 200.
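The detection of pixels that differ from the background by exactly the minimum value can be sketched as follows; treating the region as an array already cropped from the frame is an assumption of this sketch:

```python
import numpy as np

def detect_identification_mask(patch, avg_color):
    """Return a boolean mask over a candidate region marking pixels
    whose color differs from the region's average color by exactly 1
    in a single channel, i.e. the signature of the embedded
    identification text. `patch` is an H x W x 3 uint8 array."""
    diff = np.abs(patch.astype(np.int16) - np.asarray(avg_color, np.int16))
    return diff.sum(axis=-1) == 1
```

The mask isolates the glyphs of the identification information; character recognition over the masked pixels would then recover the string itself.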
As described above, when an area whose color values differ by the minimum value of 1 is detected, the duplicate video detection and tracking server 200 extracts and obtains the user's identification information 60 from the detected area.
In one embodiment, if no area whose color values differ by the minimum value of 1 is found, the search may proceed anew over other candidate frames, or may continue without stopping until such an area is found.
According to yet another embodiment of the present invention, a step of extracting the average color area 70 may be added after the step (S400) of extracting candidate frames 30 from the video under inspection. This applies when, during generation of the corrected video, the area into which the user's identification information 60 was inserted was narrowed to the average color area 70, consisting of the average color of the candidate region 40, within the candidate region 40. In this case, the duplicate video detection and tracking server 200 searches only the extracted average color area 70 rather than the entire candidate region 40 under inspection. Because the search area is narrower, the search becomes faster and more efficient.
Next, the case in which a plurality of areas whose color values differ by the minimum value are found during the video copy detection and tracking stage is described.
Referring to Fig. 10, the user identification ID 'USER' serving as identification information is split into 'US' as the first part 61 and 'ER' as the second part 62, with 'US' inserted into the first average color area 71 and 'ER' into the second average color area 72. The duplicate video detection and tracking server 200 extracts and obtains the identification information parts 'US' and 'ER' from the first average color area 71 and the second average color area 72, respectively, and combines them into 'USER' to obtain the complete identification information.
In one embodiment, the order in which the detected parts of the identification information are combined follows the same criterion used to select the first average color area 71 and the second average color area 72. As in the earlier example, if the largest average color area by area among the plurality of average color areas 70 is selected as the first average color area 71 and the next largest as the second average color area 72, the combination order follows accordingly: the identification information detected in the first average color area 71 is treated as the first part 61, the identification information detected in the second average color area 72 as the second part 62, and the second part 62 is appended after the first part 61 to complete the full identification information.
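The area-ordered reassembly of the detected parts can be sketched as follows; representing each detection as a (part, region) pair with (y, x, height, width) regions is an assumption of this sketch:

```python
def combine_identification_parts(detected):
    """Reassemble the full identification string from parts found in
    several average-color regions. `detected` is a list of
    (part, region) pairs; parts are concatenated in descending order
    of region area, matching the selection rule used at insertion."""
    ordered = sorted(detected, key=lambda item: item[1][2] * item[1][3],
                     reverse=True)
    return "".join(part for part, _ in ordered)
```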
본 발명의 실시예와 관련하여 설명된 방법 또는 알고리즘의 단계들은 하드웨어로 직접 구현되거나, 하드웨어에 의해 실행되는 소프트웨어 모듈로 구현되거나, 또는 이들의 결합에 의해 구현될 수 있다. 소프트웨어 모듈은 RAM(Random Access Memory), ROM(Read Only Memory), EPROM(Erasable Programmable ROM), EEPROM(Electrically Erasable Programmable ROM), 플래시 메모리(Flash Memory), 하드 디스크, 착탈형 디스크, CD-ROM, 또는 본 발명이 속하는 기술 분야에서 잘 알려진 임의의 형태의 컴퓨터 판독가능 기록매체에 상주할 수도 있다.The steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, in a software module executed by hardware, or by a combination thereof. Software modules may include random access memory (RAM), read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, hard disk, removable disk, CD-ROM, or It may reside in any form of computer readable recording medium well known in the art.
Embodiments of the present invention have been described above with reference to the accompanying drawings, but those of ordinary skill in the art to which the present invention pertains will understand that the invention may be practiced in other specific forms without changing its technical spirit or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (10)

  1. A method, performed by a computer, for generating a corrected video that enables tracking of a leaker, the method comprising:
    a candidate-frame extraction step of extracting one or more candidate frames from an original video, wherein each candidate frame includes a candidate area into which identification information of a user who requested playback of the original video can be inserted; and
    generating a corrected video in which the identification information is inserted into the candidate area within a specific candidate frame,
    wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors that differ by no more than a reference value.
  2. The method of claim 1, wherein generating the corrected video comprises:
    setting an identification information input area within the candidate area to the average color of the candidate area; and
    inserting the user's identification information into the identification information input area in a color that differs from the average color by a minimum value.
  3. The method of claim 2, further comprising:
    extracting, within the candidate area, an average color area that is composed of the average color of the candidate area and into which the user's identification information can be inserted,
    wherein the identification information input area is located in the average color area.
  4. The method of claim 1, wherein generating the corrected video comprises:
    extracting, within the candidate area, an average color area that is composed of the average color of the candidate area and into which the user's identification information can be inserted; and
    inserting the user's identification information into the average color area,
    wherein the user's identification information is composed of a color that differs from the average color by a minimum value.
  5. The method of claim 4, further comprising:
    when there is a plurality of average color areas, extracting a first average color area and a second average color area according to a specific rule;
    separating the user's identification information into a first part and a second part; and
    inserting the first part and the second part into the first average color area and the second average color area, respectively.
  6. A method, performed by a computer, for detecting and tracking a video copy, the method comprising:
    a candidate-frame extraction step of extracting one or more candidate frames from a detection-target video, wherein each candidate frame includes a candidate area into which identification information of a user who requested playback of the original video can be inserted; and
    searching the candidate area within a specific candidate frame for the identification information,
    wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors that differ by no more than a reference value.
  7. The method of claim 6, wherein searching for the identification information comprises:
    searching the candidate area for a region whose color differs by a minimum value,
    wherein, when a region whose color differs by the minimum value is detected, the identification information is extracted from that region.
  8. The method of claim 7, further comprising, when a plurality of regions whose color differs by the minimum value is detected:
    extracting identification information from each of the plurality of regions; and
    combining the extracted pieces of identification information.
  9. A system for generating a corrected video that enables tracking of a leaker, the system comprising:
    a candidate-frame extraction unit configured to extract one or more candidate frames from an original video, wherein each candidate frame includes a candidate area into which identification information of a user who requested playback of the original video can be inserted; and
    a generation unit configured to generate a corrected video in which the identification information is inserted into the candidate area within a specific candidate frame,
    wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors that differ by no more than a reference value.
  10. A system for detecting and tracking a video copy, the system comprising:
    a candidate-frame extraction unit configured to extract one or more candidate frames from a detection-target video, wherein each candidate frame includes a candidate area into which identification information of a user who requested playback of the original video can be inserted; and
    a search unit configured to search the candidate area within a specific candidate frame for the identification information,
    wherein the candidate area is a space of at least a specific extent within a frame and is composed of colors that differ by no more than a reference value.
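The claimed insert/detect pipeline can be sketched end to end as below. This is a minimal illustration under stated assumptions: the value of `DELTA`, the per-pixel bit encoding, the use of the modal color as the area's base color, and all function names are hypothetical choices made for this sketch, not the patent's exact scheme.

```python
# Hypothetical sketch of claims 1, 2/4 and 7: find a near-uniform candidate
# area, insert identification bits by shifting the average color by a minimum
# value, then detect those minimally shifted pixels to recover the bits.
import numpy as np

DELTA = 1  # smallest representable color difference in 8-bit channels (assumed)

def find_candidate_area(frame: np.ndarray, tol: int = 4):
    """Return a boolean mask of pixels within `tol` of the frame's modal color
    (a space composed of colors within a reference-value difference)."""
    colors, counts = np.unique(frame.reshape(-1, frame.shape[-1]),
                               axis=0, return_counts=True)
    base = colors[counts.argmax()].astype(int)
    mask = np.all(np.abs(frame.astype(int) - base) <= tol, axis=-1)
    return mask, base

def insert_bits(frame: np.ndarray, mask: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write identification bits into candidate-area pixels: shift the average
    color by DELTA where the bit is 1, keep the average color where it is 0."""
    out = frame.copy()
    ys, xs = np.nonzero(mask)
    avg = frame[mask].mean(axis=0).astype(np.uint8)
    for i, bit in enumerate(bits):
        out[ys[i], xs[i]] = avg + DELTA if bit else avg
    return out

def detect_bits(frame: np.ndarray, mask: np.ndarray, n: int) -> list[int]:
    """Recover bits by checking which candidate-area pixels differ from the
    dominant (average) color by the minimum value."""
    ys, xs = np.nonzero(mask)
    avg_r = int(np.bincount(frame[mask][:, 0]).argmax())
    return [1 if int(frame[ys[i], xs[i]][0]) != avg_r else 0 for i in range(n)]

frame = np.full((32, 32, 3), 200, dtype=np.uint8)   # a flat background region
mask, _ = find_candidate_area(frame)
bits = [1, 0, 1, 1, 0, 0, 1, 0]                     # an encoded ID fragment
marked = insert_bits(frame, mask, bits)
assert detect_bits(marked, mask, len(bits)) == bits
```

Because the shift is only `DELTA`, the inserted information is visually negligible in the near-uniform area, yet a detector that knows where to look can recover it exactly.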
PCT/KR2019/004964 2018-04-24 2019-04-24 Method and system for detecting and tracking video copy WO2019209027A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20180047586 2018-04-24
KR10-2018-0047586 2018-04-24
KR10-2019-0047761 2019-04-24
KR1020190047761A KR102227370B1 (en) 2018-04-24 2019-04-24 Method and system for detecting and tracking video piracy

Publications (1)

Publication Number Publication Date
WO2019209027A1 true WO2019209027A1 (en) 2019-10-31

Family

ID=68294167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/004964 WO2019209027A1 (en) 2018-04-24 2019-04-24 Method and system for detecting and tracking video copy

Country Status (1)

Country Link
WO (1) WO2019209027A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090080689A1 (en) * 2005-12-05 2009-03-26 Jian Zhao Watermarking Encoded Content
KR20090122606A (en) * 2008-05-26 2009-12-01 김상귀 Copyright protection and infringement measures on the Internet
US20120163654A1 (en) * 2010-12-22 2012-06-28 Electronics And Telecommunications Research Institute Method and system for tracking illegal distributor and preventing illegal content distribution
US20120281871A1 (en) * 2000-02-14 2012-11-08 Reed Alastair M Color image or video processing
KR101439475B1 (en) * 2013-04-03 2014-09-17 주식회사 마인미디어 Apparatus and method for detecting and searching illegal copies of moving pictures


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791840

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19791840

Country of ref document: EP

Kind code of ref document: A1