
AU2008255189A1 - Method of detecting artefacts in video data - Google Patents

Method of detecting artefacts in video data

Info

Publication number
AU2008255189A1
Authority
AU
Australia
Prior art keywords
video
frame
video field
artefact
fields
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008255189A
Inventor
Andrew James Dorrell
Nagita Mehrseresht
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2008255189A priority Critical patent/AU2008255189A1/en
Publication of AU2008255189A1 publication Critical patent/AU2008255189A1/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117 Conversion of standards involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012 Conversion between an interlaced and a progressive signal
    • H04N7/0112 Conversion of standards, one of the standards corresponding to a cinematograph film standard
    • H04N7/0115 Conversion of standards, one of the standards corresponding to a cinematograph film standard, with details on the detection of a particular field or frame pattern in the incoming video signal, e.g. 3:2 pull-down pattern
    • H04N7/0135 Conversion of standards involving interpolation processes
    • H04N7/0147 Conversion of standards involving interpolation processes, the interpolation using an indication of film mode or an indication of a specific pattern, e.g. 3:2 pull-down pattern

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Description

S&F Ref: 864516

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Andrew James Dorrell, Nagita Mehrseresht
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Method of detecting artefacts in video data

The following statement is a full description of this invention, including the best method of performing it known to me/us:

METHOD OF DETECTING ARTEFACTS IN VIDEO DATA

FIELD OF INVENTION

The current invention relates to the field of video data processing and, in particular, to a method and apparatus for detecting video fields containing feathering artefacts. The present invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for detecting video fields containing feathering artefacts.

DESCRIPTION OF BACKGROUND ART

Video data is acquired (and/or encoded) and transmitted in one of two formats, referred to as "progressive" and "interlaced". A video data sequence captured on film or generated for computer display typically uses the progressive format, whereas television is typically interlaced.

In progressive video, a sequence of frames of video data is displayed on a display screen at a rate specified in Frames Per Second (fps). "Np" is a commonly used abbreviation for "N frames per second progressive video". For example, 24p means "24 frames per second progressive video".

In interlaced video, odd and even rows in a frame of video data displayed on the display screen are updated separately. A frame of interlaced video data typically includes two "video fields". A first video field consists of all of the odd rows of video data of the frame of video data.
The second video field consists of all of the even rows of video data of the frame of video data. Only one video field is updated on each new frame (or screen refresh) on an "interlaced display", alternating between even and odd. "Ni" is a commonly used abbreviation for "N fields per second interlaced video", such as 60i for "60 fields per second interlaced video". For television, it is typical that the odd and even rows of video data are acquired at different times.

The two video fields are referred to as "top" and "bottom". The top video field is displayed so that the first row of video data of the top video field is displayed in the top row of the display screen. The bottom video field is displayed so that the last row of video data of the bottom video field is displayed in the bottom row of the display screen. Displaying the top and bottom video fields in this manner avoids any confusion about how the rows of video data are actually numbered.

For digital encoding purposes, top and bottom video field pairs are "packed" into a frame, meaning that the odd rows of the frame come from the bottom video field and the even rows come from the top video field.

When film content, which is typically acquired as 24p, is converted for television to a standard format such as NTSC (National Television System Committee), which comprises sixty (60) fields per second, a conversion process is required. This conversion process, known as "telecine conversion" or "3:2 pulldown", involves scanning each film frame to produce a digital image and separating the odd and even scan lines for use as video fields. To effect the conversion from twenty-four (24) frames to sixty (60) video fields it is necessary to repeat some video fields.
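The field-packing convention described above (even frame rows from the top field, odd rows from the bottom field, counting rows from zero) can be sketched as follows. This is an illustrative toy model, not code from the specification; the function name is invented.

```python
def pack_fields(top, bottom):
    """Pack a top and a bottom video field into one frame.

    Counting frame rows from 0, the top field supplies the even
    frame rows and the bottom field supplies the odd frame rows.
    """
    assert len(top) == len(bottom)
    frame = []
    for t_row, b_row in zip(top, bottom):
        frame.append(t_row)   # even frame row, from the top field
        frame.append(b_row)   # odd frame row, from the bottom field
    return frame

# A 4-row frame packed from two 2-row fields:
print(pack_fields(["T0", "T1"], ["B0", "B1"]))
# ['T0', 'B0', 'T1', 'B1']
```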
Similarly, to convert 24p content for the Phase Alternating Line (PAL) and Sequential Color with Memory (SECAM) interlaced television standards, which comprise fifty (50) video fields per second, the 24p film content is sped up by about 4% to 25p and a conversion process known as "2:2 pulldown" is applied. This conversion involves scanning each film frame of video data to produce a digital image and separating the odd and even rows of the frame for use as video fields. The 2:2 pulldown may also be used to convert 30p (30 frames per second) progressive video data to sixty (60) video fields per second in accordance with the NTSC standard.

Referring to Fig. 1, 24p film frames 101, 102 and 103 are converted into video fields (e.g., 104, 105 and 106) by extracting the odd and even rows of video data from the original progressive frames of video data 101, 102 and 103. To effect a frame rate conversion from twenty-four (24) frames per second to sixty (60) fields per second, the video fields (e.g., 104, 105, 106, 107 and 108) are drawn from successive film frames in a repeating 3:2 pattern known as a "pull-down pattern", as depicted in Fig. 1. As also shown in Fig. 1, three video fields 104, 105 and 106 are drawn from a first film frame 101, two video fields 107 and 108 from the next film frame 102, three video fields 109, 110 and 111 from the next film frame 103, and so on. This leads to a pattern where the first video field (e.g., 104) and the third video field (e.g., 106) of each group of five (5) fields (e.g., 104 to 108) are identical. Again, for encoding purposes the video fields are packed into encoding frames (e.g., 112). Due to the repetition of some video fields, some encoding frames (e.g., 113) contain data from multiple film frames (e.g., 101, 102).
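The 3:2 pull-down of Fig. 1 can be sketched with a toy model in which frames are just labels. The function name and the global top/bottom alternation are illustrative assumptions, not taken from the specification.

```python
def pulldown_32(frames):
    """Expand progressive frames into a 3:2 field sequence.

    Alternate frames contribute three and two fields respectively,
    so four 24p frames become ten fields (24 fps -> 60 fields/s).
    Field parity (top/bottom) alternates globally, starting with top.
    """
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2
        for _ in range(repeats):
            parity = "top" if len(fields) % 2 == 0 else "bottom"
            fields.append((frame, parity))
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(len(fields))  # 10
# The first and third field of each group of five are identical,
# which is the repetition a pull-down detector looks for:
print(fields[0] == fields[2])  # True
```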
If the encoding frames (e.g., 112) are displayed directly on progressive display equipment, then moving objects will exhibit a visible artefact referred to variously as combing, sawtooth, feathering or motion artefact.

Other pull-down patterns include "3:2:3:2:2", "2:2:2:4" and "2:3:3:2". Animation may even use pull-down patterns such as 5:5, 6:4 or 8:7.

The "visual quality" (or "quality appearance") of telecine-converted video data can be improved during display (or playback) on progressive display equipment if the original progressive frames of video data are reconstructed from the video signal, in an "inverse telecine" process, and displayed. Referring again to Fig. 1, the video fields (e.g., 104, 105 and 106) are extracted from the encoded frame (e.g., 117) and recombined to reproduce the original 24p frames of video data 114, 115 and 116. The reconstructed 24p frames 114, 115 and 116 may then be displayed using a 3:2 repeat pattern. This method of recombining top and bottom video fields is often referred to as "weaving". Similarly, a frame of video data in which the odd and even rows come from two different fields is referred to as a weave. Accordingly, a weave is a combination of two different video fields.

Inverse telecine processing results in a higher quality progressive display than vertically interpolating the video fields, as the original vertical resolution is fully recovered. Inverse telecine processing also prevents the appearance of any wobble that results from the original film frames not being vertically anti-aliased prior to extraction of the video fields.

Inverse telecine processing may be performed based on metadata included with the encoded video data stream. For a variety of reasons, however, such metadata is often unreliable, and most high quality video playback equipment performs some analysis of the video fields in order to infer that the video fields were generated by a telecine process.
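Why weaving the wrong pair of fields produces feathering can be seen in a minimal sketch: fields from the same source frame interleave cleanly, while fields straddling motion alternate row by row. The function name and pixel values are illustrative only.

```python
def weave(field_a, field_b):
    """Interleave two fields row by row into one frame."""
    frame = []
    for a, b in zip(field_a, field_b):
        frame += [a, b]
    return frame

# Fields taken from the same source frame weave cleanly:
print(weave([10, 10], [10, 10]))  # [10, 10, 10, 10]

# Fields from two frames of a moving scene alternate row by row,
# which is the "combing"/"feathering" artefact described above:
print(weave([10, 10], [90, 90]))  # [10, 90, 10, 90]
```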
Such analysis is often referred to as "film content detection", "pulldown detection" or "24p detection". Important considerations in inverse telecine processing are to minimise latency and buffering and to improve accuracy, generality and robustness to noise.

One conventional pull-down detection method detects the regular repetition of video fields. For example, the 3:2 pull-down pattern can be detected by observing regular repetition of one video field in every five (5) video fields. If film content detection is based only on the detection of a repeated video field, then the detection process may require many frames of output to achieve a positive detection result. For example, the repeated-field detection method for detecting the 3:2 pull-down pattern can result in a delay of up to fifteen (15) frames. During such a delay, reduced quality frames are output.

Another conventional pull-down detection method detects the presence or absence of motion between video fields in an input video data sequence. Such a method generally has less latency in pull-down detection.

Detecting motion between video fields reliably is difficult due to the naturally varying degrees of contrast and motion present in a video data sequence. Compression and other types of noise may also result in differences between the top and bottom video fields of the same film frame. One known measure of motion is based on a sum of absolute differences in pixel intensity. Large differences in such a motion measure may result from motion and/or complex scene structure. Scenes containing low contrast or fades often result in low differences, which may be attributed to noise or to true motion in low-contrast frames or objects. Setting a detection threshold high may prevent noise being confused with motion. However, setting a detection threshold high may result in poor detection of real scene motion during low-motion or low-contrast scenes.
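The sum-of-absolute-differences measure mentioned above can be sketched as follows, operating on co-sited rows of two same-parity fields. The function name and the sample values are illustrative; a real detector would also normalise and threshold this score.

```python
def sad(field_a, field_b):
    """Sum of absolute differences in pixel intensity between two
    fields of the same parity (row-by-row, pixel-by-pixel)."""
    return sum(abs(a - b)
               for row_a, row_b in zip(field_a, field_b)
               for a, b in zip(row_a, row_b))

still  = sad([[10, 10]], [[11, 9]])    # small: noise-level difference
moving = sad([[10, 10]], [[90, 200]])  # large: motion or scene detail
print(still, moving)  # 2 270
```

A single fixed threshold on this score embodies exactly the trade-off described in the text: set high it rejects noise but misses low-contrast motion, set low it mistakes compression noise for motion.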
One method of detecting motion between video fields retains a history of motion levels detected over time and adapts the detection threshold in accordance with the motion level present in prior frames. A problem with such a method is that prior motion may not be a good indicator of motion for the current frame. This would be the case if the level of motion changes suddenly, as in a scene change or fade, or even if the contrast levels of the areas undergoing motion change. The performance of most known history-based methods of detecting motion between video fields will also be reduced if there are sudden changes in scene structure due to lighting changes. The dependence on historical data for setting detection thresholds means that there is a delay after a scene change before an optimal threshold is reached and reliable pull-down pattern detection can be achieved. Typically, the detection threshold changes only within a limited range, and predetermined minimum and maximum constraints are applied to the detection threshold.

SUMMARY OF THE INVENTION

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.

According to one aspect there is provided an apparatus for generating a deinterlaced output video frame from a set of four input video fields, said apparatus comprising:

a field memory for storing a sequence of four consecutive video fields;

a pull-down detection module configured to process pixels from the temporally first three of the video fields;

an interpolating deinterlacing module configured to process pixels from the temporally second, third and fourth of the video fields; and

a selection module configured to select between pixels from the temporally second three video fields based on a previous weave direction result from said pull-down detection module to generate the deinterlaced output video frame.
According to another aspect there is provided a computer implemented method of generating a deinterlaced output video frame from a set of four input video fields, said method comprising the steps of:

storing a sequence of four consecutive video fields in a memory;

processing pixels from the temporally first three of the stored video fields;

processing pixels from the temporally second three of the stored video fields; and

selecting between the processed pixels from the temporally second three video fields based on a previous weave direction result to generate the deinterlaced output video frame.

According to still another aspect there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to generate a deinterlaced output video frame from a set of four input video fields, said program comprising:

code for storing a sequence of four consecutive video fields in a memory;

code for processing pixels from the temporally first three of the stored video fields;

code for processing pixels from the temporally second three of the stored video fields; and

code for selecting between the processed pixels from the temporally second three video fields based on a previous weave direction result to generate the deinterlaced output video frame.
According to still another aspect there is provided a method of detecting, from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefact, said method comprising the steps of:

determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field, and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;

determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field, and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field, respectively;

detecting which of the combinations of video fields contain feathering artefact, based on the first, second, third and fourth frame scores; and

outputting one or more of the combinations of video fields.
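One way the four frame scores of this aspect could feed a selection step is sketched below. This is a hedged illustration only: the thresholds, the names, and the exact decision rule are invented for the example and are not taken from the specification.

```python
def select_weave(hc_next, hc_prev, lc_next, lc_prev,
                 hc_thresh=100, lc_thresh=40):
    """Pick which adjacent field to weave with the reference field.

    hc_* are high-contrast feathering scores and lc_* are
    large-area low-contrast scores for the next/previous field
    combinations. All thresholds are illustrative placeholders.
    """
    def feathered(hc, lc):
        return hc > hc_thresh or lc > lc_thresh

    ok_next = not feathered(hc_next, lc_next)
    ok_prev = not feathered(hc_prev, lc_prev)
    if ok_next and (not ok_prev or hc_next <= hc_prev):
        return "weave-with-next"
    if ok_prev:
        return "weave-with-previous"
    return "no-clean-weave"   # fall back to interpolating deinterlacing

print(select_weave(hc_next=20, hc_prev=500, lc_next=5, lc_prev=10))
# weave-with-next
```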
According to still another aspect there is provided an apparatus for detecting, from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefacts, said apparatus comprising:

a first frame score determining module for determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field, and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;

a second frame score determining module for determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field, and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field, respectively; and

a detecting module for detecting which of the combinations of video fields contain feathering artefact, determined from the first, second, third and fourth frame scores, and for outputting one or more of the combinations of video fields.
According to still another aspect there is provided a computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to detect, from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefacts, said program comprising:

code for determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field, and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;

code for determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field, and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field, respectively;

code for detecting which of the combinations of video fields contain feathering artefact, determined from the first, second, third and fourth frame scores; and

code for outputting one or more of the combinations of video fields.

Other aspects of the invention are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described with reference to the following drawings, in which:

Fig. 1 shows the relationship between 24p film frames, 3:2 pull-down video fields and reconstructed original film frames;

Figs. 2A and 2B form a schematic block diagram of a general purpose computer system upon which the arrangements described can be practised;

Fig.
3 is a block diagram of a de-interlacing software module comprising a clean weave detection module;

Fig. 4 is a flow diagram showing a method of determining a weave direction;

Fig. 5 shows an example of blocks of data used for block-based feathering artefact measurement as used in the method of Fig. 4;

Fig. 6 shows a graph of a motion confidence degree function;

Fig. 7 is a flow diagram showing a method of determining bad blocks as used in the method of Fig. 4;

Fig. 8 is a flow diagram showing a method of detecting a bad weave, as executed in the method of Fig. 4;

Fig. 9 is a flow diagram showing a method of determining clean weaves as used in the large-area/low-contrast processing path of Fig. 4;

Fig. 10 shows a graph of a function used for capping motion and feathering artefact values; and

Fig. 11 shows a schematic block diagram of a de-interlacing module comprising a clean weave detection module.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have, for the purposes of this description, the same function(s) or operation(s), unless the contrary intention appears.

Methods of detecting feathering artefacts within video fields are described below with reference to Figs. 2A to 11. The described methods are sensitive to large-area low-contrast feathering artefacts and provide good robustness to noise. The methods may be used to reliably weave (or combine) video fields together to re-form an original progressive source frame of video data. The described methods independently accumulate "selective" and "sensitive" motion-normalised measurement values. The methods use the sensitive motion-normalised measurement values when no feathering artefact is detected using the selective measurement values.
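The distinction between the "selective" and "sensitive" accumulations described above can be sketched as follows. The function name, threshold value and tuple layout are illustrative assumptions; only the idea (a hard-thresholded count alongside a raw accumulation) comes from the text.

```python
def frame_scores(block_diffs, q_thresh=30):
    """Accumulate 'selective' and 'sensitive' feathering scores.

    The selective score counts only blocks whose feathering
    measure clears a threshold (a hard quantization), while the
    sensitive score accumulates every block's raw value, so that
    large-area low-contrast feathering still registers.
    """
    selective = sum(1 for d in block_diffs if d > q_thresh)
    sensitive = sum(block_diffs)
    return selective, sensitive

# Many low-contrast blocks are invisible to the selective score
# but still accumulate in the sensitive score:
print(frame_scores([5] * 100))    # (0, 500)
print(frame_scores([100, 2, 2]))  # (1, 104)
```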
As described below, the selective measurement highly quantizes block differences by using a threshold. The selective measurement uses relative comparison as well as fixed thresholds on accumulated measurement values.

The methods described herein may be implemented using a computer system 200, such as that shown collectively in Figs. 2A and 2B. Alternatively, the methods described may be implemented in the form of dedicated hardware as shown in Fig. 11.

As seen in Fig. 2A, the computer system 200 is formed by a computer module 201, input devices such as a keyboard 202, a mouse pointer device 203, a scanner 226, a camera 227 and a microphone 280, and output devices including a printer 215, a display device 214 and loudspeakers 217. An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221. The network 220 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 221 is a telephone line, the modem 216 may be a traditional "dial-up" modem. Alternatively, where the connection 221 is a high capacity (e.g., cable) connection, the modem 216 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 220.

The computer module 201 typically includes at least one processor unit 205, and a memory unit 206, for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The module 201 also includes a number of input/output (I/O) interfaces, including an audio-video interface 207 that couples to the video display 214, loudspeakers 217 and microphone 280, an I/O interface 213 for the keyboard 202, mouse 203, scanner 226, camera 227 and optionally a joystick (not illustrated), and an interface 208 for the external modem 216 and printer 215.
In some implementations, the modem 216 may be incorporated within the computer module 201, for example within the interface 208. The computer module 201 also has a local network interface 211 which, via a connection 223, permits coupling of the computer system 200 to a local computer network 222, known as a Local Area Network (LAN). As also illustrated, the local network 222 may couple to the wide network 220 via a connection 224, which would typically include a so-called "firewall" device or a device of similar functionality. The interface 211 may be formed by an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement.

The interfaces 208 and 213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM and floppy disks, for example, may then be used as appropriate sources of data to the system 200.

The components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204 and in a manner which results in a conventional mode of operation of the computer system 200 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or similar computer systems evolved therefrom.

The described methods may be implemented using the computer system 200, wherein the processes of Figs.
3 to 10, to be described, may be implemented as one or more software application programs 233 executable within the computer system 200. In particular, the steps of the described methods are effected by instructions 231 in the software that are carried out within the computer system 200. The software instructions 231 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 233 is typically stored in the HDD 210 or the memory 206. The software is loaded into the computer system 200 from a computer readable medium, including the storage devices described below, and is then executed by the computer system 200. Thus, for example, the software may be stored on an optically readable CD-ROM medium 225 that is read by the optical disk drive 212. A computer readable medium having such software or a computer program recorded on it is a computer program product. The use of the computer program product in the computer system 200 preferably effects an advantageous apparatus for implementing the described methods.
In some instances, the application programs 233 may be supplied to the user encoded on one or more CD-ROMs 225 and read via the corresponding drive 212, or alternatively may be read by the user from the networks 220 or 222. Still further, the software can also be loaded into the computer system 200 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 200 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 201. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 201 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets, including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214. Through manipulation of typically the keyboard 202 and the mouse 203, a user of the computer system 200 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 217 and user voice commands input via the microphone 280.

Fig. 2B is a detailed schematic block diagram of the processor 205 and a "memory" 234. The memory 234 represents a logical aggregation of all the memory modules (including the HDD 209 and semiconductor memory 206) that can be accessed by the computer module 201 in Fig. 2A.

When the computer module 201 is initially powered up, a power-on self-test (POST) program 250 executes. The POST program 250 is typically stored in a ROM 249 of the semiconductor memory 206. A hardware device such as the ROM 249 is sometimes referred to as firmware. The POST program 250 examines hardware within the computer module 201 to ensure proper functioning, and typically checks the processor 205, the memory (209, 206), and a basic input-output systems software (BIOS) module 251, also typically stored in the ROM 249, for correct operation. Once the POST program 250 has run successfully, the BIOS 251 activates the hard disk drive 210. Activation of the hard disk drive 210 causes a bootstrap loader program 252 that is resident on the hard disk drive 210 to execute via the processor 205. This loads an operating system 253 into the RAM memory 206, upon which the operating system 253 commences operation. The operating system 253 is a system level application, executable by the processor 205, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

The operating system 253 manages the memory (209, 206) in order to ensure that each process or application running on the computer module 201 has sufficient memory in which to execute without colliding with memory allocated to another process.
Furthermore, the different types of memory available in the system 200 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 234 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 200 and how such memory is used.

The processor 205 includes a number of functional modules including a control unit 239, an arithmetic logic unit (ALU) 240, and a local or internal memory 248, sometimes called a cache memory. The cache memory 248 typically includes a number of storage registers 244-246 in a register section. One or more internal busses 241 functionally interconnect these functional modules. The processor 205 typically also has one or more interfaces 242 for communicating with external devices via the system bus 204, using a connection 218.

The application program 233 includes a sequence of instructions 231 that may include conditional branch and loop instructions. The program 233 may also include data 232 which is used in execution of the program 233. The instructions 231 and the data 232 are stored in memory locations 228-230 and 235-237 respectively. Depending upon the relative size of the instructions 231 and the memory locations 228-230, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 230. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 228-229.

In general, the processor 205 is given a set of instructions which are executed therein. The processor 205 then waits for a subsequent input, to which it reacts by executing another set of instructions.
Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 202, 203, data received from an external source across one of the networks 220, 222, data retrieved from one of the storage devices 206, 209, or data retrieved from a storage medium 225 inserted into the corresponding reader 212. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 234.

The described methods use input variables 254 that are stored in the memory 234 in corresponding memory locations 255-258. The described methods produce output variables 261 that are stored in the memory 234 in corresponding memory locations 262-265. Intermediate variables may be stored in memory locations 259, 260, 266 and 267.

The register section 244-246, the arithmetic logic unit (ALU) 240, and the control unit 239 of the processor 205 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 233. Each fetch, decode, and execute cycle comprises:

(a) a fetch operation, which fetches or reads an instruction 231 from a memory location 228;

(b) a decode operation in which the control unit 239 determines which instruction has been fetched; and

(c) an execute operation in which the control unit 239 and/or the ALU 240 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 239 stores or writes a value to a memory location 232.

Each step or sub-process in the processes of Figs. 3 to 10 is associated with one or more segments of the program 233, and is performed by the register section 244-246, the ALU 240, and the control unit 239 in the processor 205 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 233.

The described methods may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of the methods. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.

A de-interlacing module 300 is shown in Fig. 3. The de-interlacing module 300 is implemented as software modules 330 and 340, resident on the hard disk drive 210 and/or the memory 206. The software modules 330 and 340 are executed by the processor 205. Input video fields 320 are read into a First-In First-Out (FIFO) buffer module 370 by the processor 205. The processor 205 also performs the step of outputting fields 363, 364. The buffer module 370 may be configured within the memory 206. The FIFO buffer module 370 provides access to multiple consecutive video fields 321, 322, 323 and 324.

The de-interlacing module 300 further comprises a general video de-interlacing module 330 and a pull-down detection module 340. Video fields 322, 323 and 324 stored in the FIFO buffer module 370 are processed by the general video de-interlacing module 330, and video fields 321, 322 and 323 fetched from the FIFO buffer are processed by the pull-down detection module 340. The general de-interlacing module 330 can produce an output frame 363 using one of several de-interlacing methods.
The general de-interlacing module 330 may either perform interpolation, or the general de-interlacing module 330 may produce an output frame by weaving together two of the three consecutive video fields 322, 323 and 324, based on the output from the pull-down detection module 340. The general video de-interlace module 330 may produce an output frame by performing interpolation using motion-adaptive (MA) or motion-compensated (MC) spatio-temporal interpolation methods. The general video de-interlace module 330 may alternatively perform a non-adaptive spatio-temporal interpolation or an intra-field interpolation.

The pull-down detection module 340 examines the weaves (or combinations) of adjacent pairs of video fields. A weave (or combination) of an adjacent pair of input video fields forming a frame of video data which does not contain (or is free from) feathering artefact is referred to below as a "clean weave", a "good weave" or a "progressive source frame". Likewise, the term "bad weave" refers to a weave (or combination) of an adjacent pair of input video fields forming a frame of video data containing feathering artefact.

The pull-down detection module 340 detects and tracks the history of clean weaves. A "weave direction" signal 380 identifying the weave direction is generated by the module 340 based on a comparison of a group of three consecutive video fields 321, 322 and 323. The weave direction is "forward weave" when only the weave (or combination) of video field 321 and video field 322 is a clean weave. The weave direction is "backward weave" when only the weave of video field 322 and video field 323 is a clean weave. The weave direction is "both" when the weave of video field 321 and video field 322 (i.e., the forward weave) and the weave of video field 322 and video field 323 (i.e., the backward weave) are both clean weaves.
Finally, the weave direction is "neither" when a clean weave is not detected for the video fields 321, 322 and 323. For example, the weave direction is expected to be "both" when there is no motion in the video fields 321, 322 and 323 (i.e., the video fields 320 represent a purely static scene). For interlaced video data, the weave direction is expected to be "neither" whenever there is motion or change between different video fields.

The detection of weave direction is performed in a stateless manner by the detection module 342 based on an analysis of the input video fields 321, 322 and 323. Stability of the weave direction detection is improved by incorporating state information using a state machine module 344. When a weave direction cannot be determined with sufficient confidence by the detection module 342, the processor 205 consults the state machine module 344. The state machine module 344 tracks patterns in previous weave direction determinations. If one of a number of predetermined standard patterns has been detected and tracked up to a current frame, then the state machine 344, executed by the processor 205, makes a prediction of the weave direction based on the detected pattern or a unique part thereof. In an alternative embodiment, there is no state machine and the output from the clean weave detection module 342 is passed directly to the general video de-interlace module 330.

While the pull-down detection module 340 is processing fields 321, 322 and 323, the general video de-interlace module 330 is simultaneously processing the fields 322, 323 and 324 that have already been processed by the pull-down detection module 340. Since the weave direction signal is effectively delayed by one field, pull-down detection and output frame generation may be pipelined.
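The weave-direction classification described above, and the output selection given in Table 1 below, can be sketched as follows. This is an illustrative sketch only: the function names, the list-of-rows field representation, and the row-parity handling in `weave` are assumptions made for clarity and are not part of the specification.

```python
def classify_weave_direction(forward_clean, backward_clean):
    # Map the two clean-weave test results for a group of three
    # fields to one of the four weave directions described above.
    if forward_clean and backward_clean:
        return "both"        # e.g. a purely static scene
    if forward_clean:
        return "forward"
    if backward_clean:
        return "backward"
    return "neither"         # expected for true interlaced motion

def weave(field_a, field_b):
    # Interleave two fields row by row into a full frame; which
    # field supplies the top row depends on field parity, which
    # is simplified away here.
    frame = []
    for row_a, row_b in zip(field_a, field_b):
        frame.append(row_a)
        frame.append(row_b)
    return frame

def mean_field(field_a, field_b):
    # Per-pixel mean of two same-parity fields (e.g. 322 and 324).
    return [[(a + b) // 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(field_a, field_b)]

def output_frame(direction, f322, f323, f324, interpolate):
    # Output selection of Table 1; `interpolate` stands in for the
    # interpolative de-interlacer used when no clean weave exists.
    if direction == "forward":
        return weave(f322, f323)
    if direction == "backward":
        return weave(f323, f324)
    if direction == "both":
        return weave(f323, mean_field(f322, f324))
    return interpolate(f322, f323, f324)
```
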
The frame 363 output by the general video de-interlace module 330 depends on the weave direction signal 380 according to Table 1 below:

Table 1

weave direction   Select     Output of general video de-interlace module 330
Forward           Active     The weave of video fields 322 and 323
Backward          Active     The weave of video fields 323 and 324
Both              Active     The weave of video field 323 with the mean of video fields 322 and 324
Neither           Inactive   Interpolative de-interlacing based on fields 322, 323 and 324

In an alternative embodiment, a de-interlacing module 1100, as shown in Fig. 11, may be implemented as dedicated video processing hardware comprised of circuits 1130, 1140, 1150, 1142, 1144, 1155 and 1170. The circuits 1130, 1140, 1150, 1142, 1144, 1155 and 1170 may be implemented as separate hardware modules or combined together into one or more integrated circuits.

Input video fields, supplied in the form of a video input signal 1120, are passed into a First-In First-Out (FIFO) buffer module 1170. The FIFO buffer module 1170 delays the input video signal 1120 using field delay elements 1110, 1111 and 1112 and outputs multiple delayed video field signals via the video field outputs 1121, 1122, 1123 and 1124.

The de-interlacing module 1100 further comprises an interpolative video de-interlacing module 1130, a weave de-interlacing module 1150 and a pull-down detection module 1140. The video field signals 1122, 1123 and 1124 output from the FIFO buffer module 1170 are provided as input to the interpolative video de-interlacing module 1130 and simultaneously to the weave de-interlacing module 1150. The video field signals 1121, 1122 and 1123 output from the FIFO buffer are supplied as input to the pull-down detection module 1140.

The interpolative de-interlacing module 1130 performs motion-adaptive (MA) or motion-compensated (MC) spatio-temporal interpolation.
Alternatively, the general video de-interlace module 1130 may perform a non-adaptive spatio-temporal interpolation or intra-field interpolation. The weave de-interlacing module 1150 performs de-interlacing by combining the fields represented by the field signals 1122, 1123 and 1124 using a "weave" operation. The modules 1130, 1150 and 1155 together perform the same function as the general video de-interlacer software module 330.

The pull-down detection module 1140 examines the weaves (or combinations) of adjacent pairs of video fields, detects clean (i.e., good) and bad weaves, and produces a "weave direction" signal 1180 and a "select" (or output selection) signal 1190 based on the detected clean weaves. The select signal 1190 becomes active whenever a clean weave is detected and is related to the weave direction signal 1180 according to Table 1. Both the weave direction signal 1180 and the select signal 1190 are updated at the end of each field. This introduces an effective one field delay between the inputs to the pull-down detection module 1140 and the outputs 1180 and 1190 of the pull-down detection module 1140. The pull-down detection module 1140 performs the same function as the software pull-down detection module 340.

The weave direction is "forward weave" when only the weave (or combination) of video field 1121 and video field 1122 is a clean weave. The weave direction is "backward weave" when only the weave of video field 1122 and video field 1123 is a clean weave. The weave direction is "both" when the weave of video field 1121 and video field 1122 (i.e., the forward weave) and the weave of video field 1122 and video field 1123 (i.e., the backward weave) are both clean weaves. Finally, the weave direction is "neither" when a clean weave is not detected for the video fields 1121, 1122 and 1123.
The detection of weave direction is performed in a stateless manner by the clean weave detection module 1142 based on an analysis of the input video field signals 1121, 1122 and 1123. Stability of the weave direction detection is improved by incorporating state information using a state machine module 1144. When a weave direction cannot be determined with sufficient confidence by the detection module 1142, the state machine module 1144 will output a weave direction based on its internal state. The state machine module 1144 tracks patterns in previous weave direction determinations. If one of a number of predetermined standard patterns has been detected and tracked up to a current frame, then the state machine module 1144 makes a prediction of the weave direction based on the detected pattern or a unique part thereof.

The multiplexer 1155 selects the output from either the interpolative de-interlacer 1130 or the weave de-interlacer 1150 based on the select signal 1190. When at least one clean weave is detected by the pull-down detection module 1140, the select signal 1190 becomes active and the multiplexer 1155 selects the output from the weave de-interlacer 1150 to produce an output frame signal 1160. When the select signal 1190 is inactive, the multiplexer 1155 selects the output signal generated by the interpolative de-interlacer 1130 to produce the output video signal 1160.

There is a single field delay between the group of three video fields 1121, 1122 and 1123 input to the pull-down detection module 1140 and the group of three video field signals 1122, 1123 and 1124 input to the video de-interlacer modules 1130 and 1150.
The single field delay matches the delay between the inputs and outputs of the pull-down detection module 1140, allowing the de-interlacing modules 1130 and 1150 to process the delayed signals 1122, 1123 and 1124 at the same time as the next three fields 1121, 1122 and 1123 are being processed by the pull-down detection module 1140.

Fig. 4 is a flow diagram showing a method 400 of determining a weave direction. The method 400 may be implemented in the clean weave detection software module 342 of Fig. 3. The method 400 is implemented as software, resident on the hard disk drive 210 and being controlled in its execution by the processor 205. Alternatively, in another embodiment, the method 400 may be implemented as the dedicated hardware module 1142 of Fig. 11.

The clean weave detection method 400 processes three consecutive video fields in a series of video fields. The method 400 will be described by way of example with reference to the video fields 321, 322 and 323 of Fig. 3.

The method 400 is coarsely divided into a block processing stage 441 and a frame processing stage 442. In the block processing stage 441, all sets of corresponding blocks of video data from the input video fields 321, 322 and 323 are processed. Each set of three input blocks (one from each input video field 321, 322 and 323) is processed independently in the block processing stage 441. The blocks are processed in a "block raster" scan order. In the frame processing stage 442, information gathered from the block processing stage 441 is processed to determine the weave direction for the current group of three video fields 321, 322 and 323.
The middle field (i.e., field 322 of Fig. 3) of the input video fields 321, 322 and 323 is referred to below as the "reference video field", and the video fields temporally preceding (i.e., field 323 of Fig. 3) and following (i.e., field 321 of Fig. 3) the reference video field 322 are referred to below as the "preceding video field" and the "subsequent video field", respectively.

The method 400 begins at step 410, where at the start of each new output frame 363 the processor 205 initializes one or more frame scores to zero. The frame scores may be configured as variables stored within the memory 206, each representing a particular frame score determined for the new output frame 363. The frame scores comprise two sets of numbers as described in Table 2 below:

Table 2

Frame score set              Variable   Description of Frame Score
high contrast frame scores   F1,1       Forward high contrast artefact
                             F1,2       Backward high contrast artefact
                             M1         Motion spread
large area frame scores      F2,1       Forward large area artefact
                             F2,2       Backward large area artefact

Separate frame scores are accumulated during subsequent block processing for high contrast and large area artefact scores in each of the forward and backward weaves of the input video fields 321, 322 and 323. The accumulated frame scores may be stored in the memory 206. In addition, a motion spread score is determined to indicate the number of pixels at which subject motion is observed in the input video fields 321, 322 and 323. A motion severity score is also determined to indicate the degree of intensity variation across the input video fields 321, 322 and 323 that is attributable to subject motion.

At the next step 420, the processor 205 fetches a block of video data from each of the three input fields 321, 322 and 323 and stores the blocks in the memory 206. The blocks of video data stored at step 420 will be referred to below as the "current" blocks of video data. Then at step 430, the blocks of video data stored at step 420 are processed to determine a first and second feathering artefact value, respectively, occurring for the forward weave (i.e., the weave of video field 321 and video field 322) and the backward weave (i.e., the weave of video field 322 and video field 323). The first and second feathering artefact values are stored in the memory 206.

At the next step 435, the processor 205 determines a single block motion score for the current blocks of video data fetched at step 420. The block motion score is independent of the weave direction. Feathering artefact is indicated by the presence of both motion and alternating brightness between adjacent rows of weaved pixel data. Measurement of a feathering artefact score indicative of pixel rows having alternating intensity, and determination of a motion score indicative of scene change within a block of video data, will be described in detail below with reference to Fig. 5. The block motion score determined at step 435 is stored in the memory 206.
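The frame scores of Table 2 above can be represented as a simple set of accumulators, zeroed at step 410 for each new output frame. The container and its key names below are illustrative plain-text renderings of the subscripted variables, not part of the specification.

```python
def new_frame_scores():
    # Per-frame accumulators of Table 2, reset at the start of each
    # new output frame (step 410).
    return {
        "F1_1": 0,  # forward high contrast artefact score
        "F1_2": 0,  # backward high contrast artefact score
        "M1":   0,  # motion spread score
        "F2_1": 0,  # forward large area artefact score
        "F2_2": 0,  # backward large area artefact score
    }
```
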
At step 450, the processor 205 accesses the first and second feathering artefact values determined in step 430, from the memory 206, along with the motion score determined at step 435. The processing performed at step 450 determines whether the forward or backward weaves of the current blocks of video data, fetched at step 420, have high-contrast feathering artefact, and updates the high contrast frame scores accordingly. A method 700 of updating the high contrast frame score values, as executed at step 450, will be described in detail below with reference to Fig. 7.

Subsequently, at step 460, the processor 205 performs the step of updating third and fourth (large-area) frame-score values for the forward and backward weaves, respectively. Step 460 will be described in detail below. The third and fourth (large-area) frame-score values may be stored in the memory 206.

At the next step 470, if the processor 205 determines that the current blocks of video data, fetched in step 420, are the last blocks in the input video fields 321, 322 and 323, then the method 400 proceeds to step 480. Otherwise, the method 400 returns to step 420. Steps 420 to 460 are repeated for each set of input pixel blocks of the input video fields 321, 322 and 323 until all blocks of the input video fields 321, 322 and 323 are processed.

At step 480, the high contrast frame scores accumulated in steps 420 to 460 are processed by the processor 205 to detect whether the frames generated by the forward and backward weaves contain unacceptable levels of feathering artefact. The generated frames having unacceptable levels of feathering artefact are determined to be bad weaves as defined above. At the next step 490, if the processor 205 determines that no bad weaves are detected, using the high contrast frame scores, then the method 400 proceeds to step 495. Otherwise, the method 400 concludes.
At step 495, the processor 205 processes the accumulated large area frame scores to determine the weave direction for the current group of three video fields 321, 322 and 323. Also at step 495, the processor 205 updates previous determinations about the existence of unacceptable levels of feathering artefact in the forward and backward weaves respectively. For example, the processor 205 may update one or more flags configured within the memory 206 indicating the existence of unacceptable levels of feathering artefact in the forward and backward weaves. Steps 480 and 495 will be described in detail below with reference to Fig. 8 and Fig. 9.

At step 495, the processor 205 detects low-contrast feathering artefacts if the artefacts appear over large spatial regions. The third and fourth (large area) frame scores and the determination of weave direction at step 495 are sensitive to such low-contrast feathering artefacts. To achieve good robustness to noise, step 495 requires the low-contrast feathering artefacts to affect a large spatial region in a combined frame generated by either the forward or backward weave. By contrast, the determination of whether frames generated by the forward and backward weaves contain unacceptable levels of feathering artefact, at step 480, is sensitive to small regions of high contrast feathering artefact. Large regions of low contrast feathering artefacts appear when a moving object has low contrast, and particularly during low speed fades between different video streams (or sequences). Accordingly, the clean weave detection method 400 of Fig. 4 considers large area/low-contrast feathering artefacts only when no high-contrast feathering artefact is reliably detected.

The determination of a feathering artefact value, at step 430, will now be described in detail with reference to Fig. 5 and Fig. 3. Fig. 5 shows how pixel data in blocks of pixels from adjacent fields is logically processed during the determination of the feathering artefact. A block 501 of pixel data from the reference video field 322 input to the clean weave detection module 342 is shown in Fig. 5. A block 502 of pixel data from the preceding video field 323 or the subsequent video field 321 input to the clean weave detection module 342 is also shown in Fig. 5. As seen in Fig. 5, filled circles represent pixels that are present in the video fields (e.g., 322), whereas empty circles represent spatial placeholders showing the location of pixels which are not present in the particular field.

Blocks 510 and 520 are representative of the blocks of data fetched at step 420 (i.e., the current blocks) of the method 400. The blocks 510 and 520 are selected such that a center row of the center block 510 corresponds to a missing row. Block 530 represents the result of a weave of the two blocks 510 and 520. The weaved block 530 is referred to as a candidate block. The candidate block 530 is constructed such that the first 531, third 533 and fifth 535 rows of the block 530 are from the block 520 corresponding to the preceding (e.g., 323) or subsequent (e.g., 321) video field. The second 532 and fourth 534 rows of the block 530 are from the block 510 corresponding to the reference video field (e.g., 322). If the candidate block 530 overlaps the boundary of a frame formed by the input fields 321, 322 and 323, the input fields are extended by repeating boundary samples in order to complete the candidate block 530.

For the purpose of calculating a feathering artefact value, the candidate block 530 is logically partitioned into three overlapping vertically adjacent sub-blocks 540, 550 and 560.
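The construction of the candidate block 530 from the two current blocks can be sketched as below. The list-of-rows representation and the function name are illustrative assumptions; boundary extension by sample repetition is omitted.

```python
def candidate_block(ref_rows, other_rows):
    # Weave a 2-row block from the reference field (rows 532 and 534)
    # with a 3-row block from the preceding or subsequent field
    # (rows 531, 533 and 535) to form the 5-row candidate block 530.
    rows = [None] * 5
    rows[0], rows[2], rows[4] = other_rows   # rows 531, 533, 535
    rows[1], rows[3] = ref_rows              # rows 532, 534
    return rows
```
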
For each of the sub-blocks 540, 550 and 560, a score Sn is determined according to Equation (1) below:

    Sn = d12 + d23 - d13    (1)

where d12 represents the maximum difference between spatially corresponding pixels in the first 531 and second 532 rows, d23 represents the maximum difference between spatially corresponding pixels in the second 532 and third 533 rows, and d13 represents the maximum difference between spatially corresponding pixels in the first 531 and third 533 rows. The raw feathering artefact value occurring within the 3x5 pixel candidate block 530 may be determined according to Equation (2) below:

    A = max(min(S1, S2), min(S2, S3))    (2)

where S1, S2 and S3 represent the scores according to Equation (1) for the sub-blocks 540, 550 and 560 respectively, min(x, y) is the minimum of the two numbers x and y, and max(x, y) is the maximum of the two numbers x and y. At step 430 of the method 400 of Fig. 4, the processor 205 applies Equation (2) to determine two raw feathering artefact values Af and Ab corresponding to the forward and backward weaves, respectively.

The determination of a motion score, as at step 435 of the method 400, is described with further reference to Fig. 5 and Fig. 3. The motion scores are determined using a sum of absolute difference (SAD) value between blocks of pixel data from the preceding video field 323 and the subsequent video field 321. Such blocks have the form depicted by the block 520. For each group of three input blocks (e.g., the three blocks stored at step 420) a single block motion score is determined.

The method 700 of updating the high contrast frame score values, as executed at step 450 of Fig. 4, will now be described in detail with reference to Fig. 6 and Fig. 7. The method 700 is implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205.
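Equations (1) and (2), together with the SAD-based block motion score of step 435, can be sketched as follows. The use of absolute differences for the "maximum difference" terms, and all function names, are assumptions made for illustration.

```python
def max_row_diff(row_a, row_b):
    # d_ij of Equation (1): largest difference between spatially
    # corresponding pixels in two rows (absolute value assumed)
    return max(abs(a - b) for a, b in zip(row_a, row_b))

def sub_block_score(r1, r2, r3):
    # Equation (1): S = d12 + d23 - d13 for one 3-row sub-block
    return (max_row_diff(r1, r2) + max_row_diff(r2, r3)
            - max_row_diff(r1, r3))

def raw_feathering(candidate):
    # Equation (2) over the 5-row candidate block 530: the three
    # overlapping sub-blocks 540, 550, 560 are rows 0-2, 1-3, 2-4
    s1 = sub_block_score(*candidate[0:3])
    s2 = sub_block_score(*candidate[1:4])
    s3 = sub_block_score(*candidate[2:5])
    return max(min(s1, s2), min(s2, s3))

def block_sad(block_a, block_b):
    # Block motion score for step 435: sum of absolute differences
    # between co-sited blocks of the preceding and subsequent fields
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))
```

An alternating-brightness block scores highly because the row-to-row differences d12 and d23 are large while the same-parity difference d13 stays small.
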
The method 700 utilises the raw feathering artefact values Af and Ab, as well as the block motion score for the current blocks of video data (i.e., the blocks of video data fetched at step 420).

The method 700 begins at step 703, where the processor 205 performs the step of determining, from the block motion score, a corresponding motion confidence degree MCf. The motion confidence degree MCf determined at step 703 is stored in the memory 206. The motion confidence degree MCf value is formed by subjecting the motion score to a nonlinear function 600 depicted in Fig. 6. As seen in Fig. 6, thresholds T1 610 and T2 620 are determined on the basis that SAD measurement values below the threshold T1, which are considered as mostly due to noise, are mapped to zero; SAD measurement values above the threshold T2, which result from motion or a scene change with a high confidence, are mapped to a value Cmax 630; and SAD measurement values in between the thresholds T1 and T2, which retain some uncertainty, are mapped linearly to the range [0, Cmax]. In an exemplary embodiment, in which pixel data is twelve (12) bit, the threshold T1 has a value of two hundred and fifty-six (256) and the threshold T2 has a value of three hundred and eighty-four (384). In another embodiment, a single threshold (e.g., T1 = T2 = 384) may be used to determine a binary value for the motion confidence degree. If the motion confidence degree MCf stored within the memory 206 is non-zero, then the motion spread frame score is updated by incrementing the motion spread frame score according to Equation (3) below:

    M1 <- M1 + 1    (3)

Subsequently at step 710, if the processor 205 determines that the block in the forward weave has a significantly larger feathering artefact value than the corresponding block in the backward weave, then the method 700 proceeds to step 730. Otherwise, the method 700 proceeds to step 720.
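The nonlinear function of Fig. 6 can be sketched as a piecewise-linear mapping. The thresholds default to the 12-bit example values T1 = 256 and T2 = 384 given above; the numerical value of Cmax is not stated in the text, so the default of 255 here is an assumption.

```python
def motion_confidence(sad, t1=256, t2=384, c_max=255):
    # Map a block SAD measurement to a motion confidence degree MCf.
    if sad <= t1:        # mostly noise: map to zero
        return 0
    if sad >= t2:        # confident motion or scene change
        return c_max
    # values between the thresholds are mapped linearly to [0, c_max]
    return (sad - t1) * c_max // (t2 - t1)
```

Setting `t1 == t2` reproduces the single-threshold embodiment that yields a binary confidence value.
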
Specifically, at step 710, the processor 205 determines if Af > 5Ab, and if so, the method 700 proceeds to step 730. At step 730, the processor 205 compares the forward weave score Af to a predetermined fixed threshold Tmin stored in the memory 206. The threshold Tmin is equal to ten (10) for an 8-bit processing pipeline and is equal to one hundred and sixty (160) for 12-bit processing. The threshold Tmin is configured according to an expected or measured noise level and may be adaptive in some embodiments. If the forward weave score Af is below the threshold Tmin at step 730, then the method 700 concludes. Otherwise, if the forward weave score Af is above the threshold Tmin, then the method 700 proceeds to step 770. At step 770, the first (high contrast) frame score stored within the memory 206 is updated by the processor 205. In particular, at step 770, the processor 205 adds the motion confidence degree MCf to the first (high contrast) frame score in accordance with Equation (4) as follows:

    F1,1 <- F1,1 + MCf    (4)

Returning to decision 710, if the processor 205 determines that Af <= 5Ab, then the method 700 proceeds to step 720 where the same test is applied to the backward weave. Specifically, at step 720, if the processor 205 determines that Ab > 5Af, then the method 700 proceeds to step 740. At step 740, the backward weave score Ab is compared by the processor 205 to the predetermined fixed threshold Tmin stored in the memory 206, where Tmin has the same value as used in step 730. If the backward weave score Ab is below the threshold Tmin, then the method 700 concludes. Otherwise, if the backward weave score Ab is above the threshold Tmin, then the method 700 proceeds to step 780. At step 780, the processor 205 updates the second (high contrast) frame score stored within the memory 206 by adding the motion confidence degree MCf to the second (high contrast) frame score in accordance with Equation (5) as follows:

    F1,2 <- F1,2 + MCf    (5)

Returning to decision 720, if the processor 205 determines that Ab <= 5Af, then the forward and backward weaves have comparable feathering artefact values. In this case, the method 700 proceeds to step 750 where the processor 205 performs a final test on the forward weave score Af to determine if the forward weave score Af is larger than 4Tmin. If the forward weave score Af is larger than 4Tmin, then the method 700 proceeds to step 770. Otherwise, the method 700 proceeds directly to step 760. At step 770, the processor 205 updates the forward weave frame score stored within the memory 206 as per Equation (4) above. At step 760, the processor 205 performs a final test on the backward weave score Ab to determine if the backward weave score Ab is larger than 4Tmin. If the backward weave score Ab is larger than 4Tmin, then the method 700 proceeds to step 780 where the backward weave frame score stored within the memory 206 is updated as per Equation (5) above, before the method 700 concludes.

As described above, at step 460 of the method 400, the processor 205 updates the third F2,1 and fourth F2,2 (large-area) frame-score values for the forward and backward weaves, respectively. Fig. 10 shows a function, cap(x, Tcap), 1000 that maps an input value, x, less than a threshold (Tcap) 1010 to the input value x. All larger values of x are mapped to the threshold value Tcap, as seen in Fig. 10. At step 460, the processor 205 uses the function 1000 to update the third and fourth (large-area) frame scores in accordance with Equations (6) and (7) below:
F2,1 ← F2,1 + cap(Af, CA) × cap(Mf, CM) (6)

F2,2 ← F2,2 + cap(Ab, CA) × cap(Mf, CM) (7)

where CA is set to one-thousand and twenty-three (1023) and CM is set to two-hundred and fifty-five (255). The threshold values CA and CM are selected so that the effect of large values, which are indicative of high contrast artefact or motion, is effectively removed from the third F2,1 and fourth F2,2 frame score values.

A method 800 of detecting bad weaves, as executed in step 480 of Fig. 4, will now be described with reference to Fig. 8. The method 800 uses the accumulated high contrast frame scores to determine the existence of unacceptable levels of feathering artefact in the frames generated by forward and backward weaving. The method 800 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

The method 800 begins at step 801, where the processor 205 fetches the high contrast forward and backward frame scores from the memory 206. At the next step 802, the processor 205 determines if both the forward and backward weaves have large high contrast artefact frame score values by comparing the smaller of the forward high contrast frame score, F1,1, and the backward high contrast frame score, F1,2, with a threshold Tmax_good. In an exemplary embodiment, Tmax_good is a multiple of the frame motion spread score Mf. The threshold, Tmax_good, is set to a value over which the output weaves are bad weaves with visible feathering artefact. In one embodiment, the threshold Tmax_good is adapted based on a total number of blocks with motion or similarly based on Mf. In another embodiment, a fixed value is used for Tmax_good. In still another embodiment, which uses a binary value for the motion confidence degree value, the threshold, Tmax_good, may be set to a fraction of the output frame-size.

When a YES path is chosen at step 802, both the forward and backward weaves are apparently bad weaves with visible feathering artefact. Therefore, the weave direction is set to "neither" in the next step 808 and the method 800 concludes.

When a NO path is chosen at step 802, however, the method 800 continues with checking for small frame scores in step 803. At step 803, the processor 205 compares the larger of the forward high contrast frame score, F1,1, and the backward high contrast frame score, F1,2, with a threshold Tmin_bad stored in the memory 206 or the hard disk drive 210. The threshold, Tmin_bad, is a threshold below which the forward and backward weaves are clean from feathering artefact. The threshold, Tmin_bad, is a small number (e.g., ten (10) times the Cmax value 630). The threshold, Tmin_bad, is used to filter out any outliers before subsequently performing a relative comparison in step 804. If the processor 205 determines, at step 803, that the larger frame score is less than the threshold, Tmin_bad, then both weaves are considered free of feathering artefact, and the method 800 proceeds to step 806. Otherwise, the method 800 proceeds to step 804. At step 806, the processor 205 sets the weave direction to "both".

At step 804, the processor 205 compares the frame score associated with the forward weave with the frame score associated with the backward weave. The processor 205 may access the frame scores from the memory 206. The processor 205 determines one of the forward weave and the backward weave as a clean weave, and the other as a bad weave, when one of the associated frame scores is significantly larger than the other. An exemplary criterion for being significantly larger is that the larger frame score is K times larger than the smaller frame score, where K = 2. More generally, K is in the range 1.5 < K < 3. When one frame score is larger than the other by a factor of K, the YES path is chosen at step 804 and the method 800 continues to step 809. Otherwise, the method 800 follows the NO path to step 805.

At step 809, the processor 205 performs a check to determine the clean weave direction. When the frame score associated with the forward weave is larger than the frame score associated with the backward weave, the method 800 proceeds to step 812. At step 812, the weave direction is set to "backward". Otherwise, the weave direction is set to "forward" in step 811.
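The high contrast decision logic of steps 802 to 812 can be sketched as follows. This is a minimal Python sketch, not part of the specification: the function name, argument names, and threshold values are illustrative only, and the comparable-score case (resolved by the Tfeathered test at step 805) is simply flagged rather than resolved.

```python
def high_contrast_weave_direction(f11, f12, t_max_good, t_min_bad, k=2.0):
    """Classify the weave direction from the forward (f11) and backward (f12)
    high contrast frame scores, mirroring steps 802-812 of method 800."""
    # Step 802: both scores large -> both weaves show visible feathering.
    if min(f11, f12) > t_max_good:
        return "neither"                                 # step 808
    # Step 803: both scores small -> both weaves are clean.
    if max(f11, f12) < t_min_bad:
        return "both"                                    # step 806
    # Step 804: relative comparison; a score K times larger than the other
    # marks that direction's weave as bad, so the clean direction is the
    # opposite one (steps 809, 811, 812).
    if max(f11, f12) >= k * min(f11, f12):
        return "backward" if f11 > f12 else "forward"
    # Comparable scores: left to the T_feathered test at step 805 (not shown).
    return "comparable"
```

For example, with a large forward score and a small backward score the bad weave is the forward one, so the sketch reports "backward" as the clean direction, matching step 812.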
Again, the weave direction is stored in memory 206 as a weave direction flag.

When the frame scores are comparable (i.e., the NO path is chosen at step 804), the method 800 proceeds to step 805. At step 805, the processor 205 compares the frame scores against a threshold, Tfeathered. A decision is made at step 805 on whether both the forward and backward weaves are bad weaves with visible feathering artefact or both weaves are clean weaves. In particular, if the frame scores associated with the forward and backward weaves are larger than the Tfeathered threshold, then the method 800 proceeds to step 814. Otherwise, the method 800 proceeds to step 806. At step 806, the processor 205 sets the weave direction to "both". In contrast, at step 814, the processor 205 sets the weave direction to "neither". The Tfeathered threshold is used to differentiate between the case in which both weaves are clean (e.g., at stationary sequences) and the case in which both weaves have visible feathering artefact (e.g., interlaced input content with motion between different video fields). In one embodiment, the threshold, Tfeathered, is equal to eighty (80) times the Cmax value 630.

A method 900 of determining the weave direction, as executed in step 495, will now be described in detail with reference to Fig. 9. The method 900 may be implemented as software resident on the hard disk drive 210 and being controlled in its execution by the processor 205.

The method 900 begins at step 901, where the processor 205 fetches the third and fourth large-area frame scores associated with the forward and backward weaves from memory 206. In step 902, the frame scores associated with the forward and backward weaves are compared against a threshold, Tmin_b, stored in memory 206 and/or the hard disk drive 210. The threshold Tmin_b is configured so that a frame score below the threshold Tmin_b indicates a clean weave with high confidence.
If the processor 205 determines that the larger frame score is below the threshold, Tmin_b, then the method 900 proceeds to step 906. Otherwise, the method 900 proceeds to step 903. At step 906, the processor 205 sets the weave direction to "both". Again, the processor 205 may set the weave direction to "both" by updating a flag stored in memory 206.

At step 903, the processor 205 compares the frame score associated with the forward weave with the frame score associated with the backward weave. If the larger frame score is at least K times larger than the smaller frame score, then the method 900 proceeds to step 904. Otherwise, the method 900 proceeds to step 907. At step 907, the weave direction is set to "unknown". The "unknown" weave direction is used to indicate that a prediction based on previously observed patterns of weave direction can be used.

At step 904, the processor 205 compares the frame score associated with the forward weave with the frame score associated with the backward weave. If the frame score associated with the forward weave is larger than the frame score associated with the backward weave, then the method 900 proceeds to step 908. Otherwise, the method 900 proceeds to step 905. At step 908, the weave direction is set to "backward". At step 905, the weave direction is set to "forward". Again, the weave direction may be set at steps 905 and 908 by updating the weave direction flag stored in memory 206.

In an exemplary embodiment, the same ratio (K) value is used in step 903 as was used in step 804 of the method 800 depicted in Fig. 8. However, different K values may be used for detecting bad weaves using the high-contrast frame scores as in method 800 and for detecting bad weaves using the large-area frame scores as in method 900.

In one embodiment, the clean weave detection results generated in the clean weave detection module 342 are passed to the pattern tracking state-machine 344, as shown in Fig. 3.
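The large-area decision of method 900 (steps 902 to 908) can be sketched in the same illustrative style. As before, this is a hedged Python sketch rather than the specification's implementation; the names and values are assumptions, and the "unknown" result corresponds to deferring to pattern-based prediction at step 907.

```python
def large_area_weave_direction(f21, f22, t_min_b, k=2.0):
    """Determine the weave direction from the large-area frame scores f21
    (forward) and f22 (backward), mirroring steps 902-908 of method 900."""
    # Step 902: both scores below T_min_b -> both weaves are clean.
    if max(f21, f22) < t_min_b:
        return "both"                                    # step 906
    # Step 903: relative comparison with the ratio K.
    if max(f21, f22) >= k * min(f21, f22):
        # Steps 904, 905, 908: the direction with the larger score is the
        # bad weave, so the clean direction is the opposite one.
        return "backward" if f21 > f22 else "forward"
    # Step 907: inconclusive; defer to the pattern tracking state machine.
    return "unknown"
```

In use, an "unknown" result would be handed to the pattern tracking state machine described next, which can predict a direction from previously observed 3:2 or 2:2 pull-down patterns.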
The state machine 344 is updated upon completion of each output frame 395. The state machine 344 factors information about any detected pull-down pattern into the classification. As described above, the pattern tracking state-machine 344 may be implemented as software resident in the hard disk drive 210 and being controlled in its execution by the processor 205. Alternatively, the pattern tracking state-machine 344 may be implemented in dedicated hardware such as one or more integrated circuits. When no bad weave is detected by the clean weave detection unit 342 (i.e., the weave direction generated in unit 342 is "both"), the state machine 344 is consulted for prediction. When prediction is possible based on the previously tracked decisions, the prediction is used to assist in selecting the output of the clean weave detection module 342 and in updating the state-machine 344. In one embodiment, the state machine 344 explicitly detects and tracks 3:2 and 2:2 pull-down patterns. The state machine 344 can be helpful in maintaining or assisting the per-frame decision, especially during low-motion (e.g., sub-pixel motion) sections of the input video fields.

Industrial Applicability

It is apparent from the above that the arrangements described are applicable to the video processing and computer industries.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive. In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (20)

  3. The apparatus according to claim 2, further comprising a pattern tracking module that is updated based on output of the pull-down detection module upon completion of each frame.
  4. The apparatus of claim 2, where the clean weave detection module determines first and second frame scores, the first and second frame scores being indicative of the level of high contrast feathering artefact in the combination of a reference video field respectively with a preceding video field and a subsequent video field.
  5. The apparatus according to claim 4, where the clean weave detection module determines third and fourth frame scores, the third and fourth frame scores being indicative of low contrast, wide area feathering artefact in the combination of a reference video field respectively with a preceding video field and a subsequent video field.
  6. The apparatus according to claim 5, where said first and third frame scores are determined from a plurality of first block artefact scores for blocks from a reference and subsequent input video field in combination with a block motion score for blocks from a preceding and subsequent input video field.
  7. The apparatus according to claim 5, where said second and fourth frame scores are determined from a plurality of second block artefact scores for blocks from a reference and subsequent input video field in combination with a block motion score for blocks from a preceding and subsequent input video field.
  8. The apparatus according to claim 5, where the third and fourth frame scores are used to determine a weave direction when the first and second frame scores indicate that neither combination of fields contains significant levels of high contrast feathering artefact.
  9. The apparatus according to claim 8, where a prediction based on a tracked pattern is used to determine a weave direction when the third and fourth frame scores indicate that neither combination of fields contains significant levels of low contrast large area feathering artefact.
  10. A computer implemented method of generating a deinterlaced output video frame from a set of four input video fields, said method comprising the steps of:
storing a sequence of four consecutive video fields in a memory;
processing pixels from the temporally first three of the stored video fields;
processing pixels from the temporally second three of the stored video fields; and
selecting between the processed pixels from the temporally second three video fields based on a previous weave direction result to generate the deinterlaced output video frame.
  11. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to generate a deinterlaced output video frame from a set of four input video fields, said program comprising:
code for storing a sequence of four consecutive video fields in a memory;
code for processing pixels from the temporally first three of the stored video fields;
code for processing pixels from the temporally second three of the stored video fields; and
code for selecting between the processed pixels from the temporally second three video fields based on a previous weave direction result to generate the deinterlaced output video frame.

  12. A method of detecting from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefact, said method comprising the steps of:
determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;
determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field respectively;
detecting which of the combinations of video fields contain feathering artefact, based on the first, second, third and fourth frame scores; and
outputting one or more of the combinations of video fields.
  13. The method according to claim 12, further comprising the step of determining, for each of a plurality of corresponding blocks of video data of the reference video field and the subsequent video field, a plurality of first artefact values occurring between the reference video field and the subsequent video field.
  14. The method according to claim 13, further comprising the step of determining, for each of a plurality of corresponding blocks of video data of the reference video field and a preceding video field, a plurality of second artefact values occurring between the reference video field and the preceding video field.
  15. The method according to claim 6, further comprising the step of comparing said third and fourth frame scores when a comparison of said first and second frame scores indicates that combinations of the reference video field with both the subsequent video field and the preceding video field correspond to a progressive source frame.
  16. The method according to claim 14, wherein the first and second artefact values are determined by a block motion score measured using a block of video data from the preceding video field and a block of video data from the subsequent video field.

  17. An apparatus for detecting from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefacts, said apparatus comprising:
first frame score determining module for determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;
second frame score determining module for determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field respectively; and
detecting module for detecting which of the combinations of video fields contain feathering artefact, determined from the first, second, third and fourth frame scores, and for outputting one or more of the combinations of video fields.
  18. The apparatus according to claim 17, further comprising first artefact value determining module for determining, for each of a plurality of corresponding blocks of video data of a reference video field and a subsequent video field, a plurality of first artefact values occurring between the reference video field and the subsequent video field.
  19. The apparatus according to claim 18, further comprising second artefact value determining module for determining, for each of a plurality of corresponding blocks of video data of the reference video field and a preceding video field, a plurality of second artefact values occurring between the reference video field and the preceding video field.
  20. A computer readable medium, having a program recorded thereon, where the program is configured to make a computer execute a procedure to detect from a series of video fields, adjacent pairs of video fields that when combined by weaving produce a frame containing feathering artefacts, said program comprising:
code for determining first and second frame scores for the series of video fields, where the first frame score indicates the presence of high contrast feathering artefact in a combination of a reference video field and a subsequent video field and the second frame score indicates the presence of high contrast feathering artefact in a combination of the reference video field and a preceding video field;
code for determining third and fourth frame scores for the series of video fields, where the third frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field with the subsequent video field and the fourth frame score indicates the presence of large area low contrast feathering artefact in the combination of the reference video field and the preceding video field respectively;
code for detecting which of the combinations of video fields contain feathering artefact, determined from the first, second, third and fourth frame scores; and
code for outputting one or more of the combinations of video fields.
  21. The computer readable medium according to claim 20, further comprising code for determining, for each of a plurality of corresponding blocks of video data of a reference video field and a subsequent video field, a plurality of first artefact values occurring between the reference video field and the subsequent video field.
  22. The computer readable medium according to claim 21, further comprising code for determining, for each of a plurality of corresponding blocks of video data of the reference video field and a preceding video field, a plurality of second artefact values occurring between the reference video field and the preceding video field.
  23. An apparatus for generating a deinterlaced output video frame from a set of four input video fields, said apparatus being substantially as hereinbefore described with reference to any one of the embodiments as that embodiment is shown in the accompanying drawings.
  24. A computer implemented method of generating a deinterlaced output video frame from a set of four input video fields, said method being substantially as hereinbefore described with reference to any one of the embodiments as that embodiment is shown in the accompanying drawings.

DATED this 4th Day of December 2008
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2008255189A 2008-12-09 2008-12-09 Method of detecting artefacts in video data Abandoned AU2008255189A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008255189A AU2008255189A1 (en) 2008-12-09 2008-12-09 Method of detecting artefacts in video data


Publications (1)

Publication Number Publication Date
AU2008255189A1 true AU2008255189A1 (en) 2010-06-24

Family

ID=42270381

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008255189A Abandoned AU2008255189A1 (en) 2008-12-09 2008-12-09 Method of detecting artefacts in video data

Country Status (1)

Country Link
AU (1) AU2008255189A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381858A (en) * 2020-11-13 2021-02-19 成都商汤科技有限公司 Target detection method, device, storage medium and equipment
CN112381858B (en) * 2020-11-13 2024-06-11 成都商汤科技有限公司 Target detection method, device, storage medium and equipment

Similar Documents

Publication Publication Date Title
US8682100B2 (en) Field sequence detector, method and video device
US6055018A (en) System and method for reconstructing noninterlaced captured content for display on a progressive screen
CN101803363B (en) Method and apparatus for line-based motion estimation in video image data
US6269484B1 (en) Method and apparatus for de-interlacing interlaced content using motion vectors in compressed video streams
US7889233B2 (en) Video image processing with remote diagnosis and programmable scripting
JP5038483B2 (en) Video data deinterlacing
US8031267B2 (en) Motion adaptive upsampling of chroma video signals
US7916784B2 (en) Method and system for inverse telecine and field pairing
US7256835B2 (en) Apparatus and method for deinterlacing video images
CN1953532A (en) Television/cinema scheme identification apparatus and identification method
JP2009260930A (en) Method of determining field dominance in video frame sequence
JP2005517251A (en) Method and unit for estimating a motion vector of a pixel group
US8379146B2 (en) Deinterlacing method and apparatus for digital motion picture
US7616693B2 (en) Method and system for detecting motion between video field of same and opposite parity from an interlaced video source
US20090324102A1 (en) Image processing apparatus and method and program
US8274605B2 (en) System and method for adjacent field comparison in video processing
US20080111917A1 (en) Method and system for determining video deinterlacing strategy
AU2008255189A1 (en) Method of detecting artefacts in video data
US7956928B2 (en) Apparatus and method for video de-interlace
US7339626B2 (en) Deinterlacing video images with slope detection
EP1762092A1 (en) Image processor and image processing method using scan rate conversion
JP2006510267A (en) Method for recognizing film and video occurring simultaneously in a television field
US7750974B2 (en) System and method for static region detection in video processing
KR20060047491A (en) Auxiliary information processing method, interpolation method, signal processor and interpolator
AU2006252189B2 (en) Method and apparatus for determining quality appearance of weaved blocks of pixels

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted