
WO2005074143A1 - System and method for encoding/decoding live audio/video information - Google Patents


Info

Publication number
WO2005074143A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
segment
values
encoded
dominant
Application number
PCT/AU2005/000128
Other languages
French (fr)
Other versions
WO2005074143A8 (en)
Inventor
Adam Leslie Clark
Original Assignee
Adams Platform Pty Ltd
Priority claimed from US 10/770,952 (US20050196049A1)
Priority claimed from US 10/771,097 (US7505045B2)
Priority claimed from US 10/771,096 (US20050180641A1)
Application filed by Adams Platform Pty Ltd
Publication of WO2005074143A1
Publication of WO2005074143A8

Classifications

    • H: ELECTRICITY
      • H03: ELECTRONIC CIRCUITRY
        • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
          • H03M 7/00: Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
            • H03M 7/30: Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
              • H03M 7/3059: Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
                • H03M 7/3064: Segmenting
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/10: using adaptive coding
              • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N 19/103: Selection of coding mode or of prediction mode
                  • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
                • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
              • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                • H04N 19/136: Incoming video signal characteristics or properties
              • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                • H04N 19/17: the unit being an image region, e.g. an object
                  • H04N 19/172: the region being a picture, frame or field
                • H04N 19/182: the unit being a pixel
                • H04N 19/186: the unit being a colour or a chrominance component
            • H04N 19/50: using predictive coding
              • H04N 19/593: involving spatial prediction techniques

Definitions

  • the present invention relates generally to communication systems and, in particular, to a system and method for encoding/decoding live video for transmission over low bandwidth communication links.
  • Video signals can be digitized, encoded, and subsequently decoded in a manner which significantly decreases the number of bits necessary to represent a decoded reconstructed video without noticeable, or with acceptable, degradation in the reconstructed video.
  • Video coding is an important part of many applications such as digital television transmission, video conferencing, video databases, etc.
  • In video conferencing applications, for example, a video camera is typically used to capture a series of images of a target, such as a meeting participant or a document.
  • the series of images is encoded as a data stream and transmitted over a communications channel to a remote location.
  • the data stream may be transmitted over a phone line, satellite, an integrated services digital network (ISDN) line, or the Internet.
  • connection of a user interface device to the Internet may be made by a variety of communication channels, including twisted pair telephone lines, coaxial cable, and wireless signal communication via local transceivers or orbiting satellites.
  • Most user interface device Internet connections are made by relatively low-bandwidth communication channels, mainly twisted pair telephone lines, due to the existing infrastructure of such telephone lines and the cost of implementing high-bandwidth infrastructure. This constrains the type of information that may be presented to users via the Internet connection, because video transmissions using presently available coding techniques generally require greater bandwidth than twisted pair telephone wires can provide for optimal viewing.
  • the encoding process is typically implemented using a digital video coder/decoder (codec), which divides the images into blocks and compresses the blocks according to a video compression standard, such as the ITU-T H.263 and H.261 standards.
  • a block may be compressed independent of the previous image or as a difference between the block and part of the previous image.
  • the data stream is received at a remote location, where it is decoded into a series of images, which may be viewed at the remote location. Depending on the equipment used, this process typically occurs at a rate of one to thirty frames per second.
  • An efficient hybrid video coding system is based on the ITU-T Recommendation H.263.
  • the ITU-T Recommendation H.263 adopts a hybrid scheme of motion-compensated prediction to exploit temporal redundancy and transform coding using the discrete cosine transform (DCT) of the remaining signal to reduce spatial redundancy.
  • Half pixel precision is used for the motion compensation, and variable length coding is used for the symbol representation.
  • these techniques still do not provide adequate results for low-bandwidth connections such as dial-up connections or wireless device networks (e.g., GSM or CDMA) that have data transmission rates as low as 9.6 kilobits/sec, 14.4 kilobits/sec, 28.8 kilobits/sec or 56 kilobits/sec.
  • a method and apparatus are utilized and configured to encode a live or streamed video stream by encoding frames of audio/video data, segment by segment, each frame including a number of pixels and each pixel having a plurality of pixel color components.
  • the encoding process includes creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components.
  • the encoder determines a set of segment reference pixels, wherein each segment reference pixel is a pixel within the segment having a most intense dominant pixel color.
  • prior to encoding the pixel data, the encoder creates a frame group file to store a header, the frame table, and segment reference pixels.
  • the encoder may also store any associated audio/video data, synchronization information, or tags into the frame group file or may create one or more separate files for such information.
  • the plurality of pixel color components stored within the table may include any or all of luminance, chrominance, and color depth information.
  • the encoder scales down each entry to reduce the amount of stored data. In another embodiment, the non-dominant color values are also scaled down and entered into the table.
  • determining the set of segment reference pixels includes comparing, on a pixel by pixel basis for each segment, a current pixel color value with a previously stored dominant pixel color value and storing the plurality of pixel color components and pixel parameters of the pixel with the most intense dominant pixel color component.
  • the plurality of pixel color components include at least one of the sets of primary color components (red, green, and blue; or cyan, magenta, and yellow), and the segment reference pixels include the primary color components and black. Black may be determined by comparing an average or aggregation of the red, green, and blue pixel component values to a black threshold value.
  • the encoded segment of a frame may include a line of a frame, a half of a frame, or other fraction of a frame.
  • the encoder writes a pointer to the next frame group within the frame group file to ensure the decoder decodes the frame groups in the correct sequence.
  • redundant encoded pixel values of the frame group table share common table entries and therefore share identical dominant pixel color components and identical pixel parameter values.
  • the redundant encoded pixel values share dominant pixel color components and pixel parameters values that are similar to one another within a tolerance range.
  • an encoder including: means to create, pixel by pixel, a frame table of encoded segments of audio/video data, wherein each pixel is represented by a single pixel color component; and means to create a set of color scaling pixels for each encoded segment, wherein each color scaling pixel is selected based upon a pixel having a most significant color intensity value with respect to similarly colored pixels within the segment.
  • a machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: encoding a frame of audio/video data, segment by segment, including a number of pixels each having a plurality of pixel color components by: creating a frame group table in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; and determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels is a pixel within each one of the encoded segments having a most intense dominant pixel color value.
  • a method of communicating live audio/visual information over a communications link including: encoding, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value; communicating the frame group table and the segment reference pixels over a network to a receiver; and at the receiver, decoding the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
  • a system including: an encoder to encode, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components and to determine a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value; a server to communicate the frame group table and the segment reference pixels over a network to a receiver ; and a decoder coupled to the receiver to decode the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels.
  • a method including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
  • a decoder including: means to generate, pixel by pixel and segment by segment, a frame of audio/video data based upon a table of encoded pixel parameter values, wherein each pixel is represented by a single color entry, and means to scale a representative set of segment reference pixel values according to the table of encoded pixel parameter values, wherein a segment is a fractional portion of the frame of audio/video data.
  • a machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
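Taken together, the aspects above imply a concrete container layout. The following is a minimal sketch, in Python, of one possible in-memory representation of the frame group file described here; all class and field names are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EncodedPixel:
    dominant: str                  # dominant color label: 'R', 'G', 'B', or 'K' (black)
    value: int                     # scaled dominant color value (e.g., on a 1-10 scale)
    luminance: int                 # scaled luminance
    chrominance: int               # scaled chrominance
    depth: int                     # color depth (e.g., average of luminance/chrominance)
    same_as: Optional[int] = None  # pointer to an earlier identical or similar entry

@dataclass
class SegmentReferencePixels:
    # Most intense pixel per dominant color within one segment (full-scale values).
    red: Optional[dict] = None
    green: Optional[dict] = None
    blue: Optional[dict] = None
    black: Optional[dict] = None

@dataclass
class FrameGroupFile:
    header: dict                                          # resolution, fps, A/V sync info
    frame_tables: list = field(default_factory=list)      # one list of EncodedPixel per frame
    reference_pixels: list = field(default_factory=list)  # one SegmentReferencePixels per segment
    audio: Optional[bytes] = None                         # optional associated audio data
    next_group: Optional[str] = None                      # pointer to the next frame group
```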
  • Figure 1 is a block diagram of an exemplary system for compressing streamed or live video, according to one embodiment of the present invention.
  • Figure 2 illustrates a sequence of video frames with its corresponding raw video data, according to one embodiment of the invention.
  • Figure 3A illustrates the encoding of a raw video table, according to one embodiment of the present invention.
  • Figure 3B illustrates a segment reference pixel table, according to one embodiment of the present invention.
  • Figure 4 illustrates the decoding of a compressed video file, according to one embodiment of the present invention.
  • Figure 5 is a flow diagram showing an example of an encoding process, according to one embodiment of the present invention.
  • Figure 6 is a flow diagram illustrating an example of a decoding process, according to one embodiment of the present invention.
  • Figure 7 illustrates an exemplary network architecture for use according to one embodiment of the present invention.
  • Figure 8 illustrates an exemplary computer architecture for use according to one embodiment of the present invention.
  • a system and method for encoding video are described.
  • the present encoding system and method overcome prior deficiencies in streaming live video content by encoding and decoding video data such that high-quality video transmission over low bandwidth communication links is possible.
  • numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
  • the present invention can be implemented by an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include a general-purpose computer, selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • the methods of the invention may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems.
  • the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
  • FIG. 1 is a block diagram of an exemplary system 100 for live video compression, according to one embodiment of the present invention.
  • Video compression system 100 is designed to encode and deliver high quality live video over low bandwidth transmission links (e.g., 9.6-56 kbps).
  • video compression system 100 obtains video from a live feed source 104, such as a camera recording a live sporting event.
  • the source contributing video data to be streamed "live” may also be any device capable of delivering video content, such as a digital versatile disc (DVD), a computer storage device, or digital video tape.
  • analog video storage devices may also be used so long as the video stored thereon is first converted to a digital format prior to "live” encoding.
  • Live feed source 104 produces digital output video signals in a raw data file format 106.
  • audio signals accompany the video signals from source devices such as live feed source 104.
  • the audio signals may be digitized and/or compressed and provided along with raw video data 106, either in a separate file or appended to the video file.
  • the audio data may be processed independent of raw video data 106 according to any audio compression method including the MPEG (Moving Picture Experts Group) "MP3" or Microsoft's "wav” format.
  • Such audio may be synchronized with video data file 106 at any point within compression system 100.
  • Raw video data 106, including a start stream header, is provided to an encoder 112.
  • the start stream header is included at the start of the stream of raw video data 106 and may include information regarding the audio data and raw video data 106, such as video frame starting and ending points, video tag information, the number of video frames per second, frame resolution (i.e., the number of pixels per frame), color depth information, audio synch information, and similar data regarding the video data stream that may be used by encoder 112.
  • Compression system 100 uses encoder 112 to compress raw video data 106 for streaming video to a decoder 118 at or near real-time. The details of the encoding process performed by encoder 112 will be discussed below.
  • Encoder 112 produces a compressed or encoded video file 114, which may include for each frame group, segment reference pixel values, encoded pixel data for all frames within the frame group, and header information for an encoded frame group, such as resolution settings for decoder 118 and audio/video synch information.
  • a trailer within compressed video file 114 may be generated that may include other audio/video information such as a pointer identifying the next frame group to be decoded.
  • compressed video file 114 is a frame group table of pixel parameter values for each pixel of each video frame including the acquired video. Encoder 112 may also produce an audio output file that may or may not be compressed, as discussed above.
  • reference to compressed video file 114 includes any audio/visual data, optional data and/or header and trailer information. It should be appreciated, however, that in other embodiments, the header, the trailer, the compressed video data, and audio data may be written to separate files or appended to one or more files in any combination thereof.
  • Compressed video file 114 may be transmitted over a network 116 to decoder 118.
  • Decoder 118 decodes compressed video file 114 to include a decompressed video file 120 and synchronizes the audio data (if any) for audio/visual viewing via a playback device 122.
  • Playback device 122 may be any device accepting video data such as a television, cellular phone display, personal computer, personal digital assistant (PDA), automobile navigation system, or other similar device capable of displaying video data.
  • FIG. 2 illustrates a sequence of video frames with its corresponding table of raw video data, according to one embodiment of the present invention.
  • Video sequence 200 is composed of a number of video frames 2101-210n. Each video frame 210 is composed of thousands of pixels. The exact number of pixels in a frame depends upon the digital video format and, more specifically, the frame resolution used.
  • the present method and system support High Definition Digital TV (HDTV); National TV Standards Committee (NTSC), having 30 interlaced frames per second at 525 lines of resolution with an audio FM frequency and an MTS signal for stereo; Phase Alternating Line (PAL) standards, having 25 interlaced frames per second at 625 lines of resolution; Séquentiel couleur à mémoire (SECAM); and similar protocols. It should be noted, however, that any analog audio/video format is to be converted to a digital audio/video format prior to encoding by encoder 112.
  • Live feed source 104 generates frames 210 and provides raw video data file 106 that describes video frames 2101-n and their corresponding pixels.
  • Raw video data file 106 contains raw video frame data tables 2201-n, where n represents the number of frames, and each row 231-236 corresponds to a pixel in each video frame 210; the rows between 235 and the last pixel 236 have been omitted for clarity.
  • the columns of each raw video frame data table 220 describe the pixel numbers 222, red color component values 223, green color component values 224, blue color component values 225 (RGB values), luminance values 226, and chrominance values 227 for each pixel in respective frame 210.
  • other color spaces may be used, such as cyan, magenta, and yellow (CyMgYl).
  • each pixel parameter 223-227 of each pixel 222 in each frame table 220 requires multiple bytes of information to be stored per pixel, thus creating large file sizes for multiple frames of video data.
  • Considering that high quality video requires a frame rate of at least 25 frames per second, or 1,500 frames per minute, it should be apparent that the amount of storage and/or bandwidth required to stream and play an uncompressed or slightly compressed video file is quite large.
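As a rough illustration of that scale (the frame size and byte counts below are assumptions chosen for easy arithmetic, not figures from the text):

```python
# Raw per-pixel table size for one frame, assuming a 640x480 frame and five
# multi-byte parameters (R, G, B, luminance, chrominance) at 2 bytes each.
pixels_per_frame = 640 * 480                      # 307,200 pixels
bytes_per_pixel = 5 * 2                           # five parameters, 2 bytes each
frame_bytes = pixels_per_frame * bytes_per_pixel  # 3,072,000 bytes (~3 MB)
stream_bytes_per_sec = frame_bytes * 25           # ~76.8 MB/s at 25 frames per second

dialup_bytes_per_sec = 56_000 / 8                 # a 56 kbit/s dial-up link carries ~7 kB/s
print(stream_bytes_per_sec / dialup_bytes_per_sec)  # ~11,000x the link capacity
```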
  • Figure 3A illustrates a table of raw video data 2201 encoded into a frame group table 3201, according to one embodiment of the present invention.
  • the pixel numbers in column 2221 are mapped to pixel numbers in column 3221.
  • the pixel value sets, i.e., RGB values 2231-2251, are each mapped to a single scaled dominant color value 3231.
  • the calculation of dominant color value 3231 will be discussed below.
  • Luminance value 2261 is mapped to a scaled luminance value 3261 and chrominance value 2271 is mapped to a scaled chrominance value 3271.
  • the calculation of scaled chrominance and luminance values 3271, 3261 will also be discussed below.
  • Each compressed video table 320 may further include color depth values 328.
  • color depth value 328 is the average of the scaled chrominance and luminance values 327, 326.
  • as encoder 112 populates compressed video table 320, if a row of compressed pixel data is determined to be identical or at least sufficiently similar to a previously populated row, encoder 112 places a pointer to the appropriate previously entered row, as illustrated with reference to row 334.
  • Figure 3B illustrates pixel reference value sets 3501-n generated by encoder 112, where n is the number of encoded segments for each frame group.
  • the segment reference pixel value sets 350 may have up to four (4) reference pixel values corresponding to red 361, green 362, blue 363 and black 364 for each segment of a video frame 210.
  • the segment reference pixels are selected based upon the video frame 210's most intense dominant pixel color values for each encoded segment, as illustrated by red 361, green 362, blue 363, and black 364.
  • the most intense dominant pixel color value is based on the highest raw pixel color values.
  • the black segment reference pixel 364, according to one embodiment of the present invention, may be determined by comparing the color component values (e.g., RGB) in aggregate.
  • the segment reference pixel values may also include pixel parameter values, such as luminance value 3561 and chrominance value 3571, for each of the segment reference pixel colors 361-364.
  • the segment reference pixel values may also be scaled or alternatively, the reference pixel values may be full-scale values corresponding to the raw data format.
  • additional reference values may be used for color depth or other similar graphics data or pixel parameters. Calculation of pixel reference value sets 350 will be discussed in greater detail below.
  • FIG. 4 illustrates an exemplary decoding process 400 for a compressed video file 114, according to one embodiment of the present invention.
  • Compressed video file 114 may include a frame group header, segment reference pixel values 350, and encoded video tables 320 for each video frame 210.
  • Decoder 118 processes compressed video file 114 to provide a decoded video file 120.
  • Decoded video file 120 includes a decoded video table 420 including decoded pixel parameter values 422-427 for each pixel 431-436.
  • Decoding process 400 includes the mapping of a compressed (encoded) video table 320 to a decoded video table 420 using segment reference pixel values 350.
  • the pixel data 331-336 is decoded using table 350 and is respectively mapped to pixel data 431-436.
  • the process performed by decoder 118 to populate decoded video table 420 will be described in detail below.
  • Decoded video file 120 can be formatted for playback devices supporting different input protocols. Such protocols include NTSC, SECAM, PAL and HDTV as described above. Additionally, support for computer displays is provided. If a low-bandwidth communication link exists between display (playback device) 122 and decoder 118, decoder 118 may be configured, in one embodiment, to transmit a fraction of the lines per frame. In another embodiment, in order to minimize bandwidth consumption, encoder 112 may encode only a fraction of the lines per frame, such as one of two fields of video, resulting in a smaller compressed video file 114 for transmission over network 116. In other embodiments, the video frames may be encoded in their entirety but a field is removed and/or the screen resolution is reduced prior to transmission over network 116.
  • frames may be dropped prior to transmission.
  • a file encoded at 24 frames per second may be reduced to 12 frames per second by dropping every other frame prior to transmission.
  • These embodiments may be particularly useful when playback device 122 is a cellular telephone or other wireless device, requiring high quality video over low bandwidth networks, such as GSM, CDMA, and TDMA.
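A minimal sketch of these two reductions, assuming each frame is represented as a list of pixel rows (lines); this illustrates the idea only, not the patent's transmission format:

```python
def reduce_for_low_bandwidth(frames: list) -> list:
    """Halve the frame rate (e.g., 24 fps -> 12 fps) and keep one of two fields."""
    kept_frames = frames[::2]                     # drop every other frame
    return [frame[::2] for frame in kept_frames]  # keep every other line (one field)
```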
  • Figure 5 illustrates a flowchart of encoding process 500 for encoding live or streaming video content, according to one embodiment of the present invention.
  • encoder 112 receives raw video data 106 for encoding.
  • Encoder 112 then provides compressed video file 114, including frame group header, segment reference pixel values 350, frame group table 320, and any additional parameters and optional information described above, to decoder 118 via network 116.
  • encoder 112 receives the digitized video data as raw video data 106.
  • encoder 112 determines from raw video data 106 the video format and frame information, and creates a frame group file in which a frame group header, a table of scaled pixel parameter values, and reference pixels will be stored.
  • the audio is also stored in the frame group file.
  • Raw video data 106 may be of any format known in the art such as MPEG (Moving Picture Experts Group), MJPEG (moving JPEG (Joint Photographic Experts Group)), QuickTime's AVI (audio video interleaved), among others.
  • encoder 112 receives raw pixel data for frames 210 as further illustrated in raw pixel data tables 220.
  • encoder 112 determines, pixel by pixel and per segment, the dominant color of each pixel by examining each pixel's color component values.
  • pixel one data 231 includes a red color component value 2231 of 10,000, a green component value 2241 of 2,000, and a blue component value 2251 of 500. Therefore, in one embodiment, pixel 1's dominant color value would correspond to the highest numerical value among the three color values (RGB), which is the red value 2231 of 10,000. In other embodiments, other techniques for calculating the dominant color may be used, such as weighted color component value comparisons.
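A minimal sketch of this highest-component rule (as the text notes, weighted comparisons are an alternative embodiment):

```python
def dominant_color(r: int, g: int, b: int) -> tuple:
    """Return the dominant color label and its raw value: the highest of R, G, B."""
    return max((("R", r), ("G", g), ("B", b)), key=lambda kv: kv[1])

# Pixel one from table 220: R=10,000, G=2,000, B=500 -> dominant red.
assert dominant_color(10_000, 2_000, 500) == ("R", 10_000)
```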
  • the current pixel's color component values are compared to the highest previously stored values for that color component in order to determine the segment reference pixels corresponding to the most intense pixels for each color for each segment.
  • to qualify as the black segment reference pixel, the color component values would all have to be above a threshold value.
  • black segment reference pixel 364, for example, has red, green, and blue values of 9,000, 8,000, and 8,500, respectively. Although these may not be the highest values for each individual color (e.g., red segment reference pixel 361 has a red value of 10,000), the black segment reference pixel corresponding to the most intense black pixel of the segment is the pixel with the highest values across all three color components, red, green, and blue.
  • a white segment reference pixel and white dominant pixel table values are based upon the color component values being below a threshold value.
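One way to code the reference-pixel bookkeeping described above. The threshold is an assumed value, and the black test follows the text's convention (all components above a threshold, with the aggregate comparison deciding between candidates):

```python
BLACK_THRESHOLD = 7_500  # assumed for illustration; the text leaves the threshold unspecified

def update_references(refs: dict, pixel: dict) -> None:
    """Track, per segment, the most intense pixel for each reference color.

    refs maps 'R'/'G'/'B'/'K' to the best pixel seen so far; pixel carries
    'r', 'g', 'b' plus any other parameters (luminance, chrominance, coordinates).
    """
    r, g, b = pixel["r"], pixel["g"], pixel["b"]
    label, value = max((("R", r), ("G", g), ("B", b)), key=lambda kv: kv[1])
    best = refs.get(label)
    if best is None or value > max(best["r"], best["g"], best["b"]):
        refs[label] = pixel  # new most intense pixel for this dominant color
    # Black reference: all components above the threshold; highest aggregate wins.
    if min(r, g, b) > BLACK_THRESHOLD:
        k = refs.get("K")
        if k is None or r + g + b > k["r"] + k["g"] + k["b"]:
            refs["K"] = pixel
```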
  • a segment may be defined as any portion of a frame or frames.
  • a segment may be defined as the first line of a frame, as shown in frame 2101, pixels 1 to 5.
  • a segment may be multiple frames, two lines of a frame or half of a frame. It should be appreciated that the segment size definition may be optimized to accommodate a variety of systems to minimize encoder processor overhead and maximize the frame transfer rate for streaming video content over low-bandwidth connections.
  • pixel parameter rows 231 and 232 each indicate that the dominant color for each pixel is red, based upon a comparison of their respective RGB values. However, the red value for pixel one (10,000) is greater than that of pixel two (9,000), and pixel one therefore would remain as the red segment reference pixel, as shown in table 350 of Figure 3B.
  • the segment reference pixel also retains its other pixel parameters, such as green color component value 3541, blue color component value 3551, luminance value 3561, and chrominance value 3571. In other embodiments, all or some of these values may be scaled or otherwise manipulated to decrease table size or alter display characteristics.
  • the pixel parameters are scaled down and stored in the table 320.
  • the scaled pixel values include scaled dominant color value 3231, scaled luminance value 3261, scaled chrominance value 3271, and a calculated color depth value 3281.
  • only the dominant color value, luminance value, and chrominance value are scaled down and stored in the table 320.
  • all of the raw pixel parameter values are scaled down and stored within the table 320, including the non-dominant color values.
  • pixel parameters 231-235 are scaled down into a one through ten (1-10) scale, as shown with scaled pixel parameters 331-335 of table 3201.
  • pixel parameter row 233 of table 2201 indicates the dominant pixel color is green, with a green color component value of 8,000 and luminance and chrominance values of 490 and 510, respectively. If full-scale raw color values were 10,000, then the dominant color value may be rounded to the nearest thousand and divided by the full scale to produce a 1-10 value. For example:
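A worked instance consistent with the values of pixel three in tables 220 and 320 (rounding luminance and chrominance to the nearest hundred, and the 1,000 full scale assumed for them, are illustrative assumptions):

Green: 8,000 rounded to the nearest thousand = 8,000; (8,000 / 10,000) × 10 = 8, stored as G8
Luminance: 490 ≈ 500; (500 / 1,000) × 10 = 5
Chrominance: 510 ≈ 500; (500 / 1,000) × 10 = 5
Color depth: (5 + 5) / 2 = 5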
  • the color depth 328 is calculated based upon the average of the scaled down luminance and chrominance values 326, 327, as illustrated in table 3201. In another embodiment, the calculation is performed at decoder 118. In yet other embodiments, the raw values may be scaled into any number of ranges, such as a 1-25 scale.
  • encoder 112 checks the current pixel parameter values against previously stored values in the table. If the scaled down parameter values are unique, at block 516, encoder 112 writes the parameter values for the pixel into table 320. However, if an identical or sufficiently similar (e.g., within a tolerance amount) table entry already exists, encoder 112, at block 518, creates an entry for the current pixel that refers to the previously encoded pixel in table 320. For example, with reference to Figure 3A, pixel parameter row 234, if scaled according to the process described above, would have identical scaled dominant pixel color, luminance, and chrominance values to those of pixel parameter row 233.
  • encoder 112 inserts a reference to the previously encoded pixel, as shown with reference to table 3201, row 334. It should be appreciated that in dealing with tens of thousands of pixels, the combination of scaling down the dominant color 323, luminance 326, and chrominance values 327, in addition to inserting pointers for redundant pixel values, will result in a significant reduction in the size of encoded pixel table 320 over that of raw pixel table 220, and thus the amount of bandwidth required to transmit this information to decoder 118 is reduced.
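A sketch of this redundancy check, assuming table entries are simple tuples (a tolerance of 0 reproduces the identical-entry case; a positive tolerance gives the sufficiently-similar variant):

```python
TOLERANCE = 0  # 0: identical entries only; >0: entries similar within a tolerance

def add_entry(table: list, entry: tuple) -> None:
    """entry = (dominant_label, scaled_value, scaled_luminance, scaled_chrominance)."""
    for i, prior in enumerate(table):
        if prior[0] == entry[0] and all(
            abs(a - b) <= TOLERANCE for a, b in zip(prior[1:], entry[1:])
        ):
            table.append(("ref", i))  # pointer to the earlier matching entry
            return
    table.append(entry)               # unique values: store the entry itself
```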
  • Encoder 112 at block 520, checks whether or not the encoding has reached the end of the segment. If the end of the segment has not been reached, then encoder 112 indexes to the next pixel corresponding to the next pixel parameter row and repeats blocks 506 through 518.
  • encoder 112 retrieves the segment reference pixel values corresponding to the most intense dominant pixel colors for the segment and writes those segment reference pixels to the frame group file.
  • the coordinates assigned to a segment reference pixel are the coordinates of the pixel prior to a pixel color change within the segment, or if there is not a color change leading up to the end of the segment, the segment reference pixel coordinates for that color are the coordinates of the last pixel of the segment.
  • the segment reference pixels may be stored, by coordinate references or otherwise, according to any programming method that would allow for the values to be scaled according to the encoding method described above.
  • if a segment has fewer than four reference pixel colors represented therein, then there may be fewer than four segment reference pixels associated with that segment. For example, if a segment includes a row of five pixels, as illustrated with reference to Figures 2 and 3, table 320 illustrates that the segment only includes dominant color values of red and green and therefore will only have red and green segment reference pixels, as further illustrated in Figure 3B, segment reference pixel data 361 and 362. Therefore, in this example, encoder 112 would only write segment reference pixel data corresponding to the most intense red and green pixel colors of the segment to the frame group file.
  • encoder 112 determines if it has reached the end of the frame in the encoding process. If the process has not reached the end of the frame, encoder 112 indexes to the next segment and repeats blocks 504 through 522. If the end of the frame has been reached, at block 526, encoder 112 determines whether it has encoded the entire frame group. If the entire frame group has not been encoded, encoder 112 indexes to the next frame and repeats blocks 504 through 524. However, if the end of the frame group has been reached, encoder 112, at block 528, inserts a pointer used by decoder 118 to identify the next frame group for decoding.
  • encoder 112 communicates the frame group in the compressed video file 114 through network 116 to decoder 118.
  • encoder 112 begins encoding the next frame group and repeats blocks 504 through 528.
  • the frame group file includes multiple tables including multiple frames.
  • a table may include pixel information for 25 frames and a frame group may include five tables thus equalling 125 frames per frame group.
  • FIG. 6 illustrates a flowchart of a decoding process 600, according to one embodiment of the present invention.
  • decoder 118 receives compressed video file 114 through network 116. After decoding, decoder 118 supplies decoded video data 120 to playback device 122.
  • Decoding process 600 begins at block 602 by receiving and caching (temporarily storing) compressed video file 114 from network 116.
  • decoder 118 begins decoding the scaled table 320 of pixel parameter values beginning at the first pixel of the first frame.
  • decoder 118 reads from the table 320 the pixel location, reference pixel color value, luminance, and chrominance.
  • decoder 118 scales the corresponding segment reference pixel values of table 350 according to the table 320 of encoded pixel parameter values.
  • decoder 118 uses the encoded pixel parameter values of table 3201 and the segment reference pixels of table 3501 to generate decoded pixel parameter values, as illustrated in table 4201. For example, using the scaled dominant color value G8 of pixel three, the scaled luminance and chrominance of 5, and the green segment reference pixel 362 results in the decoded pixel three values of table 4201, as follows:
  • Pixel parameter values of G8 (G: use the green segment reference pixel), luminance (Lm) 5, and chrominance (Cm) 5, are decoded using segment reference pixel G of table 3501: R600, G10000, B740, Lm600, Cm400. The dominant G, together with Lm and Cm, is scaled according to the table entry, and in this example the non-dominant R and B are scaled by the same factor of 0.8, similar to the calculation for G, above:
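The arithmetic, made explicit (the decoded figures follow from the values stated above and are illustrative):

G = 10,000 × 8/10 = 8,000
Lm = 600 × 5/10 = 300
Cm = 400 × 5/10 = 200
R = 600 × 0.8 = 480; B = 740 × 0.8 = 592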
  • the scaled values of the table may include the non-dominant colors that may also be decoded with reference to the segment reference pixels.
  • the segment reference pixel values are the original raw full-scale values, and are either communicated with the table 320 of scaled values or are resident within the decoder system.
  • where a table entry points to a previously decoded pixel, decoder 118 duplicates the decoded results of the previous pixel entry.
  • Decoder 118 indexes to the next pixel to decode in the segment if it is determined, at block 610, that the segment has not been fully decoded. If the end of the segment has been reached, decoder 118 determines at block 612 if the end of the frame has been reached. If not, then decoder 118 begins decoding the next segment of the frame using the process described above. If the entire frame has been decoded, decoder 118 determines, at block 614, if the entire frame group has been decoded. If not, decoder 118 begins decoding the next frame in the table 320.
  • decoder 118 receives and decompresses (if necessary) any audio data associated with the previously decoded frame group.
  • decoder 118 determines if the frame data requires reformatting for display.
  • the user of the display device 122 configures decoder 118 to format decompressed video data 120 to accommodate various playback devices 122, such as Microsoft's Windows Media Player. If reformatting is required, reformatting is executed at block 620, and decoder 118, at block 622, synchronizes the audio and writes the decoded frame group to playback device 122.
  • decoder 118 After decoding and displaying the decoded frame group, according to one embodiment of the present invention, decoder 118 at block 624, reads from the frame group the pointer to the next frame group for decoding, and clears the previously decoded frame group from the cache. In one embodiment, decoder 118 may read a trailer appended to the communicated file. The trailer may provide decoder 118 with audio/video information, such as the logical location or name of the next frame group to decode, the number of frames and or files remaining in the encoded video, index information to the next file, or other audio/video information related to playback.
  • Elements of the present invention may be included within a client-server based system 700 such as that illustrated in Figure 7.
  • One or more servers 710 communicate with a plurality of clients 730-735.
  • Clients 730-735 may transmit and receive data from servers 710 over a variety of communication media including (but not limited to) a local area network 740 and/or a larger network 725 (e.g., the Internet).
  • Alternative communication channels such as wireless communication via GSM, TDMA, CDMA, Bluetooth, IEEE 802.11, or satellite broadcast (not shown) are also contemplated within the scope of the present invention.
  • Servers 710 may include a database for storing various types of data. This may include, for example, specific client data (e.g., client account information and client preferences) and/or more general data.
  • the database on servers 710 in one embodiment runs an instance of a Relational Database Management System (RDBMS), such as Microsoft™ SQL Server, Oracle™, or the like.
  • a user/client may interact with and receive feedback from servers 710 using various different communication devices and/or protocols.
  • a user connects to servers 710 via client software.
  • the client software may include a browser application such as Netscape Navigator or Microsoft Internet Explorer on the user's personal computer, which communicates to servers 710 via the Hypertext Transfer Protocol (hereinafter "HTTP").
  • software such as Microsoft's Word, PowerPoint, or other applications for composing documents and presentations may be configured as the client decoder/player.
  • clients may communicate with servers 710 via cellular phones and pagers (e.g., in which the necessary transaction software is embedded in a microchip), handheld computing devices, and/or touch-tone telephones.
  • Servers 710 may also communicate over a larger network (e.g., network 725) to other servers 750-752.
  • This may include, for example, servers maintained by businesses to host their Web sites, e.g., content servers.
  • Network 725 may include router 720.
  • Router 720 forwards data packets from one local area network (LAN) or wide area network (WAN) to another. Based on routing tables and routing protocols, router 720 reads the network address in each IP packet and makes a decision on how to send it based on the most expedient route. Router 720 works at layer 3 in the protocol stack.
  • components illustrated in Figure 1 may be distributed throughout network (client-server based system) 700.
  • video sources may be connected to any client 730-735 or 760-762, or server 710 or 750-752.
  • Live feed source 104, encoder 112, decoder 118 and display 122 may reside in any client or server, as well.
  • all or some of the components of Figure 1 may be fully contained within a single server or client.
  • servers 750-752 host the video acquisition device which supplies live feed source 104 and encoder 112. Video sources connected to clients 760-762 provide source video to servers 750-752. Servers 750-752 encode and compress live source video 104 and deliver compressed video file 114 upon a client request. Upon a request from clients 730-733, servers 750-752 transmit compressed video file 114 over networks 725, 740 to clients 730-733 via server 710. In addition, server 710 and clients 730-733 may be connected via a dial-up connection between 9.6 kbps and 56 kbps. Clients 730-733 host decoder 118 and, upon receiving compressed video file 114, decode file 114 and provide decoded video file 120 to an attached playback device 122.
  • encoder 112, decoder 118 and video acquisition device (live feed source) 104 may exist in software executed by a general purpose processor, a dedicated video processor provided on an add-on card to a personal computer, a PCMCIA card, an ASIC (application specific integrated circuit) or similar devices.
  • decoder 118 may reside as a software program running independently, or as a plug-in to a web browser. Decoder 118 may be configured to format its video output to have compatibility with existing playback devices that support motion JPEG, MPEG, MPEG-2, MPEG-4 and JVT standards.
  • a computer system 800 representing exemplary clients 730-735 and/or servers (e.g., servers 710), in which elements of the present invention may be implemented will now be described with reference to Figure 8.
  • Computer system 800 includes a system bus 820 for communicating information, and a processor 810 coupled to bus 820 for processing information.
  • Computer system 800 further includes a random access memory (RAM) or other dynamic storage device 825 (referred to herein as main memory), coupled to bus 820 for storing information and instructions to be executed by processor 810.
  • Main memory 825 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 810.
  • Computer system 800 also may include a read only memory (ROM) and/or other static storage device 826 coupled to bus 820 for storing static information and instructions used by processor 810.
  • a data storage device 827 such as a magnetic disk or optical disc and its corresponding drive may also be coupled to computer system 800 for storing information and instructions.
  • Computer system 800 can also be coupled to a second I/O bus 850 via an I/O interface 830.
  • Multiple I/O devices may be coupled to I/O bus 850, including a display device 843, an input device (e.g., an alphanumeric input device 842 and/or a cursor control device 841). For example, video news clips and related information may be presented to the user on display device 843.
  • the communication device 840 is for accessing other computers (servers or clients) via a network 725, 740.
  • Communication device 840 may include a modem, a network interface card, or other well-known interface device, such as those used for coupling to Ethernet, token ring, or other types of networks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for encoding/decoding a frame (210n) of audio/video data (106), segment by segment, including a number of pixels (222n) each having a plurality of pixel color components (223n to 227n) by creating a frame group table (220n) in which each pixel (222n) entry includes a dominant pixel color component (323n) of the plurality of pixel color components (223n to 227n), and determining a set of segment reference pixels (361n to 364n) for each encoded segment, wherein each one of the segment reference pixels (361n to 364n) is a pixel within each one of the encoded segments having a most intense dominant pixel color value.

Description

SYSTEM AND METHOD FOR ENCODING/DECODING LIVE AUDIO/VIDEO INFORMATION
FIELD OF THE INVENTION
The present invention relates generally to communication systems and, in particular, to a system and method for encoding/decoding live video for transmission over low bandwidth communication links.
BACKGROUND OF THE INVENTION
Video signals can be digitized, encoded, and subsequently decoded in a manner which significantly decreases the number of bits necessary to represent a decoded reconstructed video without noticeable, or with acceptable, degradation in the reconstructed video. Video coding is an important part of many applications such as digital television transmission, video conferencing, video databases, etc.
In video conferencing applications, for example, a video camera is typically used to capture a series of images of a target, such as a meeting participant or a document. The series of images is encoded as a data stream and transmitted over a communications channel to a remote location. For example, the data stream may be transmitted over a phone line, satellite, an integrated services digital network (ISDN) line, or the Internet.
In general, connection of a user interface device to the Internet may be made by a variety of communication channels, including twisted pair telephone lines, coaxial cable, and wireless signal communication via local transceivers or orbiting satellites. Most user interface device Internet connections are made by relatively low-bandwidth communication channels, mainly twisted pair telephone lines, due to the existing infrastructure of such telephone lines and the cost of implementing high-bandwidth infrastructure. This constrains the type of information that may be presented to users via the Internet connection, because video transmissions using presently available coding techniques generally require greater bandwidth than twisted pair telephone wires can provide for optimal viewing.
The encoding process is typically implemented using a digital video coder/decoder (codec), which divides the images into blocks and compresses the blocks according to a video compression standard, such as the ITU-T H.263 and H.261 standards. In compression schemes of this type, a block may be compressed independent of the previous image or as a difference between the block and part of the previous image. In a typical video conferencing system, the data stream is received at a remote location, where it is decoded into a series of images, which may be viewed at the remote location. Depending on the equipment used, this process typically occurs at a rate of one to thirty frames per second.
One technique widely used in video systems is hybrid video coding and decoding. An efficient hybrid video coding system is based on the ITU-T Recommendation H.263. The ITU-T Recommendation H.263 adopts a hybrid scheme of motion-compensated prediction to exploit temporal redundancy and transform coding using the discrete cosine transform (DCT) of the remaining signal to reduce spatial redundancy. Half pixel precision is used for the motion compensation, and variable length coding is used for the symbol representation. However, these techniques still do not provide adequate results for low-bandwidth connections such as dial-up connections or wireless device networks (e.g., GSM or CDMA) that have data transmission rates as low as 9.6 kilobits/sec, 14.4 kilobits/sec, 28.8 kilobits/sec or 56 kilobits/sec. For users at the end of a dial-up connection or wireless network, high quality video takes extraordinary amounts of time to download. Streaming high quality video is nearly impossible, and providing live video feeds is generally unfeasible.
SUMMARY OF THE INVENTION

A method and apparatus, according to one embodiment of the present invention, are configured to encode a live or streamed video stream by encoding frames of audio/video data, segment by segment, each frame including a number of pixels and each pixel having a plurality of pixel color components. The encoding process includes creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components. For each encoded segment of the frame, the encoder determines a set of segment reference pixels, wherein each segment reference pixel is a pixel within the segment having a most intense dominant pixel color.
In one embodiment of the present invention, prior to encoding the pixel data, the encoder creates a frame group file to store a header, the frame table and segment reference pixels. In various embodiments, the encoder may also store any associated audio/video data, synchronization information, or tags into the frame group file or may create one or more separate files for such information. The plurality of pixel color components stored within the table may include any or all of luminance, chrominance, and color depth information. Prior to entry within the frame group table, the encoder scales down each entry to reduce the amount of stored data. In another embodiment, the non-dominant color values are also scaled down and entered into the table.
In one embodiment of the present invention, determining the set of segment reference pixels includes comparing, on a pixel by pixel basis for each segment, a current pixel color value with a previously stored dominant pixel color value and storing the plurality of pixel color components and pixel parameters of the pixel with the most intense dominant pixel color component.
In one embodiment of the present invention, the plurality of pixel color components includes at least one of the sets of primary color components: red, green, and blue; or cyan, magenta, and yellow. The segment reference pixels include the primary color components and black. Black may be determined by comparing an average or aggregation of the red, green, and blue pixel component values to a black threshold value.
Among various embodiments of the present invention, the encoded segment of a frame may include a line of a frame, a half of a frame, or other fraction of a frame.
In one embodiment of the present invention, the encoder writes a pointer to the next frame group within the frame group file to ensure the decoder decodes the frame groups in the correct sequence.
In one embodiment of the present invention, redundant encoded pixel values of the frame group table share common table entries and therefore share identical dominant pixel color components and identical pixel parameter values. In another embodiment, the redundant encoded pixel values share dominant pixel color components and pixel parameter values that are similar to one another within a tolerance range.
According to a further aspect of the present invention there is provided an encoder, including: means to create, pixel by pixel, a frame table of encoded segments of audio/video data, wherein each pixel is represented by a single pixel color component; and means to create a set of color scaling pixels for each encoded segment, wherein each color scaling pixel is selected based upon a pixel having a most significant color intensity value with respect to similarly colored pixels within the segment.
According to yet a further aspect of the present invention there is provided a machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: encoding a frame of audio/video data, segment by segment, including a number of pixels each having a plurality of pixel color components by: creating a frame group table in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; and determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels is a pixel within each one of the encoded segments having a most intense dominant pixel color value.
According to yet a further aspect of the present invention there is provided a method of communicating live audio/visual information over a communications link, including: encoding, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value; communicating the frame group table and the segment reference pixels over a network to a receiver; and at the receiver, decoding the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
According to yet a further aspect of the present invention there is provided a system, including: an encoder to encode, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components and to determine a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value; a server to communicate the frame group table and the segment reference pixels over a network to a receiver; and a decoder coupled to the receiver to decode the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels.

According to yet a further aspect of the present invention there is provided a method, including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
According to yet a further aspect of the present invention there is provided a decoder, including: means to generate, pixel by pixel and segment by segment, a frame of audio/video data based upon a table of encoded pixel parameter values, wherein each pixel is represented by a single color entry; and means to scale a representative set of segment reference pixel values according to the table of encoded pixel parameter values, wherein a segment is a fractional portion of the frame of audio/video data.
According to yet a further aspect of the present invention there is provided a machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example, and not limitation, and will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Figure 1 is a block diagram of an exemplary system for compressing streamed or live video, according to one embodiment of the present invention;
Figure 2 illustrates a sequence of video frames with its corresponding raw video data, according to one embodiment of the invention;
Figure 3A illustrates the encoding of a raw video table, according to one embodiment of the present invention;
Figure 3B illustrates a segment reference pixel table, according to one embodiment of the present invention;
Figure 4 illustrates the decoding of a compressed video file, according to one embodiment of the present invention;
Figure 5 is a flow diagram showing an example of an encoding process, according to one embodiment of the present invention;
Figure 6 is a flow diagram illustrating an example of a decoding process, according to one embodiment of the present invention;
Figure 7 illustrates an exemplary network architecture for use according to one embodiment of the present invention; and
Figure 8 illustrates an exemplary computer architecture for use according to one embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A system and method for encoding video are described. The present encoding system and method overcome prior deficiencies in streaming live video content by encoding and decoding video data such that high-quality video transmission over low bandwidth communication links is possible. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of acts leading to a desired result. The acts are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention can be implemented by an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include a general-purpose computer, selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
The algorithms and processes presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. For example, any of the methods according to the present invention can be implemented in hard-wired circuitry, by programming a general-purpose processor or by any combination of hardware and software. One of skill in the art will immediately appreciate that the invention can be practiced with computer system configurations other than those described below, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, DSP devices, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. The required structure for a variety of these systems will appear from the description below.
The methods of the invention may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, application, etc.), as taking an action or causing a result. Such expressions are merely a shorthand way of saying that execution of the software by a computer causes the processor of the computer to perform an action or produce a result.
Figure 1 is a block diagram of an exemplary system 100 for live video compression, according to one embodiment of the present invention. Video compression system 100 is designed to encode and deliver high quality live video over low bandwidth transmission links (e.g., 9.6-56 kbps). In one embodiment of the present invention, video compression system 100 obtains video from a live feed source 104, such as a camera recording a live sporting event. Among varying embodiments, the source contributing video data to be streamed "live" may also be any device capable of delivering video content, such as a digital versatile disc (DVD), a computer storage device, or digital video tape. It should be noted that analog video storage devices may also be used, so long as the video stored thereon is first converted to a digital format prior to "live" encoding.
Live feed source 104 produces digital output video signals in a raw data file format 106. Generally, audio signals accompany the video signals from source devices such as live feed source 104. The audio signals may be digitized and/or compressed and provided along with raw video data 106, either in a separate file or appended to the video file. In one embodiment of the present invention, the audio data may be processed independent of raw video data 106 according to any audio compression method including the MPEG (Moving Picture Experts Group) "MP3" or Microsoft's "wav" format. Such audio may be synchronized with video data file 106 at any point within compression system 100.
Raw video data 106, including a start stream header, is provided to an encoder 112. The start stream header is included at the start of the stream of raw video data 106 and may include information regarding the audio data and raw video data 106, such as video frame starting and ending points, video tag information, the number of video frames per second, frame resolution (i.e., the number of pixels per frame), color depth information, audio synch information, and similar data regarding the video data stream that may be used by encoder 112.
Compression system 100 uses encoder 112 to compress raw video data 106 for streaming video to a decoder 118 at or near real-time. The details of the encoding process performed by encoder 112 will be discussed below. Encoder 112 produces a compressed or encoded video file 114, which may include, for each frame group, segment reference pixel values, encoded pixel data for all frames within the frame group, and header information for an encoded frame group, such as resolution settings for decoder 118 and audio/video synch information. In another embodiment, a trailer within compressed video file 114 may be generated that may include other audio/video information, such as a pointer identifying the next frame group to be decoded. The majority of compressed video file 114 is a frame group table of pixel parameter values for each pixel of each video frame comprising the acquired video. Encoder 112 may also produce an audio output file that may or may not be compressed, as discussed above. For purposes of this specification, reference to compressed video file 114 includes any audio/visual data, optional data, and/or header and trailer information. It should be appreciated, however, that in other embodiments, the header, the trailer, the compressed video data, and audio data may be written to separate files or appended to one or more files in any combination thereof.
Compressed video file 114 may be transmitted over a network 116 to decoder 118. Decoder 118 decodes compressed video file 114 to produce a decompressed video file 120 and synchronizes the audio data (if any) for audio/visual viewing via a playback device 122. Playback device 122 may be any device accepting video data, such as a television, cellular phone display, personal computer, personal digital assistant (PDA), automobile navigation system, or other similar device capable of displaying video data. The process performed by decoder 118 will be described in detail below.
Figure 2 illustrates a sequence of video frames with its corresponding table of raw video data, according to one embodiment of the present invention. Video sequence 200 is composed of a number of video frames 2101-210n. Each video frame 210 is composed of thousands of pixels. The exact number of pixels in a frame depends upon the digital video format and, more specifically, the frame resolution used. The present method and system support High Definition Digital TV (HDTV); National TV Standards Committee (NTSC) standards having 30 interlaced frames per second at 525 lines of resolution with an audio FM frequency and an MTS signal for stereo; Phase Alternating Line (PAL) standards having 25 interlaced frames per second at 625 lines of resolution; Séquentiel couleur à mémoire (SECAM); and similar protocols. It should be noted, however, that any analog audio/video format is to be converted to a digital audio/video format prior to encoding by encoder 112.
Live feed source 104 generates frames 210 and provides raw video data file 106 that describes video frames 2101-210n and their corresponding pixels. Raw video data file 106 contains raw video frame data tables 2201-220n, where n represents the number of frames, and each row 231-236 corresponds to a pixel in each video frame 210, where the pixels between 235 and the last pixel 236 have been omitted for clarity. The columns of each raw video frame data table 220 describe the pixel numbers 222, red color component values 223, green color component values 224, blue color component values 225 (RGB values), luminance values 226, and chrominance values 227 for each pixel in the respective frame 210. In alternate embodiments, other color spaces may be used, such as cyan, magenta, and yellow (CyMgYl).
As illustrated with reference to Figure 2, each pixel parameter 223-227 of each pixel 222 in each frame table 220 requires multiple bytes of information to be stored per pixel, thus creating large file sizes for multiple frames of video data. Considering high quality video requires at least a frame rate of 25 frames per second or 1,500 frames/minute, it should be apparent that the amount of storage and/or bandwidth required to stream and play an uncompressed or slightly compressed video file is quite large.
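To put the figures in perspective (the numbers below are assumptions chosen only for illustration): at a resolution of 640 x 480, a frame contains 307,200 pixels. If each row of table 220 occupies, say, 10 bytes, a single frame requires roughly 3 megabytes, and one minute of 25 frame-per-second video approaches 4.6 gigabytes before compression.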
Figure 3A illustrates a table of raw video data 2201 encoded into a frame group table 3201, according to one embodiment of the present invention. The pixel numbers in column 2221 are mapped to pixel numbers in column 3221. For each row, 231-236, of raw pixel data in table 2201 there is a corresponding row, 331-336, of compressed pixel data in table 3201, where the pixels between 235 (335) and the last pixel 236 (336) have been omitted for clarity. The pixel value sets (i.e., RGB values 2231-2251) are processed and mapped to dominant color value 3231, as illustrated by the pixel 1 value R10 in table 3201. The calculation of dominant color value 3231 will be discussed below. Luminance value 2261 is mapped to a scaled luminance value 3261, and chrominance value 2271 is mapped to a scaled chrominance value 3271. The calculation of scaled chrominance and luminance values 3271, 3261 will also be discussed below. Each compressed video table 320 may further include color depth values 328. In one embodiment, color depth value 328 is the average of the scaled chrominance and luminance values 327, 326. According to one embodiment, as encoder 112 populates compressed video table 320, if a row of compressed pixel data is determined to be identical or at least sufficiently similar to a previously populated row, encoder 112 places a pointer to the appropriate previously entered row, as illustrated with reference to row 334.
Figure 3B illustrates pixel reference value sets 3501-350n generated by encoder 112, where n is the number of encoded segments for each frame group. The segment reference pixel value sets 350, according to one embodiment of the present invention, may have up to four (4) reference pixel values corresponding to red 361, green 362, blue 363, and black 364 for each segment of a video frame 210. The segment reference pixels are selected based upon the video frame 210's most intense dominant pixel color values for each encoded segment, as illustrated by red 361, green 362, blue 363, and black 364. The most intense dominant pixel color value is based on the highest raw pixel color values. The black segment reference pixel 364, according to one embodiment of the present invention, may be determined by comparing the color component values (e.g., RGB) in aggregate. The segment reference pixel values may also include pixel parameter values, such as luminance value 3561 and chrominance value 3571 for each of the segment reference pixel colors 361-364. In other embodiments, the segment reference pixel values may also be scaled or, alternatively, the reference pixel values may be full-scale values corresponding to the raw data format. In alternate embodiments, additional reference values may be used for color depth or other similar graphics data or pixel parameters. Calculation of pixel reference value sets 350 will be discussed in greater detail below.
Figure 4 illustrates an exemplary decoding process 400 for a compressed video file 114, according to one embodiment of the present invention. Compressed video file 114 may include a frame group header, segment reference pixel values 350, and encoded video tables 320 for each video frame 210. Decoder 118 processes compressed video file 114 to provide a decoded video file 120. Decoded video file 120 includes a decoded video table 420 including decoded pixel parameter values 422-427 for each pixel 431-436. Decoding process 400 includes the mapping of a compressed (encoded) video table 320 to a decoded video table 420 using segment reference pixel values 350. The pixel data 331-336 is decoded using table 350 and is respectively mapped to pixel data 431-436. The process performed by decoder 118 to populate decoded video table 420 will be described in detail below.
Decoded video file 120 can be formatted for playback devices supporting different input protocols. Such protocols include NTSC, SECAM, PAL, and HDTV, as described above. Additionally, support for computer displays is provided. If a low-bandwidth communication link exists between display (playback device) 122 and decoder 118, decoder 118 may be configured, in one embodiment, to transmit a fraction of the lines per frame. In another embodiment, in order to minimize bandwidth consumption, encoder 112 may encode only a fraction of the lines per frame, such as one of two fields of video, resulting in a smaller compressed video file 114 for transmission over network 116. In other embodiments, the video frames may be encoded in their entirety, but a field is removed and/or the screen resolution is reduced prior to transmission over network 116. In yet another embodiment, frames may be dropped prior to transmission. For example, a file encoded at 24 frames per second may be reduced to 12 frames per second by dropping every other frame prior to transmission. These embodiments may be particularly useful when playback device 122 is a cellular telephone or other wireless device, requiring high quality video over low bandwidth networks, such as GSM, CDMA, and TDMA.
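As a non-limiting illustration of the frame dropping and field removal variants, consider the following Python sketch (the function names and frame representation are assumptions introduced here for clarity and are not part of the invention):

    def drop_alternate_frames(frames):
        # Reduce, e.g., a 24 frames-per-second sequence to 12 frames per
        # second by dropping every other frame prior to transmission.
        return frames[::2]

    def keep_single_field(frame_lines):
        # Transmit only one of the two interlaced fields by keeping, e.g.,
        # the even-numbered lines of each frame.
        return frame_lines[::2]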
Figure 5 illustrates a flowchart of encoding process 500 for encoding live or streaming video content, according to one embodiment of the present invention. As discussed with reference to Figure 1, encoder 112 receives raw video data 106 for encoding. Encoder 112 then provides compressed video file 114, including frame group header, segment reference pixel values 350, frame group table 320, and any additional parameters and optional information described above, to decoder 118 via network 116.
In one embodiment of the present invention, encoder 112 receives the digitized video data as raw video data 106. At block 502, encoder 112 determines from raw video data 106 the video format and frame information, and creates a frame group file in which a frame group header, a table of scaled pixel parameter values, and reference pixels will be stored. In another embodiment, the audio is also stored in the frame group file. Raw video data 106 may be of any format known in the art, such as MPEG (Moving Picture Experts Group), MJPEG (motion JPEG (Joint Photographic Experts Group)), QuickTime, or AVI (audio video interleave), among others.
For example, with reference to Figure 2, encoder 112 receives raw pixel data for frames 210 as further illustrated in raw pixel data tables 220. At block 504, encoder 112 determines, pixel by pixel and per segment, the dominant color of each pixel by examining each pixel's color component values. For example, pixel one data 231 includes a red color component value 2231 of 10,000, a green component value 2241 of 2,000, and a blue component value 2251 of 500. Therefore, in one embodiment, pixel 1's dominant color value would correspond to the highest numerical value among the three color values (RGB), which is the red value 2231 of 10,000. In other embodiments, other techniques for calculating the dominant color may be used, such as weighted color component value comparisons.

At block 506, the current pixel's color component values are compared to the highest previously stored values for that color component in order to determine the segment reference pixels corresponding to the most intense pixels for each color for each segment. In the case of a black pixel or segment reference pixel, according to one embodiment of the present invention, the color component values would all have to be above a threshold value. For example, black segment reference pixel 364 has red, green, and blue values of 9,000, 8,000, and 8,500, respectively. Although these values may not be the highest for each color (e.g., the red segment reference pixel 361 has a red value of 10,000), the black segment reference pixel, corresponding to the most intense black pixel of the segment, is the pixel with the highest values across all three color components, red, green, and blue. If any one of the values is below the threshold value, the higher of the remaining two values determines the color of the pixel. An exemplary threshold value may be eighty percent of the maximum color component value for each color (e.g., 80% of 10,000 = 8,000). In another embodiment, a white segment reference pixel and white dominant pixel table values are based upon the color component values being below a threshold value. Continuing at block 508, if the current color component value(s) is (are) not greater than the stored segment reference pixel value(s), the stored values remain unchanged. However, if the current color component value(s) is (are) greater than the stored value(s), then encoder 112 overwrites the current segment reference pixel values corresponding to that color component with the new values.
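The determinations of blocks 504 through 508 may be illustrated with the following Python sketch. The sketch is provided for illustration only and is not part of the claimed subject matter; the data structures, the 80% black threshold, and the use of the minimum color component as the black intensity measure are assumptions drawn from the example values above.

    FULL_SCALE = 10000
    BLACK_THRESHOLD = 0.8 * FULL_SCALE  # e.g., 80% of 10,000 = 8,000

    def dominant_color(pixel):
        # Block 504: a pixel is treated as black ('K') if all three
        # components exceed the threshold; otherwise the dominant color
        # is the highest component value.
        r, g, b = pixel['R'], pixel['G'], pixel['B']
        if min(r, g, b) >= BLACK_THRESHOLD:
            return 'K'
        return max((('R', r), ('G', g), ('B', b)), key=lambda c: c[1])[0]

    def update_segment_references(refs, pixel):
        # Blocks 506-508: keep, per color, the pixel having the most
        # intense dominant color seen so far within the segment.
        color = dominant_color(pixel)
        if color == 'K':
            intensity = min(pixel['R'], pixel['G'], pixel['B'])
        else:
            intensity = pixel[color]
        if color not in refs or intensity > refs[color][0]:
            refs[color] = (intensity, dict(pixel))  # store all pixel parameters
        return color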
A segment may be defined as any portion of a frame or frames. For example, with reference to Figure 2, a segment may be defined as the first line of a frame, as shown in frame 2101, pixels 1 to 5. Among various embodiments, a segment may be multiple frames, two lines of a frame, or half of a frame. It should be appreciated that the segment size definition may be optimized to accommodate a variety of systems to minimize encoder processor overhead and maximize the frame transfer rate for streaming video content over low-bandwidth connections.
An example illustrating the concept of storing the segment reference pixel values is shown with reference to Figures 3A and 3B, respectively. As shown in table 2201 of Figure 3A, pixel parameter rows 231 and 232 each indicate that the dominant color of the pixel is red, based upon a comparison of their respective RGB values. However, the red value of pixel one (10,000) is greater than that of pixel two (9,000), and pixel one therefore remains as the red segment reference pixel, as shown in table 350 of Figure 3B. The segment reference pixel also retains its other pixel parameters, such as green color component value 3541, blue color component value 3551, luminance value 3561, and chrominance value 3571. In other embodiments, all or some of these values may be scaled or otherwise manipulated to decrease table size or alter display characteristics.
After the dominant color of each pixel is determined and the color component values are compared to the stored segment reference pixel values, the pixel parameters, at block 512, are scaled down and stored in the table 320. In one embodiment, as illustrated with reference to Figure 3A, table 3201, the scaled pixel values include scaled dominant color value 3231, scaled luminance value 3261, scaled chrominance value 3271, and a calculated color depth value 3281. In one embodiment of the present invention, only the dominant color value, luminance value, and chrominance value are scaled down and stored in the table 320. In another embodiment, all of the raw pixel parameter values are scaled down and stored within the table 320, including the non-dominant color values.
In one embodiment of the present invention, as shown in Figure 3A, pixel parameters 231-235 are scaled down into a one through ten (1-10) scale, as shown with scaled pixel parameters 331-335 of table 3201. For example, pixel parameter row 233 of table 2201 indicates the dominant pixel color is green, with a green color component value of 8,000 and luminance and chrominance values of 490 and 510, respectively. If full-scale raw color values were 10,000, then the dominant color value may be rounded to the nearest thousand and divided by the full scale to produce a 1-10 value. For example:
Green dominant raw (Gd) value of 8,000 (note, a value of 8,200 would round to 8,000):

    Scaled Value (Gd) / 10 = 8000 / 10000; Scaled Gd = G8
As shown in scaled pixel parameter row 333 of table 3201, a dominant green color value of 8,000 becomes G8. Similarly, if the luminance and chrominance have full-scale values of 1,000, those values for pixel parameter row 233 would each become 5. For example:
Luminance (Lm) value of 490 rounds up to 500:

    Scaled Lm / 10 = 500 / 1000; Scaled Lm = 5 (a similar calculation applies for chrominance)
In one embodiment, the color depth 328 is calculated based upon the average of the scaled down luminance and chrominance values 326, 327, as illustrated in table 3201. In another embodiment, the calculation is performed at decoder 118. In yet other embodiments, the raw values may be scaled into any number of ranges, such as a 1-25 scale.
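Expressed as code, the scale-down of block 512 and the color depth calculation might take the following form (an illustrative Python sketch; the full-scale values of 10,000 for color and 1,000 for luminance and chrominance are taken from the worked examples above and are not limiting):

    def scale_to_ten(raw_value, full_scale):
        # Round to the nearest tenth of full scale, e.g., 8,200/10,000 -> 8
        # and 490/1,000 -> 5, as in the examples above.
        return round(raw_value / full_scale * 10)

    def encode_pixel_entry(color_letter, raw_color, raw_lum, raw_chrom):
        scaled_color = scale_to_ten(raw_color, 10000)   # e.g., 'G' + '8' = G8
        scaled_lum = scale_to_ten(raw_lum, 1000)
        scaled_chrom = scale_to_ten(raw_chrom, 1000)
        depth = (scaled_lum + scaled_chrom) / 2         # color depth 328
        return (color_letter + str(scaled_color), scaled_lum, scaled_chrom, depth)

For pixel parameter row 233 above, encode_pixel_entry('G', 8000, 490, 510) would return ('G8', 5, 5, 5.0), matching row 333 of table 3201.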
Once the pixel parameters are scaled down, and prior to storing the parameters in table 320, encoder 112 checks the current pixel parameter values against values previously stored in the table. If the scaled down parameter values are unique, at block 516, encoder 112 writes the parameter values for the pixel into table 320. However, if an identical or sufficiently similar (e.g., within a tolerance amount) table entry already exists, encoder 112, at block 518, creates an entry for the current pixel that refers to the previously encoded pixel in table 320. For example, with reference to Figure 3A, pixel parameter row 234, if scaled according to the process described above, would have scaled dominant pixel color, luminance, and chrominance identical to those of pixel parameter row 233. Therefore, encoder 112 inserts a reference to the previously encoded pixel, as shown with reference to table 3201, row 334. It should be appreciated that, in dealing with tens of thousands of pixels, the combination of scaling down the dominant color 323, luminance 326, and chrominance values 327, in addition to inserting pointers for redundant pixel values, will result in a significant reduction in the size of encoded pixel table 320 over that of raw pixel table 220, and thus the amount of bandwidth required to transmit this information to a decoder 118 is reduced.
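The uniqueness check of blocks 516 and 518 might be sketched as follows (illustrative Python only; the linear scan and the tolerance test shown are assumptions, and a practical implementation might instead use a hash table):

    def similar(a, b, tolerance):
        # Same scaled dominant color entry; luminance and chrominance
        # within the tolerance amount (tolerance 0 requires identity).
        return (a[0] == b[0]
                and abs(a[1] - b[1]) <= tolerance
                and abs(a[2] - b[2]) <= tolerance)

    def add_or_reference(table, entry, tolerance=0):
        for index, existing in enumerate(table):
            if existing[0] == 'SAME_AS':
                continue  # skip pointer entries
            if similar(existing, entry, tolerance):
                table.append(('SAME_AS', index))  # block 518: pointer entry
                return
        table.append(entry)  # block 516: unique entry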
Encoder 112, at block 520, checks whether or not the encoding has reached the end of the segment. If the end of the segment has not been reached, then encoder 112 indexes to the next pixel corresponding to the next pixel parameter row and repeats blocks 506 through 518.
At the end of each segment, at block 522, encoder 112 retrieves the segment reference pixel values corresponding to the most intense dominant pixel colors for the segment and writes those segment reference pixels to the frame group file. In one embodiment of the present invention, the coordinates assigned to a segment reference pixel are the coordinates of the pixel prior to a pixel color change within the segment, or if there is not a color change leading up to the end of the segment, the segment reference pixel coordinates for that color are the coordinates of the last pixel of the segment. In other embodiments, the segment reference pixels may be stored, by coordinate references or otherwise, according to any programming method that would allow for the values to be scaled according to the encoding method described above.
If a segment has fewer than four reference pixel colors represented therein, then there will be fewer than four segment reference pixels associated with that segment. For example, if a segment includes a row of five pixels, as illustrated with reference to Figures 2 and 3, table 320 illustrates that the segment only includes dominant color values of red and green and therefore will only have red and green segment reference pixels, as further illustrated in Figure 3B, segment reference pixel data 361 and 362. Therefore, in this example, encoder 112 would only write segment reference pixel data corresponding to the most intense red and green pixel colors of the segment to the frame group file.
Once encoder 112 writes the segment reference pixel data to the frame group file, encoder 112, at block 524, determines if it has reached the end of the frame in the encoding process. If the process has not reached the end of the frame, encoder 112 indexes to the next segment and repeats blocks 504 through 522. If the end of the frame has been reached, at block 526, encoder 112 determines whether it has encoded the entire frame group. If the entire frame group has not been encoded, encoder 112 indexes to the next frame and repeats blocks 504 through 524. However, if the end of the frame group has been reached, encoder 112, at block 528, inserts a pointer used by decoder 118 to identify the next frame group for decoding. Thereafter, encoder 112 communicates the frame group in the compressed video file 114 through network 116 to decoder 118. At block 530, encoder 112 begins encoding the next frame group and repeats blocks 504 through 528. In one embodiment, the frame group file includes multiple tables including multiple frames. For example, a table may include pixel information for 25 frames and a frame group may include five tables, thus equalling 125 frames per frame group.
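Combining the sketches above, the overall control flow of blocks 502 through 528 for one frame group might be outlined as follows (again purely illustrative; the frame group dictionary, the segment_length parameter, and the 'Lm'/'Cm' keys are assumptions introduced here, and the helper functions are those sketched earlier in this description):

    def encode_frame_group(frames, segment_length, next_group_name):
        # frames: list of frames, each a list of pixel dicts with keys
        # 'R', 'G', 'B', 'Lm', 'Cm'.
        group = {'header': {'frame_count': len(frames)},
                 'segment_refs': [], 'table': [],
                 'next_group': next_group_name}  # block 528: decoding order
        for frame in frames:                                    # block 526 loop
            for start in range(0, len(frame), segment_length):  # block 524 loop
                refs = {}
                for pixel in frame[start:start + segment_length]:
                    color = update_segment_references(refs, pixel)
                    raw = (min(pixel['R'], pixel['G'], pixel['B'])
                           if color == 'K' else pixel[color])
                    entry = encode_pixel_entry(color, raw,
                                               pixel['Lm'], pixel['Cm'])
                    add_or_reference(group['table'], entry)     # blocks 512-518
                group['segment_refs'].append(refs)              # block 522
        return group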
Figure 6 illustrates a flowchart of a decoding process 600, according to one embodiment of the present invention. As discussed with reference to Figure 1, decoder 118 receives compressed video file 114 through network 116. After decoding, decoder 118 supplies decoded video data 120 to playback device 122.
Decoding process 600 begins at block 602 by receiving and caching (temporarily storing) compressed video file 114 from network 116. At block 604, decoder 118 begins decoding the scaled table 320 of pixel parameter values, beginning at the first pixel of the first frame. At block 606, decoder 118 reads from the table 320 the pixel location, reference pixel color value, luminance, and chrominance. At block 608, decoder 118 scales the corresponding segment reference pixel values 350 according to the entries in the table 320 of encoded pixel parameter values.
For example, with reference to Figure 4, decoder 118 uses the encoded pixel parameter values of table 3201 and the segment reference pixels of table 3501 to generate decoded pixel parameter values as illustrated in table 4201. For example, using the scaled dominant color value G8 of pixel three, the scaled luminance and chrominance values of 5, and the green segment reference pixel 362 produces the decoded pixel three values of table 4201, calculated as follows:
Pixel parameter values of G8 (G: use the green segment reference pixel), luminance (Lm) 5, and chrominance (Cm) 5; from table 3501, segment reference pixel G: R600, G10000, B740, Lm600, Cm400. The non-dominant R and B remain the same; the dominant G, Lm, and Cm are scaled:

    Scaled G (Gs): 8 / 10 = Gs / 10000; Gs = 8000

    Scaled Lm: 5 / 10 = Lm / 600; Lm = 300

    Scaled Cm: 5 / 10 = Cm / 400; Cm = 200

Therefore, the decoded table entry would appear as illustrated in row 433 of table 4201 and is duplicated below:

Decoded Pixel 3: R600, G8000, B740, Lm300, Cm200
In another embodiment, R and B are scaled by the same factor of 0.8, similar to the calculation for Gs, above. In another embodiment, however, the scaled values of the table may include the non-dominant colors that may also be decoded with reference to the segment reference pixels. In other embodiments, the segment reference pixel values are the original raw full-scale values, and are either communicated with the table 320 of scaled values or are resident within the decoder system.
In the case where an entry in the table 320 of scaled pixel values is a reference pointer to a previous pixel entry, decoder 118 duplicates the decoded results of the previous pixel entry.
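Blocks 604 through 608, together with the pointer case just described, may be sketched in Python as follows (illustrative only; the entry and reference pixel formats are those assumed in the encoder sketches above, and the arithmetic reproduces the worked example):

    def decode_entry(entry, segment_refs, decoded):
        # Pointer entries duplicate a previously decoded pixel.
        if entry[0] == 'SAME_AS':
            decoded.append(decoded[entry[1]])
            return
        color_letter, scaled = entry[0][0], int(entry[0][1:])
        ref = segment_refs[color_letter][1]  # reference pixel parameters
        pixel = dict(ref)                    # non-dominant components unchanged
        if color_letter != 'K':
            # e.g., 8/10 * 10000 = 8000 for the G8 example above
            pixel[color_letter] = scaled / 10 * ref[color_letter]
        pixel['Lm'] = entry[1] / 10 * ref['Lm']  # e.g., 5/10 * 600 = 300
        pixel['Cm'] = entry[2] / 10 * ref['Cm']  # e.g., 5/10 * 400 = 200
        decoded.append(pixel)

Applied to the G8 entry of pixel three with the green segment reference pixel above, this yields R600, G8000, B740, Lm300, Cm200, as in table 4201.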
Decoder 118 indexes to the next pixel to decode in the segment if it is determined, at block 610, that the segment has not been fully decoded. If the end of the segment has been reached, decoder 118 determines, at block 612, if the end of the frame has been reached. If not, then decoder 118 begins decoding the next segment of the frame using the process described above. If the entire frame has been decoded, decoder 118 determines, at block 614, if the entire frame group has been decoded. If not, decoder 118 begins decoding the next frame in the table 320. If the entire frame group has been decoded, decoder 118, at block 616, receives and decompresses (if necessary) any audio data associated with the previously decoded frame group. At block 618, decoder 118 determines if the frame data requires reformatting for display. In one embodiment of the present invention, the user of the display device 122 configures decoder 118 to format decompressed video data 120 to accommodate various playback devices 122, such as Microsoft's Windows Media Player. If reformatting is required, reformatting is executed at block 620, and decoder 118, at block 622, synchronizes the audio and writes the decoded frame group to playback device 122.
After decoding and displaying the decoded frame group, according to one embodiment of the present invention, decoder 118, at block 624, reads from the frame group the pointer to the next frame group for decoding, and clears the previously decoded frame group from the cache. In one embodiment, decoder 118 may read a trailer appended to the communicated file. The trailer may provide decoder 118 with audio/video information, such as the logical location or name of the next frame group to decode, the number of frames and/or files remaining in the encoded video, index information for the next file, or other audio/video information related to playback.
Having discussed numerous illustrations of encoding and decoding functions according to the present method and system, a brief description of the communication network and computer architecture encompassing the present system is provided.

AN EXEMPLARY NETWORK ARCHITECTURE
Elements of the present invention may be included within a client-server based system 700 such as that illustrated in Figure 7. One or more servers 710 communicate with a plurality of clients 730-735. Clients 730-735 may transmit and receive data from servers 710 over a variety of communication media including (but not limited to) a local area network 740 and/or a larger network 725 (e.g., the Internet). Alternative communication channels such as wireless communication via GSM, TDMA, CDMA, Bluetooth, IEEE 802.11, or satellite broadcast (not shown) are also contemplated within the scope of the present invention.
Servers 710 may include a database for storing various types of data. This may include, for example, specific client data (e.g., client account information and client preferences) and/or more general data. The database on servers 710 in one embodiment runs an instance of a Relational Database Management System (RDBMS), such as Microsoft™ SQL Server, Oracle™ or the like. A user/client may interact with and receive feedback from servers 710 using various different communication devices and/or protocols. According to one embodiment, a user connects to servers 710 via client software. The client software may include a browser application, such as Netscape Navigator or Microsoft Internet Explorer, on the user's personal computer, which communicates with servers 710 via the Hypertext Transfer Protocol (hereinafter "HTTP"). Among other embodiments, software such as Microsoft's Word, PowerPoint, or other applications for composing documents and presentations may be configured as a client decoder/player. In other embodiments included within the scope of the invention, clients may communicate with servers 710 via cellular phones and pagers (e.g., in which the necessary transaction software is embedded in a microchip), handheld computing devices, and/or touch-tone telephones.
Servers 710 may also communicate over a larger network (e.g., network 725) to other servers 750-752. This may include, for example, servers maintained by businesses to host their Web sites, e.g., content servers such as
"yahoo.com." Network 725 may include router 720. Router 720 forwards data packets from one local area network (LAN) or wide area network (WAN) to another. Based on routing tables and routing protocols, router 720 reads the network address in each IP packet and makes a decision on how to send it based on the most expedient route. Router 720 works at layer 3 in the protocol stack.
According to one embodiment of the present method and system, components illustrated in Figure 1 may be distributed throughout network (client-server based system) 700. For example, video sources may be connected to any client 730-735 or 760-762, or server 710 or 750-752. Live feed source 104, encoder 112, decoder 118, and display 122 may reside in any client or server as well. Similarly, all or some of the components of Figure 1 may be fully contained within a single server or client.
In one embodiment, servers 750-752 host the video acquisition device that supplies live feed source 104 and encoder 112. Video sources connected to clients 760-762 provide source video to servers 750-752. Servers 750-752 encode and compress live source video 104 and deliver compressed video file 114 upon a client request. Upon a client 730-733 request, servers 750-752 transmit compressed video file 114 over networks 725, 740 to client 730-733 via server 710. In addition, server 710 and client 730-733 may be connected via a dial-up connection between 9.6 kbps and 56 kbps. Clients 730-733 host decoder 118 and, upon receiving compressed video file 114, decode file 114 and provide decoded video file 120 to an attached playback device 122. Numerous combinations may exist for placement of encoder 112, decoder 118, and the video acquisition device (live feed source) 104. Similarly, encoder 112, decoder 118, and live feed source 104 may exist in software executed by a general purpose processor, a dedicated video processor provided on an add-on card to a personal computer, a PCMCIA card, an ASIC (application specific integrated circuit), or similar devices. Additionally, decoder 118 may reside as a software program running independently, or as a plug-in to a web browser. Decoder 118 may be configured to format its video output to have compatibility with existing playback devices that support motion JPEG, MPEG, MPEG-2, MPEG-4, and JVT standards.
AN EXEMPLARY COMPUTER ARCHITECTURE
Having briefly described an exemplary network architecture which employs various elements of the present invention, a computer system 800 representing exemplary clients 730-735 and/or servers (e.g., servers 710), in which elements of the present invention may be implemented will now be described with reference to Figure 8.
One embodiment of computer system 800 includes a system bus 820 for communicating information, and a processor 810 coupled to bus 820 for processing information. Computer system 800 further includes a random access memory (RAM) or other dynamic storage device 825 (referred to herein as main memory), coupled to bus 820 for storing information and instructions to be executed by processor 810. Main memory 825 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 810. Computer system 800 also may include a read only memory (ROM) and/or other static storage device 826 coupled to bus 820 for storing static information and instructions used by processor 810.
A data storage device 827 such as a magnetic disk or optical disc and its corresponding drive may also be coupled to computer system 800 for storing information and instructions. Computer system 800 can also be coupled to a second I/O bus 850 via an I/O interface 830. Multiple I/O devices may be coupled to I/O bus 850, including a display device 843, an input device (e.g., an alphanumeric input device 842 and/or a cursor control device 841). For example, video news clips and related information may be presented to the user on display device 843.
The communication device 840 is for accessing other computers (servers or clients) via a network 725, 740. Communication device 840 may include a modem, a network interface card, or other well-known interface device, such as those used for coupling to Ethernet, token ring, or other types of networks.
In the foregoing specification, the invention has been described with reference to specific embodiments. It will, however, be evident that various modifications and changes can be made without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims:
1. A method, including: encoding a frame of audio/video data, segment by segment, including a number of pixels each having a plurality of pixel color components by: creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; and determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels is a pixel within each one of the encoded segments having a most intense dominant pixel color value.
2. The method of claim 1 , wherein each pixel entry further includes at least one pixel parameter value.
3. The method of claim 2, wherein the pixel parameter values include at least one of luminance, chrominance, and color depth information.
4. The method of claim 2 or claim 3, including scaling down each pixel parameter value prior to storing each pixel entry into the frame group table.
5. The method of any one of the preceding claims, including, prior to encoding the frame of audio/video data, creating a frame group file to store a header, the frame group table, and each set of segment reference pixels.
6. The method of claim 5, including, after creating the frame group table, writing a pointer to a next frame group to the frame group file.
7. The method of claim 5 or claim 6, including storing audio data within the frame group file.
8. The method of any one of the preceding claims, including encoding audio data into the frame group table.
9. The method of any one of the preceding claims, including encoding audio synchronization tags into the frame group table.
10. The method of any one of the preceding claims, wherein determining the set of segment reference pixels includes comparing, on a pixel by pixel basis for each segment, a current pixel color value with a previously stored dominant pixel color value and storing the plurality of pixel color components and pixel parameters of the pixel with the most intense dominant pixel color component.
11. The method of any one of the preceding claims, wherein each pixel entry further includes scaled down non-dominant pixel color components.
12. The method of any one of the preceding claims, wherein the plurality of pixel color components includes at least one of the sets of primary color components, red, green, and blue, or cyan, magenta, and yellow.
13. The method of claim 12, wherein the dominant pixel color components include red, green, blue, and black.
14. The method of claim 13, wherein the dominant pixel color component black is determined by comparing an average of the red, green, and blue pixel component values to a black threshold value.
15. The method of any one of the preceding claims, wherein an encoded segment is a row of pixels within the frame of audio/video data.
16. The method of any one of the preceding claims, wherein creating a frame group table further includes scaling down the dominant pixel color component.
17. The method of any one of the preceding claims, wherein the frame group table includes pixel entries for a plurality of frames.
18. The method of any one of the preceding claims, wherein redundant encoded pixel values of the frame group table share common table entries.
19. The method of claim 18, wherein redundant encoded pixel values share identical dominant pixel color components and identical pixel parameter values.
20. The method of claim 18, wherein redundant encoded pixel values share dominant pixel color components and pixel parameter values that are similar to one another within a tolerance range.
21. An encoder, including: means to create, pixel by pixel, a frame table of encoded segments of audio/video data, wherein each pixel is represented by a single pixel color component; and means to create a set of color scaling pixels for each encoded segment, wherein each color scaling pixel is selected based upon a pixel having a most significant color intensity value with respect to similarly colored pixels within the segment.
22. The encoder of claim 21 , wherein the frame table data of encoded segments of audio/video data for each pixel includes pixel parameter values for at least one of, luminance, chrominance, and color depth.
23. The encoder of claim 22, wherein the pixel parameter values are scaled down values.
24. A machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: encoding a frame of audio/video data, segment by segment, including a number of pixels each having a plurality of pixel color components by: creating a frame group table in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components; and determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels is a pixel within each one of the encoded segments having a most intense dominant pixel color value.
25. A method of communicating live audio/visual information over a communications link, including: encoding, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components;
determining a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value;
communicating the frame group table and the segment reference pixels over a network to a receiver; and
at the receiver, decoding the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
26. The method of claim 25, wherein each pixel entry further includes at least one of luminance, chrominance, and color depth information.
27. The method of claim 26, further including scaling down each pixel parameter value prior to storing each pixel entry into the frame group table.
28. The method of any one of claims 25 to 27, wherein the set of segment reference pixels includes a representative red pixel, green pixel, blue pixel, and black pixel.
29. The method of any one of claims 25 to 28, including, prior to encoding the frame of audio/video data, creating a frame group file to store a header, the frame group table, and each set of segment reference pixels.
30. The method of claim 29, including, after creating the frame group table, writing a pointer to a next frame group to the frame group file.
31. The method of claim 29 or claim 30, including storing audio data within the frame group file.
32. The method of any one of claims 25 to 31 , wherein determining the set of segment reference pixels includes comparing, on a pixel by pixel basis for each segment, a current pixel color value with a previously stored dominant pixel color value and storing the plurality of pixel color components and pixel parameters of the pixel with the most intense dominant pixel color component.
33. The method of any one of claims 25 to 32, wherein the plurality of pixel color components includes at least one of the sets of primary color components, red, green, and blue, or cyan, magenta, and yellow.
34. The method of claim 33, wherein the dominant pixel color components include red, green, blue, and black.
35. The method of any one of claims 25 to 34, wherein an encoded segment is a row of pixels within the frame of audio/video data.
36. The method of any one of claims 25 to 35, wherein creating a frame group table further includes scaling down the dominant pixel color component.
37. The method of any one of claims 25 to 36, wherein the set of the segment reference pixel values further includes a dominant color pixel value, non-dominant pixel color values, and luminance and chrominance values.
38. The method of any one of claims 25 to 37, wherein redundant encoded pixel values of the frame group table share common table entries.
39. The method of claim 38, wherein redundant encoded pixel values share identical dominant pixel color components and identical pixel parameter values.
40. The method of claim 38, wherein redundant encoded pixel values share dominant pixel color components and pixel parameter values that are similar to one another within a tolerance range.
41. The method of any one of claims 38 to 40, wherein each one of the common table entries is decoded by recalling previously decoded pixel parameter values associated with each common table entry.
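Claims 38 to 41 allow redundant encoded pixel values, identical or merely similar within a tolerance, to share a common table entry that is decoded once and then recalled. One way this could look in Python; the tolerance rule and cache structure are illustrative assumptions:

```python
# Illustrative sketch only; the tolerance test is an assumed policy.
def deduplicate_entries(entries, tolerance=1):
    """Map each (channel, value) entry to a shared table slot when a
    previously seen entry matches within the tolerance."""
    slots, indices = [], []
    for channel, value in entries:
        for i, (c, v) in enumerate(slots):
            if c == channel and abs(v - value) <= tolerance:
                indices.append(i)                # reuse the common table entry
                break
        else:
            slots.append((channel, value))       # first occurrence becomes the slot
            indices.append(len(slots) - 1)
    return slots, indices

def decode_with_recall(slots, indices, decode_entry):
    """Decode each unique slot once, then recall the cached result."""
    cache = {}
    out = []
    for i in indices:
        if i not in cache:
            cache[i] = decode_entry(*slots[i])   # decode first occurrence
        out.append(cache[i])                     # recall for redundant entries
    return out
```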
42. The method of any one of claims 25 to 41, wherein scaling the set of segment reference pixel values includes scaling the segment reference pixel's dominant color pixel value, and luminance and chrominance values.
43. The method of any one of claims 25 to 42, wherein the table of encoded pixel parameter values further includes non-dominant pixel color components.
44. The method of claim 43, wherein the set of segment reference pixels includes full-scale pixel parameter values.
45. The method of claim 44, wherein scaling the set of segment reference pixel values further includes scaling each of the full-scale pixel parameter values with each corresponding encoded pixel parameter value.
46. The method of any one of claims 25 to 45, further including synchronizing audio data associated with the table of encoded pixel parameter values.
47. The method of any one of claims 25 to 46, wherein after scaling the set of segment reference pixel values according to each entry in the table of encoded pixel parameter values, communicating the decoded pixels to a playback device.
48. The method of claim 47, further including communicating and synchronizing audio data to the playback device.
49. The method of any one of claims 25 to 48, wherein the communications link is a low-bandwidth communications link.
50. A system, including: an encoder to encode, segment by segment, frames of audio/video data, including a number of pixels each having a plurality of pixel color components by creating a frame group table of encoded pixel values in which each pixel entry includes a dominant pixel color component of the plurality of pixel color components and to determine a set of segment reference pixels for each encoded segment, wherein each one of the segment reference pixels includes segment reference pixel parameter values and is a pixel within each one of the encoded segments having a most intense dominant pixel color value;
a server to communicate the frame group table and the segment reference pixels over a network to a receiver; and
a decoder coupled to the receiver to decode the frame group table on a pixel-by-pixel basis by scaling the segment reference pixel parameter values according to each entry in the frame group table of encoded pixel parameter values to produce decoded pixels.
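Putting the sketches together, a hypothetical round trip through the encoder/server/decoder pipeline of claim 50, reusing the illustrative encode_frame and decode_frame functions above on an arbitrary 2x2 test frame:

```python
# Hypothetical round trip using the illustrative sketches above.
frame = [[(200, 10, 10), (10, 180, 10)],      # red- and green-dominant pixels
         [(10, 10, 220), (5, 5, 5)]]          # blue-dominant and near-black

table, references = encode_frame(frame)       # encoder side
# ... table and reference pixels would travel over the network here ...
restored = decode_frame(table, references)    # receiver/decoder side
print(restored)                               # coarse approximation of frame
```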
51. A method, including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
52. The method of claim 51, wherein each set of the segment reference pixels corresponds to an encoded segment of a frame.
53. The method of claim 52, wherein the set of the segment reference pixel values includes a unique set of color pixels for each encoded segment, wherein each segment reference pixel represents a pixel with a most intense dominant pixel color component for each encoded segment.
54. The method of claim 53, wherein the set of segment reference pixels includes a representative red pixel, green pixel, blue pixel, and black pixel.
55. The method of any one of claims 51 to 54, wherein the table of encoded pixel parameter values further includes at least one of luminance, chrominance, and color depth.
56. The method of any one of claims 51 to 55, wherein the set of the segment reference pixel values further includes a dominant color pixel value, non-dominant pixel color values, and luminance and chrominance values.
57. The method of claim 56, wherein scaling the set of segment reference pixel values includes scaling the segment reference pixel's dominant color pixel value, non-dominant pixel color values, and luminance and chrominance values.
58. The method of any one of claims 51 to 57, wherein the table of encoded pixel values further includes redundant entries, wherein each one of the redundant entries is decoded by recalling previously decoded pixel parameter values associated with each one of the redundant entries.
59. The method of any one of claims 51 to 58, wherein the table of encoded pixel parameter values further includes non-dominant pixel color components.
60. The method of claim 59, wherein the set of segment reference pixels includes full-scale pixel parameter values.
61. The method of claim 60, wherein scaling the set of segment reference pixel values further includes scaling each of the full-scale pixel parameter values with each corresponding encoded pixel parameter value.
62. The method of any one of claims 51 to 61, further including synchronizing audio data associated with the decoded table of encoded pixel parameter values.
63. The method of any one of claims 51 to 62, wherein prior to decoding the audio/video data, receiving a file including the table of encoded pixel parameter values and the set of segment reference pixel values.
64. The method of claim 63, wherein the file further includes a header including video frame information and audio information.
65. The method of claim 64, further including processing the file by using the header to determine data locations within the file, including the beginning and end of the table of encoded pixel parameter values and the corresponding segment reference pixel values.
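Claims 63 to 65 have the decoder use the file header to locate the table and the reference pixel data. A minimal Python reader matching the assumed layout of the writer sketch earlier; the format remains an illustrative assumption:

```python
import struct

# Illustrative reader for the assumed layout in the writer sketch above.
def read_frame_group(fh):
    """Use the header to find the table, reference pixels, and audio,
    and return them with the pointer to the next frame group."""
    header_fmt = '<4sIIIQ'
    header = fh.read(struct.calcsize(header_fmt))
    magic, table_len, refs_len, audio_len, next_ptr = struct.unpack(header_fmt, header)
    if magic != b'FGRP':
        raise ValueError('not a frame group header')
    table_bytes = fh.read(table_len)          # beginning/end known from header
    reference_bytes = fh.read(refs_len)
    audio_bytes = fh.read(audio_len)
    return table_bytes, reference_bytes, audio_bytes, next_ptr
```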
66. The method of any one of claims 51 to 65, wherein after scaling the set of segment reference pixel values according to each entry in the table of encoded pixel parameter values, communicating the decoded pixels to a playback device.
67. The method of claim 66, further including communicating and synchronizing audio data to the playback device.
68. The method of claim 66 or claim 67, wherein prior to communicating decoded pixel parameter values to the playback device, converting the decoded pixel parameter values to another display format.
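The conversion to another display format in claim 68 might, for example, target YCbCr. A sketch using the standard full-range BT.601 formula; the choice of target format is an assumption, since the claim does not name one:

```python
# BT.601 full-range RGB -> YCbCr; the target format is an assumed example.
def rgb_to_ycbcr(r, g, b):
    y  =        0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)
```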
69. A decoder, including: means to generate, pixel by pixel and segment by segment, a frame of audio/video data based upon a table of encoded pixel parameter values, wherein each pixel is represented by a single color entry; and means to scale a representative set of segment reference pixel values according to the table of encoded pixel parameter values, wherein a segment is a fractional portion of the frame of audio/video data.
70. A machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to execute a method, the method including: decoding, on a pixel-by-pixel basis, audio/video data using a table of encoded pixel parameter values, wherein each pixel is represented by an entry including a dominant pixel color component; and scaling a set of segment reference pixels including segment reference pixel values according to each entry in the table of encoded pixel parameter values to produce decoded pixels including decoded pixel parameter values.
PCT/AU2005/000128 2004-02-02 2005-02-02 System and method for encoding/decoding live audio/video information WO2005074143A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US10/770,952 2004-02-02
US10/770,952 US20050196049A1 (en) 2004-02-02 2004-02-02 System and method for encoding live audio/video information
US10/771,096 2004-02-02
US10/771,096 US20050180641A1 (en) 2004-02-02 2004-02-02 System and method for transmitting live audio/video information
US10/771,097 2004-02-02
US10/771,097 US7505045B2 (en) 2004-02-02 2004-02-02 System and method for decoding live audio/video information

Publications (2)

Publication Number Publication Date
WO2005074143A1 true WO2005074143A1 (en) 2005-08-11
WO2005074143A8 WO2005074143A8 (en) 2006-01-05

Family

ID=34831233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2005/000128 WO2005074143A1 (en) 2004-02-02 2005-02-02 System and method for encoding/decoding live audio/video information

Country Status (1)

Country Link
WO (1) WO2005074143A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020012473A1 (en) * 1996-10-01 2002-01-31 Tetsujiro Kondo Encoder, decoder, recording medium, encoding method, and decoding method
US6108383A (en) * 1997-07-15 2000-08-22 On2.Com, Inc. Method and apparatus for compression and decompression of video images
JP2001218226A (en) * 2000-01-31 2001-08-10 Snk Corp Image data compressor, image data compression method and computer-readable recording medium recording program to compress received image data
WO2001057804A2 (en) * 2000-02-07 2001-08-09 Sun Microsystems, Inc. Method and apparatus for compression and decompression of digital images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120136902A1 (en) * 2010-11-30 2012-05-31 International Business Machines Corporation Multimedia size reduction for database optimization
US8385414B2 (en) * 2010-11-30 2013-02-26 International Business Machines Corporation Multimedia size reduction for database optimization
CN120512544A (en) * 2025-07-21 2025-08-19 杭州肉肯肉比比科技有限公司 A method and system for intelligent perception and association of multimedia content in live sports events

Also Published As

Publication number Publication date
WO2005074143A8 (en) 2006-01-05

Similar Documents

Publication Publication Date Title
US20050180641A1 (en) System and method for transmitting live audio/video information
US9904503B1 (en) Method and system for application broadcast
US7864849B2 (en) Method for scalably encoding and decoding video signal
US7372904B2 (en) Video processing system using variable weights and variable transmission priorities
JP2003512787A (en) System and method for encoding and decoding a residual signal for fine-grained scalable video
EP1040668A2 (en) Scalable video coding system
US6968012B1 (en) Methods for encoding digital video for decoding on low performance devices
US20050169544A1 (en) System and method for encoding and decoding video
US6975767B1 (en) System and method for encoding and decoding video
US7010033B2 (en) System and method for compressing and encoding video
US7505045B2 (en) System and method for decoding live audio/video information
US20050196049A1 (en) System and method for encoding live audio/video information
US5917964A (en) Method and apparatus for pre-processing image data of encoders
SG188221A1 (en) Video signal processing
WO2005074143A1 (en) System and method for encoding/decoding live audio/video information
US20050207657A1 (en) System and method for encoding and decoding video
US7483576B2 (en) System and method for decoding video
WO2005073920A1 (en) Systems and methods for encoding and decoding video information
WO2017046408A1 (en) Determination of a co-located luminance sample of a color component sample, for hdr coding/decoding
WO2005074144A1 (en) System and method for compressing and encoding/decoding video
US20060133497A1 (en) Method and apparatus for encoding/decoding video signal using motion vectors of pictures at different temporal decomposition level
WO2005114847A1 (en) System and method for encoding and decoding video
EP0647910A1 (en) Method and system for efficiently transmitting multimedia data in a distributed data processing system
Moreton et al. HDCC: a software-based compression algorithm for video conferencing
Ahmad Mobile Video Transcoding Approaches and Challenges

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WR Later publication of a revised version of an international search report
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase