WO2001069911A2 - Interactive multimedia transmission system - Google Patents
- Publication number
- WO2001069911A2 WO2001069911A2 PCT/US2001/007320 US0107320W WO0169911A2 WO 2001069911 A2 WO2001069911 A2 WO 2001069911A2 US 0107320 W US0107320 W US 0107320W WO 0169911 A2 WO0169911 A2 WO 0169911A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- content
- streams
- view
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440245—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/162—User input
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2365—Multiplexing of several video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4852—End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
- H04N5/602—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/162—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
- H04N7/163—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing by receiver means only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
Definitions
- the present invention relates generally to interactive multimedia transmission systems and, more particularly, to apparatus and methods for allowing a user to interact with transmitted digital content.
- One such advanced interactive multimedia transmission system is available from Relative Motion Technologies, 791 Tremont St., Boston, MA.
- Traditionally, a viewer watching a recorded visual event does so on apparatus having a display screen on the order of four units wide by three units high. These proportions dictate an aspect ratio of approximately 1.3 and are common to traditional motion picture and television screens.
- the physics involved in early motion picture and television production led to the empirical adoption of screens having approximately this aspect ratio.
- the aspect ratio of most visual presentation screens has dictated the television and motion picture formats we know today.
- a director selects views from within his span of vision for presentation to the audience.
- One example, shown in FIG. 1, is a football game.
- the field of play is shown as a relatively elongated rectangle, 1, having a number of players thereon, generally 5.
- the quarterback, 3, is preparing to throw a pass to a receiver, here 7.
- the television director has chosen the view of the receiver waiting to receive the ball as the most important view, and it is that view which is transmitted to the viewing public. This view is shown at 9.
- many high-resolution interactive video systems use a relatively large amount of computer processing power. Such systems can require processing power to encode and compress video content prior to transmission. Similarly, such systems can require processing power to receive, decode, decompress, and render video content after transmission.
- the receiving end of a video transmission system is often a device with limited processing power such as a personal computer or a set top box.
- What is needed is an interactive multimedia transmission system that enables a viewer to select a view from within a transmitted or recorded video or audiovisual image. What is also needed is an interactive video system that enables a viewer to perform at least one of scroll, pan, tilt and zoom to enable her to select and focus on the specific aspect of the video presentation in which she is most interested.
- the present invention teaches an interactive multimedia transmission system that enables, for the first time, a viewer to select a view from within a transmitted or recorded video, multimedia, or audiovisual image. While the principles enumerated herein teach a number of specific embodiments, each of them is capable of at least one of scrolling, panning, tilting and zooming to enable a viewer to select and focus on that specific aspect of the video presentation in which the viewer is most interested.
- the interactive multimedia transmission system taught herein is capable of fast transmission times and inexpensive operation, while still allowing a user to meaningfully interact with presented content.
- the interactive multimedia transmission system taught herein uses less bandwidth than prior systems that transmit at least one 180 degree field of view (FOV).
- the interactive multimedia transmission system taught herein balances between the desirability of high-resolution, interactive content and the associated requirements of a relatively large amount of bandwidth and/or of a relatively large amount of computer processing power.
- the interactive multimedia transmission system taught herein also incorporates sound tracks intelligently, compensating sound volumes from varying parts of the scene according to the viewer's selected view.
- the interactive multimedia transmission system taught herein is usable in a broad range of media types, including but specifically not limited to live broadcast video, recorded broadcast video, and video or multimedia productions recorded on media. These media include, but are again not limited to videocassettes, compact disks, CD-ROMs, DVD discs, LaserDisksTM, and sundry other recording media and memory retention devices, both permanent and erasable.
- a method for allowing a user to select a view from within a transmitted video image wherein the interactive multimedia system includes at least one subscriber system.
- the method includes the steps of providing encoded video content, decoding individually compressed video streams, receiving view selection instructions from a user, selectively decompressing individually compressed video streams, and rendering seamless viewable video.
- the method provides encoded video content to the subscriber system.
- the video content includes a sequence of images. Prior to transmission, the images are parsed into video streams representing sections of the images. Each of the video streams is individually compressed and formatted.
- the subscriber system decodes the individually compressed video streams and receives view selection instructions from a user. Responsive to instructions received from the viewer, the subscriber system selectively decompresses individually compressed video streams that represent image sections within the user-selected view. The system then renders seamless viewable video by rendering and merging the resulting image sections.
- the terms "transmission" and "receipt" as used herein specifically include the broad general fields of broadcast images and recorded images.
- the video content includes images with normal aspect ratios.
- normal aspect ratio video is provided with high resolution images
- the principles of the present invention enable the interactive multimedia experience previously discussed including, but not limited to, zooming, panning, and tilting.
- the video content includes images with a high aspect ratio.
- an aspect ratio of an image is the ratio of the image's width to the image's height.
- the aspect ratio is preferably greater than 1.7 and most preferably between 3 and 4.
- the system taught by the present invention includes at least one transmission assembly for use in preparing the interactive video, and at least one subscription assembly for receiving the interactive video signal.
- One embodiment of a subscription assembly includes a receiving device, a decoding device, and a rendering device.
- One version of a transmission assembly for the system includes a content capturing assembly and an encoding device.
- the content capturing assembly captures video content.
- the video content includes a sequence of images.
- the content capturing assembly can include a camera in operative combination with substantially any lens system known in the art, including but specifically not limited to: spherical lenses, anamorphic lenses, wide-angle lenses, and other lenses known to those having ordinary skill in the art. While according to one embodiment the camera implemented is a high-resolution DTV camera, substantially any camera capable of capturing a video image may be utilized. Alternative cameras include, but are specifically not limited to, cameras having a high aspect ratio sensor, and a 1920x1080i interlaced HDTV camera.
- the camera records video content and passes the recorded content in a digital format to the encoding device.
- the encoding device parses the images into video streams or bands representing sections of the images and compresses and formats the video streams for transmission.
- the receiving device receives encoded video content.
- the video content includes a sequence of images.
- the receiving device receives the encoded video content that has previously been parsed into video streams representing sections of the images. Furthermore, each of the video streams has been individually compressed and formatted. These video streams may, again, be either broadcast or recorded.
- the decoding device is coupled to the receiving device for extracting the individually compressed video streams.
- the rendering device is coupled to the decoding device for selectively decompressing the individually compressed video streams and for merging the resulting image sections into seamless viewable video.
- the rendering device includes a user input device for receiving user view selection instructions such that the rendering device selectively decompresses individually compressed video streams to construct images in accordance with the user's view selection instructions.
- the interactive video system further includes a first microphone for capturing first audio content, and a second microphone for capturing second audio content.
- the microphones can be directional microphones.
- a system according to the present invention could include more than two microphones for capturing audio content.
- the encoding device encodes the audio content into encoded audio streams and then interleaves the encoded audio streams with the encoded video streams.
- the rendering device then alters the audio content associated with a video stream based on user view selection instructions.
- One embodiment of a method according to the invention includes the steps of capturing video content, parsing the content into streams, and encoding the streams.
- the video content includes a sequence of images.
- the parsing step parses the images into video streams or bands representing sections ofthe images.
- the encoding step encodes the video streams for transmission.
- Another embodiment of a method conducted in accordance with the teachings of the present invention includes the steps of capturing audio content, and encoding the audio content.
- the capturing step captures first audio content using a first microphone, and second audio content using a second microphone.
- the microphones are directional microphones.
- the method can include capturing audio content using more than two microphones.
- the encoding step encodes the first audio content and the second audio content into encoded audio streams, and interleaves the encoded audio streams with the encoded video streams. Subsequently, the system can adjust the mix ofthe first audio content and the second audio content based on the user view selection instructions.
- Animation scripts are the instructions for creating, or rendering, animation images. Being smaller than the actual animation image sequence, animation scripts generally require much less storage and bandwidth than the animation image sequences, which are essentially video content.
- the generated multimedia stream may further include message banners, which may be placed, for instance, at the periphery of the visual content.
- the messages can include, for instance, advertisements.
- messages or advertisements could be directly inserted into the visual content.
- the director's viewing parameters, or "director's cut" are the default view presented to the viewer.
- the viewing parameters include the pan, tilt and zoom of the director's preferred view.
- the director's view may also include the currently selected visual stream.
- FIG. 1 is an example of the "view-within-video" property enabled by the present invention;
- FIG. 2 is a block diagram of one embodiment of an interactive video system according to the invention.
- FIG. 3 is a flow diagram of the operation of an interactive video system according to the invention;
- FIG. 4 is a block diagram of the transmission system of FIG. 2;
- FIG. 5 shows one embodiment of the encoding device of FIG. 2;
- FIG. 6 is a flow chart for one embodiment of the compressor of the encoding device of FIG. 3;
- FIG. 7 is a schematic of an MPEG specific content stream transmitted by the interactive video system of FIG. 2;
- FIG. 8 is a schematic picture of uniform video banding performed with the system of FIG. 2;
- FIG. 9 is a schematic picture of non-uniform video banding performed with the system of FIG. 2;
- FIG. 10 is an illustration of the audio mixing performed by the system of FIG. 2;
- FIG. 11 illustrates the determination of the viewing direction parameter;
- FIG. 12 is a block diagram of a subscription system of FIG. 2;
- FIG. 13 is a block diagram of the decoding and rendering assembly of the system of FIG. 2;
- FIG. 14 illustrates the operation of the video decompressor portion of the decoding and rendering assembly of FIG. 13;
- FIG. 15 shows one of the displays of FIG. 2 including an orientation inset screen;
- FIG. 16 is a schematic illustration of user-controlled panning achieved using the system of FIG. 2;
- FIG. 17 is a schematic illustration of user-controlled panning and tilting achieved using the system of FIG. 2;
- FIG. 18 is a schematic illustration of zooming achieved using the system of FIG. 2;
- FIGS. 19a and 19b are a schematic illustration of peripheral advertising formed using the system of FIG. 2.
- a substantially high aspect ratio image, 1, has been transmitted to a viewer, not shown.
- the high aspect ratio image, 1, is but one embodiment of the present invention.
- the principles enumerated herein may, with equal facility, be implemented on substantially any conceivable aspect ratio image, including but not limited to the previously discussed "4x3" low-aspect ratio image, presuming that the resolution of that image enables the features and advantages taught herein.
- the viewer has selected a portion of the video image, 9, for viewing.
- an interactive video system 20 according to the present invention includes at least one transmission system, 140, and at least one subscription system, 30.
- a plurality of subscription systems, 30a and 30b, is shown.
- the principles of the present invention are specifically applicable to broadcast or distribution to a far more extensive plurality of subscription systems, and the present invention specifically contemplates such extensive broadcast or distribution.
- a user can select a view within presented content by using a user input device 38a to send instructions to the decoding and rendering assembly 34a. Selecting a view can include panning, tilting and zooming, and most preferably includes at least horizontal panning.
- Visual content is defined herein as video content or scripts, which specify a digital animation sequence.
- Video content is a sequence of images, obtained, for example, by optically recording a physical event such as a football game or a soap opera.
- the sequence of images can be generated content, including the rendered output of animation scripts.
- a director at the transmission system can specify a default view within the video.
- Decoding and rendering assembly 34 may be implemented as a computer, a set top box, a game console, or other means well known to those having ordinary skill in the art for controlling an image on a video display unit.
- One set top box is available from Scientific Atlanta.
- an example of a game console is a Sony Playstation 2TM. It will be appreciated that these are exemplar devices, and other known devices capable of decoding and rendering in accordance with the principles enumerated herein may, with equal facility, be implemented.
- User input device 38 may be implemented as a mouse, keyboard, joy stick, game controller, remote control, video camera with computer vision, or other input devices well known to those having ordinary skill in the art.
- display device 36 may be implemented as a computer monitor, television screen, HDTV television set, projection display, head-mounted display (HMD), or other visual display device known to those having ordinary skill in the art.
- Transmission system, 140 obtains content, 110, utilizing content capturing assembly 22.
- Content capturing assembly may include one or more cameras or microphones, not shown in this view.
- the audio-visual signal of content, 110, is treated as follows: The audio portion of the signal is split out at 112 and encoded at 114 before being transmitted at 116.
- the video signal, at 118 is parsed into sections at 120 and encoded at 122.
- the encoded video signal is then transmitted, along with the encoded audio signal from step 114, at step 116.
- the encoded audio signal 114, and video signal 122 are transmitted to subscription assembly, 30.
- the principles of the present invention specifically contemplate the implementation thereof on a wide variety of audio, video, and audio-video methodologies. These methods include, but are specifically not limited to, live broadcast, pre-recorded broadcast, netcast, and recorded media.
- the term "recorded media” in turn specifically contemplates substantially any known methodology for recording audio, video, or audio-visual signals, including but again specifically not limited to magnetic tape, video cassettes, laser disks, compact disks, LaserDiskTM, DVD, as well as other magnetic, optical, and electronic storage methodologies well known to those having ordinary skill in the art.
- the term "netcast" refers to any of several known technologies for transmitting audio, video or audio-visual signals over a network, including the Internet. By way of illustration, but not limitation, one such netcast technology implements a digital broadcast embedded into an analog NTSC television signal and is available as IntercastTM from IntelTM Corporation. Alternative netcast technology is available from Webcasts.comTM.
- Once the audio-visual signal or media stream is transmitted at 116, it is received at 124 by a subscription system, 30.
- a determination is made whether to view the "director's cut", or default view within the transmitted video image, or whether the user will provide viewing directions. If, at 128, a decision is made to use the director's cut, the subscription system, 30, decodes the default sections of the transmitted audiovisual signal at 134, and the default content is displayed at 136.
- the desired viewing sections are decoded.
- the encoded audio signals are mixed in accordance with the viewing instructions provided by the user.
- the decoded audio and video signals are then merged and transmitted to the display unit, and the decoded content displayed, again at step 136.
- Viewing directions are commands translated from input device 38 in a manner well known to those having ordinary skill in the art.
- transmission assembly 140 of system 20 includes a content capturing assembly 22 including a lens 28 and one or more microphones 29.
- content capturing assembly, or camera, 22 utilizing lens 28 captures video content having a field of view.
- the illustrated transmission system can capture or provide digital video content that is larger than a user's display.
- the video content includes images with high aspect ratios.
- the aspect ratio of an image is the ratio of the image's width to the image's height.
- Standard television has an aspect ratio of approximately 1.3.
- High-definition television (HDTV) has an aspect ratio of approximately 1.7.
- One version of a system according to the present invention captures and/or provides images with aspect ratios of preferably greater than 1.7 and most preferably between 3 and 4.
- the content capturing assembly includes a lens 28 coupled to a high-resolution video camera 22.
- a system according to the invention preferably provides high-resolution video with a vertical resolution of 480 pixels or greater and an aspect ratio of greater than 1.7 and most preferably between 3 and 4.
- the maximum aspect ratio corresponds to a complete 360 degree panorama.
- a preferred system displays each view with 640x480 (VGA) resolution.
- An aspect ratio of 4 with 480 vertical pixels implies 1920 horizontal pixels.
- the present invention contemplates several embodiments for achieving high aspect ratio content.
- in one embodiment, the image sensor (e.g., a CCD or a CMOS imager) itself has a high aspect ratio, in which case a special lens is not necessary.
- the system can use a wide-angle lens to provide a wide angle field-of-view.
- the camera is preferably a progressive scan camera because interlacing can introduce artifacts in the presented video content.
- the video content includes digitized film recordings.
- the digitized film recordings can be cropped at the top and bottom to create high aspect ratio video content.
- the resolution of film is presently higher than the resolution of digital video content provided by current image sensors. Therefore, digitized film recordings can have a vertical resolution of 480 pixels or more, after cropping.
- the system uses a 1920x1080i interlaced DTV camera. Either the camera or the content can be modified to provide high aspect ratio content.
- the system can generate 1920x540 interlaced video by cropping a portion ofthe video.
- Interlaced video is composed of two fields. Each field contains every other horizontal line of the image, with odd-numbered lines in one field and even-numbered lines in the other field.
- the video content consists of alternating even and odd fields at 60 fields per second, or 30 frames, consisting of both fields, per second.
- Interlacing has advantages. First, interlacing allows the display content to be updated more frequently. Second, conventional televisions display NTSC video, which is interlaced. However, interlaced video can have noticeable artifacts, such as "jaggies", and computer monitors preferably use progressive scan video. It is contemplated that the principles enumerated herein may be implemented on either interlaced or progressive scan video.
- the system can store only one field from the 1920x1080i interlaced camera to produce a 1920x540 image.
- This system provides 30 frames per second of progressive scan video. Interlacing can be undesirable for the present system because it can complicate the compression and rendering process.
- Keeping only one field at 30 Hz halves the vertical resolution to create the effect of doubling the horizontal resolution. In other words, the camera decreases the total image size by downsampling the vertical dimension. Thus, the camera halves the height of recorded images.
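- The single-field approach just described can be sketched as follows. This is a minimal illustration only, assuming frames held as NumPy arrays; it is not taken from the patent.

```python
import numpy as np

def keep_single_field(interlaced_frame: np.ndarray, field: int = 0) -> np.ndarray:
    """Keep every other scan line of an interlaced frame.

    interlaced_frame: H x W (x C) array, e.g. 1080 lines by 1920 pixels
                      from a 1920x1080i camera.
    field: 0 keeps the even-numbered lines, 1 keeps the odd-numbered lines.
    Returns an image of half the height, e.g. 540 lines by 1920 pixels,
    at the original frame rate.
    """
    return interlaced_frame[field::2]

# A 1080-line interlaced frame becomes a 540-line progressive image.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
assert keep_single_field(frame).shape == (540, 1920, 3)
```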
- the system compensates for the reduction in image height by using an anamorphic lens 28 that focuses a vertically stretched image on the camera sensor.
- the image is vertically stretched to compensate for the above-described camera qualities.
- An anamorphic lens produces different optical magnification along mutually perpendicular axes.
- the lens magnifies the vertical dimension of the image by two times, while leaving the horizontal dimension unaltered.
- An anamorphic lens provides advantages over a fish-eye lens.
- An anamorphic lens can induce less optical distortion on the captured images than a fish-eye lens.
- an anamorphic lens can induce a slight vertical blur to avoid aliasing when vertically downsampling.
- a system according to the invention can use an anti-aliasing filter as an alternative to an anamorphic lens.
- the anti-aliasing filter produces a preselected blur along the vertical axis.
- a system according to the invention can extend the horizontal field-of-view (FOV) to a 360-degree panorama with limited vertical resolution, e.g., 480 pixels.
- the system can achieve a high aspect ratio image, including a complete panorama, by horizontally merging more than one camera output.
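- One way to form such a merged panorama, sketched under the assumption that the individual camera outputs are same-height NumPy images already aligned at their seams, is simple horizontal concatenation; a real system would also correct lens distortion and blend the seams.

```python
import numpy as np

def merge_horizontally(camera_frames):
    """Concatenate same-height camera frames side by side into one wide strip."""
    if len({f.shape[0] for f in camera_frames}) != 1:
        raise ValueError("all camera frames must share the same height")
    return np.hstack(camera_frames)

# Three hypothetical 480x640 cameras yield a 480x1920 strip (aspect ratio 4).
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
assert merge_horizontally(frames).shape == (480, 1920, 3)
```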
- the resultant audiovisual signal, or media stream is transmitted to encoder 24.
- the encoded media stream is then forwarded to a transmission assembly 26 and then distributed to one or more subscription systems.
- Transmission assembly 26 transmits the media stream in a format appropriate to the receiving subscription system.
- transmission assembly 26 can include a television broadcast transmitter, videocassette recorder, CD recorder, and substantially any other broadcast, inter-cast or media recorder known to those having ordinary skill in the art.
- Transmission system 140 may further include one or more sources, 27, of generated content.
- generated content includes but is specifically not limited to the director's viewing parameters, animated figures, designs, messages, advertisements, and other artificially created or previously recorded audio or visual content as an alternative or supplementary content source.
- Encoder 27a may be substantially similar to encoder 24.
- the video and audio signals from content capturing assembly 22 are transmitted to encoder 24.
- the video signal is first transmitted to a splitter 150.
- Splitter 150 splits the incoming video stream into a plurality of video streams.
- the plurality of video streams corresponds to the number of encoded vertically split video bands, as will be hereinafter discussed.
- Each of the split video streams is then fed into a compressor, 152.
- this is a compressor capable of motion compensated video compression, such as MPEG2. For purposes of illustrational succinctness, only one compressor 152 is shown in this view.
- Splitters capable of splitting an incoming video stream into a plurality of video streams include, but are not limited to, a MicrosoftTM Direct ShowTM filter.
- packager 154 may be implemented as a MicrosoftTM Direct ShowTM filter.
- Text and director's cut information may be injected into the packaged media stream; this is accomplished by injecting text from a text source 156 directly into packager 154.
- the director's viewing parameters, messages and advertising banners and animation scripts may also be injected directly into packager 154 from 158, 160 and 162 respectively.
- Messages, advertising banners and the like can be joined with video content, as shown in FIG. 19a, to provide substantially "circular" content.
- the results of such circular content are shown at Fig. 19b.
- Suitable compressors include, but are specifically not limited to, computers, including Silicon GraphicsTM or IntelTM computers implementing MPEG2 or IntelTM IndeoTM motion compensated video compression schemes. Alternative compressor schemes or methodologies well known to those having ordinary skill in the art may, with equal facility, be implemented.
- One of the split video streams from splitter 150 is received at converter 42 of compressor 152.
- Having reference to FIG. 6, a known compressor methodology suitable for implementation as the compressor of the present invention is shown.
- the compressor methodology is MPEG2.
- This embodiment is a motion- compensated system.
- Video received from the content capturing assembly 22 passes concurrently to the motion estimator 56 and to the line scan to block scan converter 42.
- the motion estimator 56 compares an incoming frame with a previous frame stored in the frame store 62 in order to measure motion, and in order to send the resulting motion vectors to the prediction encoder 60.
- Motion estimation is performed by searching for the best block-based correlation in a local search window that minimizes a difference metric.
- Two common methods are the minimum squared error (MSE) method and the minimum absolute difference (MAD) method.
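- The block-based search described above can be sketched as an exhaustive (full-search) matcher using the MAD metric. The block size, search-window half-width, and full-search strategy below are illustrative assumptions, not details taken from the patent or from any particular MPEG2 encoder.

```python
import numpy as np

def best_motion_vector(cur_block, prev_frame, top, left, search=8):
    """Find the (dy, dx) displacement in a local search window that minimizes
    the mean absolute difference (MAD) between the current block and the
    previous (frame-store) frame."""
    n = cur_block.shape[0]
    best_mad, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > prev_frame.shape[0] or x + n > prev_frame.shape[1]:
                continue  # candidate block falls outside the previous frame
            mad = np.mean(np.abs(cur_block.astype(int) -
                                 prev_frame[y:y + n, x:x + n].astype(int)))
            if mad < best_mad:
                best_mad, best_mv = mad, (dy, dx)
    return best_mv
```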
- the motion estimator also shifts objects held in the frame store output to estimated positions in a new frame, a predicted frame.
- the predicted frame is subtracted 44 from the input frame to obtain the frame difference or prediction error.
- the frame difference is then processed with a combination of DCT and quantization.
- the local decoder adds the locally decoded prediction error to the predicted frame to produce the original frame (plus quantizing noise).
- the resulting frame updates the frame store 62.
- frame difference information and motion vectors are encoded and forwarded to transmit buffer 54.
- Compression systems often control the rate at which they transmit information.
- the system can use a transmit buffer 54 and a rate controller 50 to provide content information at a controlled rate.
- the rate controller 50 monitors the amount of information in transmit buffer 54 and changes the quantization scale factor appropriately to maintain the amount of information in the transmit buffer between pre-selected limits.
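- A minimal sketch of that feedback loop follows; the buffer-occupancy limits, step size, and quantizer range are assumed values chosen for illustration, not figures from the patent.

```python
def adjust_quantizer(q_scale, buffer_fill, low=0.25, high=0.75, step=1,
                     q_min=1, q_max=31):
    """Nudge the quantization scale factor so the transmit buffer stays
    between pre-selected fill limits.

    buffer_fill: current buffer occupancy as a fraction of its capacity.
    A fuller buffer means too many bits are being produced, so coarser
    quantization (a larger scale factor) is requested, and vice versa.
    """
    if buffer_fill > high:
        return min(q_max, q_scale + step)   # coarser quantization, fewer bits
    if buffer_fill < low:
        return max(q_min, q_scale - step)   # finer quantization, more bits
    return q_scale
```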
- the content information is then transmitted from the transmit buffer to the receiver 32 of FIG. 1.
- the receiver forwards the received content information to the decoding and rendering assembly 34.
- the encoding device includes a personal computer running a streaming media processing application such as Microsoft's DirectShowTM.
- the encoding device can include a personal computer running one of a variety of media processing applications known to those skilled in the art.
- the system according to the invention can use different compression/decompression schemes, including MPEG, MPEG2, and Intel IndeoTM compression/decompression schemes.
- FIG. 7 illustrates an MPEG specific content stream that can be transmitted by the system.
- the sequence layer 90 contains among other information a variety of video sequences.
- a video sequence can contain a number of groups of pictures (GOPs), as illustrated in the GOP layer 92.
- a GOP can contain Intra (I), Predicted (P), and Bi-directional Interpolated (B) frames, as illustrated in the frame layer 94.
- I frame information can contain video data for an entire frame of video.
- An I frame is typically placed every 10 to 15 frames.
- a frame information stream contains a number of macroblock (MB) information streams, as shown in section layer 96.
- a macroblock information stream in turn can contain MB attribute, motion vector, luminance and color information, as shown in macroblock layer 98.
- the luminance information, for example, can contain DCT coefficient information 100.
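- The nesting of these layers can be pictured with the following sketch of nested data types; the field names are illustrative stand-ins, not MPEG2 bitstream syntax.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Macroblock:                     # macroblock layer 98
    attributes: int
    motion_vector: Tuple[int, int]
    luminance_dct: List[int]          # DCT coefficient information 100
    chrominance_dct: List[int]

@dataclass
class Frame:                          # frame layer 94: I, P, or B picture
    frame_type: str                   # "I", "P", or "B"
    macroblocks: List[Macroblock] = field(default_factory=list)  # grouped via section layer 96

@dataclass
class GroupOfPictures:                # GOP layer 92, beginning with an I frame
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Sequence:                       # sequence layer 90
    gops: List[GroupOfPictures] = field(default_factory=list)
```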
- one embodiment ofthe system 20 encodes the content, as previously discussed.
- the system can encode the content by parsing the images that make up the content into sections.
- the process of parsing images into sections can be termed video banding.
- a system according to the invention can perform uniform and nonuniform video banding.
- FIG. 8 illustrates uniform video banding.
- A is the horizontal width ofthe fraction of video in one section or band.
- S is the horizontal width of the fraction of the transmitted video on the screen or display, i.e., the viewed video.
- D is the horizontal width of the fraction of the transmitted video that is decoded, i.e., the immediately viewable video.
- a system according to the present invention can perform non-uniform video banding, as shown in FIG. 9. Sectioning with non-uniform width bands can be useful for rectangular video because the center of the video has a higher probability of being seen than the sides.
- one embodiment of a system according to the invention can parse the video content non-uniformly, with the center bands being wider than the edge bands. Furthermore, the system can devote greater compression resources to the center bands.
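- The banding step can be sketched as follows, assuming frames held as NumPy arrays; the particular band widths shown are hypothetical, chosen only to illustrate wider centre bands and narrower edge bands.

```python
import numpy as np

def split_into_bands(image: np.ndarray, band_widths):
    """Parse a wide image into vertical bands (video banding).

    band_widths: horizontal width of each band, in pixels; the widths must
    sum to the image width.  Equal widths give uniform banding; wider centre
    bands and narrower edge bands give non-uniform banding.
    """
    if sum(band_widths) != image.shape[1]:
        raise ValueError("band widths must cover the full image width")
    edges = np.cumsum([0] + list(band_widths))
    return [image[:, edges[i]:edges[i + 1]] for i in range(len(band_widths))]

# Hypothetical 480x1920 frame: non-uniform banding with wider centre bands.
frame = np.zeros((480, 1920, 3), dtype=np.uint8)
bands = split_into_bands(frame, [192, 256, 320, 384, 320, 256, 192])
assert sum(b.shape[1] for b in bands) == 1920
```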
- the content capturing assembly also includes microphones 29a, 29b, 29c for capturing audio content.
- the microphones 29 are directionally dependent, i.e., the microphones are more sensitive in the direction in which they are pointed.
- a system according to the invention can include any number of microphones, including no microphones. However, in a preferred embodiment, the system includes at least two microphones. The system then mixes the audio content based on user view selection instructions. This feature is explained having continued reference to FIG. 2, as well as to FIG. 10. According to the embodiments illustrated in FIGS. 2 and 10, the system mixes the audio obtained by microphones 29a, 29b, and 29c of FIG. 1 based on user view selection instructions.
- the audio signal from microphone 29c is more heavily weighted than the audio signals from microphones 29a and 29b.
- Such audio mixing further increases the user's ability to affect her viewing experience and further increases a user's ability to interact with presented content.
- One embodiment of a system according to the present invention mixes two audio channels.
- the method of audio mixing varies the volume of each channel according to the view selection instructions, while maintaining a minimum volume based on a predetermined ambient constant Ka.
- Ka ranges from 0 to 1/3.
- the first term is a linear interpolation term and the second term is an ambient term.
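- The mixing formula itself is not reproduced in this extract, so the sketch below shows one plausible two-channel weighting consistent with the description: each gain is a linear-interpolation term in the viewing direction d plus an ambient floor based on Ka. The exact gains are an assumption.

```python
def mix_two_channels(left_sample: float, right_sample: float,
                     d: float, ka: float = 0.25) -> float:
    """Weight two audio channels by the viewing direction d (0 = left, 1 = right),
    never letting either channel's gain fall below the ambient constant ka
    (assumed here to lie in [0, 1/3])."""
    gain_left = (1.0 - ka) * (1.0 - d) + ka    # linear-interpolation term + ambient term
    gain_right = (1.0 - ka) * d + ka
    return gain_left * left_sample + gain_right * right_sample
```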
- FIG. 11 illustrates the viewing direction parameter.
- viewing direction can be represented by a parameter d, which varies from 0 (left) to 1 (right).
- the full content width is Wc.
- the constant screen width is Ws (in units of pixels).
- the parameter d varies from 0 to 1 over a range of Wc -Ws in content units.
- the "viewed contents" window can have a size smaller than the screen width Ws.
- the center pixel of the current view will have an X-coordinate, Xt, outside of the range [Ws/2, Wc-Ws/2]. Where this occurs, Xt is clamped to that range.
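- Putting those definitions together, a sketch of the mapping from the view centre to d, including the clamping step, might look as follows; the normalisation over Wc - Ws is inferred from the description above and is illustrative only.

```python
def viewing_direction(xt: float, wc: float, ws: float) -> float:
    """Map the X-coordinate of the current view's centre pixel to d in [0, 1].

    xt: centre of the current view, in content pixels.
    wc: full content width; ws: screen (view) width, both in pixels.
    """
    xt = min(max(xt, ws / 2.0), wc - ws / 2.0)   # clamp Xt to [Ws/2, Wc - Ws/2]
    return (xt - ws / 2.0) / (wc - ws)           # d varies 0..1 over Wc - Ws

assert viewing_direction(960.0, 1920.0, 640.0) == 0.5   # centred view gives d = 0.5
```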
- a system according to the present invention can mix audio from three channels.
- the principles taught herein may mix a larger number of audio channels, by expanding the previously provided equation to include such larger number of channels.
- a subscription system includes a receiving assembly 32, a decoding and rendering device 34, a user input device 38, and a display 36.
- the system 20 includes a plurality of such subscription systems.
- a plurality of subscription systems allows multiple simultaneous users.
- the present invention enables each user to select his or her view independent of all other users. Thus, for example, if the system records a wide-angle view of a theatrical presentation, one user can center her view on a character on the left side of the stage while another user can center his view on a character on the right side of the stage.
- Reference is made to FIG. 2, wherein the illustrated image includes, from left to right, a dog, a turtle, and a rabbit.
- a first user has provided view selection instructions via user input device 38a to decoding and rendering device 34a to center the view between the turtle and rabbit as shown on display 36a.
- a second user has provided view selection instructions via user input device 38b to decoding and rendering device 34b to center the view between the dog and turtle as shown on display 36b.
- the view on display 36b is centered to the left of the view on display 36a.
- One exemplar decoding and rendering assembly 34 is further illustrated having reference to Fig. 13. Having reference to that figure, the incoming streaming media received from receiver 32 is received at unpackager 160. Unpackager 160 separates the incoming video and audio signals from the received streaming media. Responsive to the current view selected by the viewer, selector 162 and mixer 164 provide the proper signals for viewing and hearing. Selector 162 determines which of the incoming video streams corresponding to the selected bands are to be decompressed and eventually viewed. In similar fashion, mixer 164 mixes the incoming audio signals to provide a mixed audio signal, again responsive to the user's selected view, as previously discussed. After the bands appropriate to the selected view have been selected by selector 162, they are decompressed by decompressor 166. The several bands are then aggregated by aggregator 168 into a viewable video stream. The video stream is cropped at cropper 170 in order to remove unviewed video. The resultant viewed video is then transmitted to display 36.
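- The data flow through FIG. 13 can be sketched end to end as follows. The packet layout, the keying of bands by their pixel ranges, and the decompress/mix_audio callables are assumptions made purely for illustration, not the patent's actual interfaces.

```python
import numpy as np

def render_view(packets, view_left, view_width, d, decompress, mix_audio):
    """Sketch of the decode-and-render path: unpackage, select, decompress,
    aggregate, crop, and mix.

    packets["bands"]: dict mapping each band's (start, end) pixel columns to
                      its compressed data (layout assumed for illustration).
    packets["audio"]: the encoded audio streams.
    decompress, mix_audio: stand-ins for decompressor 166 and mixer 164.
    """
    video_bands, audio_streams = packets["bands"], packets["audio"]   # unpackager 160
    view_right = view_left + view_width

    # Selector 162: only bands overlapping the selected view are kept.
    selected = sorted(((start, end, data)
                       for (start, end), data in video_bands.items()
                       if end > view_left and start < view_right),
                      key=lambda band: band[0])
    decoded = [decompress(data) for _, _, data in selected]           # decompressor 166

    strip = np.hstack(decoded)                                        # aggregator 168
    offset = view_left - selected[0][0]
    frame = strip[:, offset:offset + view_width]                      # cropper 170

    audio = mix_audio(audio_streams, d)                               # mixer 164
    return frame, audio
```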
- the decompressor 166 is the logical inverse of encoder 24, previously discussed. This decompressor is detailed having reference to Fig. 14.
- the incoming video stream corresponding to one of the bands selected for viewing by the viewer is received at variable length decoder 72.
- Variable length decoder 72 outputs the resultant signal to a dequantize step 74 and to prediction decoder 82.
- Prediction decoder 82 transmits the discrete cosine transform coefficient to the inverse discrete cosine transform 76.
- Prediction decoder 82 further transmits a signal to motion predictor 84.
- the motion predictor 84 shifts the frame store 86 output by the transmitted motion vectors received from the prediction decoder 82.
- the result is the same predicted frame as was produced in the encoder.
- the system then adds the decoded frame error (received from the decoder 72, dequantizer 74, and the Inverse DCT 76) to the predicted frame to produce the original frame.
- Decompression of a video stream can begin on an I frame.
- An I frame begins a group of pictures (GOP) consisting of about 12-15 frames. Given the content streams at 30 frames per second, decompression can begin only once every 1/3 to 1/2 second.
- the bands must be large enough to allow reasonable velocity within the current bands before new bands are started. For rectangular video, a full pan in 2-3 seconds is reasonable. A full pan is defined as a pan from one extreme end of the rectangular video to the other extreme end. Circular video implies a 4-6 second pan.
- the system parses the video content into 7-12 bands. Using 7 bands implies that 4 bands are decompressed, resulting in a 43% reduction of work relative to total decompression. Using 12 bands implies that 6 bands are decompressed, resulting in a 50% reduction of work relative to total decompression.
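- The work-reduction arithmetic quoted above is simply one minus the fraction of bands decoded, as the short check below confirms.

```python
def work_reduction(decoded_bands: int, total_bands: int) -> float:
    """Fraction of decompression work saved by decoding only the bands needed
    for the current view, relative to decompressing every band."""
    return 1.0 - decoded_bands / total_bands

# 4 of 7 bands decoded -> ~43% reduction; 6 of 12 bands decoded -> 50% reduction.
assert round(work_reduction(4, 7), 2) == 0.43
assert work_reduction(6, 12) == 0.5
```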
- display 36b can include an orientation inset screen or "thumbnail" 110, as shown in FIG. 15.
- the inset screen 110 includes a present display location box 112 for indicating the location of the presently selected view within the larger available visual content.
- the present display location box 112 becomes smaller if the user chooses to zoom in on a portion of presented content.
- the system can provide a zoom capability of one to four times magnification.
- the area of the inset screen 110 outside the location box 112 can be blank or can show the rest of the visual content as illustrated, in order to provide a "macroview".
- the system provides panning control.
- Panning and scrolling are somewhat similar visual presentations.
- Panning is defined herein as the rotation of a camera about a vertical axis, and collaterally, a panned image is the image resulting from such movement.
- a rectangular image is scrolled where it is moved horizontally or vertically so as to present a viewer with an apparently moving segment of the rectangular image.
- the display appears to "slide across" the larger content.
- Panning control can require perspective correction when the captured content exhibits perspective distortion caused by the camera lens. This correction may be performed by means of normal perspective correction methodologies including, but not necessarily limited to, perspective projection equations, image warping, and other methodologies well known to those having ordinary skill in the art.
- a user can scroll as illustrated in FIG. 16.
- the system can provide the user with fine panning or scrolling control such that section 103 has a width as small as one pixel.
- a user can control her view in the lateral or horizontal direction to an accuracy of one pixel width.
- a user can both pan and tilt as illustrated in FIG. 17.
- Where the system does not receive user view selection instructions, the system decodes default sections at 134 and displays a default view.
- a user can proactively select a view. Where the user's system enables substantially " VCR-like" functionality including play/pause/resume, the user can pause the action to inspect the entire transmitted image via panning, tilting and zooming.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2001245502A AU2001245502A1 (en) | 2000-03-07 | 2001-03-07 | Interactive multimedia transmission system |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18769900P | 2000-03-07 | 2000-03-07 | |
| US60/187,699 | 2000-03-07 | ||
| US18826400P | 2000-03-10 | 2000-03-10 | |
| US60/188,264 | 2000-03-10 | ||
| US54979700A | 2000-04-14 | 2000-04-14 | |
| US09/549,797 | 2000-04-14 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2001069911A2 true WO2001069911A2 (en) | 2001-09-20 |
| WO2001069911A3 WO2001069911A3 (en) | 2001-12-20 |
Family
ID=27392286
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2001/007320 Ceased WO2001069911A2 (en) | 2000-03-07 | 2001-03-07 | Interactive multimedia transmission system |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2001245502A1 (en) |
| WO (1) | WO2001069911A2 (en) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AU775005B2 (en) * | 2000-10-30 | 2004-07-15 | Canon Kabushiki Kaisha | Image transfer optimisation |
| WO2004059979A1 (en) * | 2002-12-31 | 2004-07-15 | British Telecommunications Public Limited Company | Video streaming |
| GB2400779A (en) * | 2003-04-17 | 2004-10-20 | Zoo Digital Group Plc | Creating video sequences representing transitions from a first view of an original asset to a second view of an original asset |
| GB2408867A (en) * | 2003-11-04 | 2005-06-08 | Zoo Digital Group Plc | Authoring an audiovisual product to enable scrolling of image data |
| US7212678B2 (en) | 2000-10-30 | 2007-05-01 | Canon Kabushiki Kaisha | Image transfer optimisation |
| WO2008127989A1 (en) * | 2007-04-11 | 2008-10-23 | At & T Intellectual Property I, L.P. | Method and system for video stream personalization |
| EP1627524A4 (en) * | 2003-03-20 | 2009-05-27 | Ge Security Inc | Systems and methods for multi-resolution image processing |
| WO2014044668A1 (en) * | 2012-09-24 | 2014-03-27 | Robert Bosch Gmbh | Client device, monitoring system, method for displaying images on a screen and computer program |
| GB2509954A (en) * | 2013-01-18 | 2014-07-23 | Canon Kk | Displaying a Region of Interest in High Resolution Using an Encapsulated Video Stream |
| DE102015002922A1 (en) | 2015-03-04 | 2016-09-08 | Oerlikon Textile Gmbh & Co. Kg | Machinery for the production or treatment of synthetic threads |
| EP3955584A1 (en) * | 2018-04-11 | 2022-02-16 | Alcacruz Inc. | Digital media system |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5912700A (en) * | 1996-01-10 | 1999-06-15 | Fox Sports Productions, Inc. | System for enhancing the television presentation of an object at a sporting event |
| US5894320A (en) * | 1996-05-29 | 1999-04-13 | General Instrument Corporation | Multi-channel television system with viewer-selectable video and audio |
-
2001
- 2001-03-07 WO PCT/US2001/007320 patent/WO2001069911A2/en not_active Ceased
- 2001-03-07 AU AU2001245502A patent/AU2001245502A1/en not_active Abandoned
Cited By (26)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7212678B2 (en) | 2000-10-30 | 2007-05-01 | Canon Kabushiki Kaisha | Image transfer optimisation |
| AU775005B2 (en) * | 2000-10-30 | 2004-07-15 | Canon Kabushiki Kaisha | Image transfer optimisation |
| WO2004059979A1 (en) * | 2002-12-31 | 2004-07-15 | British Telecommunications Public Limited Company | Video streaming |
| CN1732690B (en) * | 2002-12-31 | 2012-04-18 | 英国电讯有限公司 | Video streaming |
| US8681859B2 (en) | 2003-03-20 | 2014-03-25 | Utc Fire & Security Americas Corporation, Inc. | Systems and methods for multi-stream image processing |
| EP1627524A4 (en) * | 2003-03-20 | 2009-05-27 | Ge Security Inc | Systems and methods for multi-resolution image processing |
| EP1654864A4 (en) * | 2003-03-20 | 2009-05-27 | Ge Security Inc | Systems and methods for multi-stream image processing |
| GB2400779A (en) * | 2003-04-17 | 2004-10-20 | Zoo Digital Group Plc | Creating video sequences representing transitions from a first view of an original asset to a second view of an original asset |
| GB2408867A (en) * | 2003-11-04 | 2005-06-08 | Zoo Digital Group Plc | Authoring an audiovisual product to enable scrolling of image data |
| GB2408867B (en) * | 2003-11-04 | 2006-07-26 | Zoo Digital Group Plc | Data processing system and method |
| US10820045B2 (en) | 2007-04-11 | 2020-10-27 | At&T Intellectual Property I, L.P. | Method and system for video stream personalization |
| US9754353B2 (en) | 2007-04-11 | 2017-09-05 | At&T Intellectual Property I, L.P. | Method and system for video stream personalization |
| US9137497B2 (en) | 2007-04-11 | 2015-09-15 | At&T Intellectual Property I, Lp | Method and system for video stream personalization |
| WO2008127989A1 (en) * | 2007-04-11 | 2008-10-23 | At & T Intellectual Property I, L.P. | Method and system for video stream personalization |
| US10877648B2 (en) | 2012-09-24 | 2020-12-29 | Robert Bosch Gmbh | Client device, monitoring system, method for displaying images on a screen and computer program |
| WO2014044668A1 (en) * | 2012-09-24 | 2014-03-27 | Robert Bosch Gmbh | Client device, monitoring system, method for displaying images on a screen and computer program |
| GB2509954B (en) * | 2013-01-18 | 2016-03-23 | Canon Kk | Method of displaying a region of interest in a video stream |
| GB2509954A (en) * | 2013-01-18 | 2014-07-23 | Canon Kk | Displaying a Region of Interest in High Resolution Using an Encapsulated Video Stream |
| WO2016139207A1 (en) | 2015-03-04 | 2016-09-09 | Oerlikon Textile Gmbh & Co. Kg | Machine system for producing or treating synthetic threads |
| DE102015002922A1 (en) | 2015-03-04 | 2016-09-08 | Oerlikon Textile Gmbh & Co. Kg | Machinery for the production or treatment of synthetic threads |
| EP3955584A1 (en) * | 2018-04-11 | 2022-02-16 | Alcacruz Inc. | Digital media system |
| US11343568B2 (en) | 2018-04-11 | 2022-05-24 | Alcacruz Inc. | Digital media system |
| KR20220081386A (en) * | 2018-04-11 | 2022-06-15 | 알카크루즈 인코포레이티드 | Digital media system |
| US11589110B2 (en) | 2018-04-11 | 2023-02-21 | Alcacruz Inc. | Digital media system |
| KR102518869B1 (en) | 2018-04-11 | 2023-04-06 | 알카크루즈 인코포레이티드 | Digital media system |
| EP4440107A3 (en) * | 2018-04-11 | 2025-01-08 | Alcacruz Inc. | Digital media system |
Also Published As
| Publication number | Publication date |
|---|---|
| AU2001245502A1 (en) | 2001-09-24 |
| WO2001069911A3 (en) | 2001-12-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US5903264A (en) | Video delivery system and method for displaying an indexing slider bar | |
| KR100904649B1 (en) | Adaptive video processing circuitry and player using sub-frame metadata | |
| EP3059948B1 (en) | Stereoscopic video and audio recording method, stereoscopic video and audio reproducing method | |
| US6005621A (en) | Multiple resolution video compression | |
| US6317164B1 (en) | System for creating multiple scaled videos from encoded video sources | |
| US7242436B2 (en) | Selection methodology of de-interlacing algorithm of dynamic image | |
| US6219837B1 (en) | Summary frames in video | |
| US7836193B2 (en) | Method and apparatus for providing graphical overlays in a multimedia system | |
| KR100906957B1 (en) | Adaptive video processing using sub-frame metadata | |
| KR100915367B1 (en) | Video processing system that generates sub-frame metadata | |
| JP3617573B2 (en) | Format conversion circuit and television receiver including the format conversion circuit | |
| JP3301055B2 (en) | Display system and method | |
| JP2007536825A (en) | Stereoscopic television signal processing method, transmission system, and viewer expansion apparatus | |
| KR20130138750A (en) | Content transmitting device, content transmitting method, content reproduction device, content reproduction method, program, and content delivery system | |
| JP2004194328A (en) | Composition for joined image display of multiple mpeg video streams | |
| JPH09512148A (en) | Digital video signal transmitter and receiver | |
| JP2002290876A (en) | Method for presenting motion image sequences | |
| JP4148673B2 (en) | Video distribution system | |
| US6891547B2 (en) | Multimedia data decoding apparatus and method capable of varying capacity of buffers therein | |
| WO2001069911A2 (en) | Interactive multimedia transmission system | |
| WO2009093557A1 (en) | Multi-screen display | |
| EP1999952B1 (en) | Video substitution system | |
| KR100487684B1 (en) | Computer-implemented method for indexing locations in a video stream, interactive video delivery system, video display device | |
| US6400895B1 (en) | Method for optimizing MPEG-2 video playback consistency | |
| KR100686137B1 (en) | Digital broadcast receivers and how to edit and save captured images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
| REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |