
TW201029471A - Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system - Google Patents


Info

Publication number
TW201029471A
TW201029471A
Authority
TW
Taiwan
Prior art keywords
video stream
format
computer system
uncompressed video
uncompressed
Prior art date
Application number
TW098133808A
Other languages
Chinese (zh)
Inventor
David Andrew Thomas
Lee B Hinkle
Kent E Biggs
Original Assignee
Hewlett Packard Development Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co filed Critical Hewlett Packard Development Co
Publication of TW201029471A publication Critical patent/TW201029471A/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/64322 IP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system. At least some of the illustrative embodiments are methods comprising obtaining a video stream in a first digital format in a first computer system, decompressing the video stream from the first digital format to a second digital format creating an uncompressed video stream (the decompressing by the first computer system), then sending the uncompressed video stream to a second computer system, processing the uncompressed video stream by the second computer system, wherein the processing comprises converting color space depth of the uncompressed video stream and scaling the size of the uncompressed video stream, and displaying the uncompressed video stream on a display device.
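The division of labor claimed in the abstract can be sketched in outline. The following Python fragment is illustrative only and not part of the patent: the run-length stand-in codec, the function names, and the frame representation (a list of 32-bit luma values) are all assumptions made for the example.

```python
# Illustrative sketch of the claimed split: the first computer system (server)
# decompresses; the second (client) converts color depth, scales, and displays.
# The "codec" here is a stand-in: frames are lists of 32-bit luma values.

def server_decompress(compressed_frame):
    """Stand-in for the server-side decompression: expand run-length pairs."""
    raw = []
    for value, count in compressed_frame:
        raw.extend([value] * count)
    return raw  # the "uncompressed video stream" sent to the client

def client_process(raw_frame, target_bits=8, target_width=2):
    """Client side: color-depth-convert (32 -> 8 bit), then scale the size."""
    shift = 32 - target_bits
    converted = [v >> shift for v in raw_frame]   # color space depth conversion
    step = max(1, len(converted) // target_width)
    return converted[::step][:target_width]       # nearest-neighbour scaling

frame = server_decompress([(0xFF000000, 2), (0x80000000, 2)])
print(client_process(frame))  # → [255, 128]
```

In a real deployment the "send" step between the two functions would traverse the network of FIG. 1; here the two stages are simply chained in one process.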

Description

VI. DESCRIPTION OF THE INVENTION

TECHNICAL FIELD

The present invention relates to video streaming, and more particularly to decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system.

BACKGROUND

Several operating philosophies exist regarding the distribution of computing power within an organization (e.g., a company with hundreds or thousands of employees). Under one philosophy, most of the computing power resides at a central location (e.g., a plurality of high-end computer systems acting as servers), and the end-user computer systems have limited computing power (e.g., "thin" clients). In most thin-client situations, the end-user computer system acts merely as a terminal. Under another philosophy, the end-user computer systems have significant computing power, and the central server acts merely as a file server.

With respect to video, the operating philosophies likewise dictate how video is handled. Where most of the computing power resides at the central location, all of the video processing steps are performed by the server (e.g., decryption, decompression, color space depth conversion, and scaling), and the end-user computer system receives video ready for playback. Where the end-user machines have significant computing power, the server supplies only the encrypted/compressed video, and the end-user machine is responsible for the video processing (e.g., decryption, decompression, color space depth conversion, and scaling).

SUMMARY

In accordance with one embodiment, a method is provided comprising: obtaining a video stream in a first digital format in a first computer system; decompressing the video stream from the first digital format to a second digital format, creating an uncompressed video stream, the decompressing performed by the first computer system; then sending the uncompressed video stream to a second computer system; processing the uncompressed video stream by the second computer system, the processing comprising converting the color space depth of the uncompressed video stream and scaling the size of the uncompressed video stream; and displaying the uncompressed video stream on a display device.

In accordance with another embodiment, a system is provided comprising: a server comprising a processor, a memory coupled to the processor, and a decompression subsystem implemented by the server, the decompression subsystem configured to obtain a video stream in a first format and to decompress the video stream to produce an uncompressed video stream, the server configured to send the uncompressed video stream to a client computer over a network; and a client computer coupled to the server, the client computer comprising a processor, a memory coupled to the processor, and a display device coupled to the processor, the client computer configured to receive the uncompressed video stream, to scale the uncompressed video stream, and to display the uncompressed video stream on the display device.

BRIEF DESCRIPTION OF THE DRAWINGS

For a detailed description of exemplary embodiments, reference will now be made to the accompanying drawings, in which: FIG. 1 shows illustrative video processing steps in accordance with an embodiment; FIG. 2 shows a system in accordance with an embodiment; FIG. 3 shows a computer system in accordance with an embodiment; and FIG. 4 shows a method in accordance with an embodiment.

NOTATION AND NOMENCLATURE

Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function.

In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to...". Also, the term "couple" or "couples" is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.

"Decompression" and "decoding" are used interchangeably, as are "compression" and "encoding".

"Hardware decoder" shall mean a hardware device specifically designed to perform compression and/or decompression operations on a video stream. The fact that a hardware decoder may execute firmware on an internal processor shall not negate its status as a hardware decoder.

DETAILED DESCRIPTION

The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.

The various embodiments are directed to dividing video processing tasks between a server computer system and client computer systems so as to make good use of the capabilities of the end-user devices. To fully describe the division of tasks, the discussion first presents an illustrative system, then an illustrative sequence of video processing tasks, and finally how the division of those tasks among the computer systems affects the various embodiments. Although the discussion refers to video streams, the various tasks operate on discrete portions of the video (e.g., on a frame-by-frame basis) on a continuing basis as the video streams.

FIG. 1 shows a computer system acting as a server 30, which couples to a plurality of client computer systems 32. The server 30 may be a standalone server, or the server 30 may be associated with a plurality of other servers at a central location (e.g., a plurality of "blade" servers in a rack-mount system). Each client 32 is likewise a computer system; however, the computing power of each client 32 is in most cases lower, or significantly lower, than that of the server 30. The computer network 34 may be any network over which the server 30 can communicate with each client 32, such as a local area network (LAN), a wide area network (WAN), a wired network (e.g., an Ethernet® network), or a wireless network (e.g., cellular broadband, an IEEE 802.11(b), (g), (n) compliant wireless network, or BLUETOOTH®).

FIG. 2 shows a server 30 in greater detail. In particular, the server 30 comprises a processor 40 coupled to a memory device 42 by way of a bridge device 44. Although only one processor 40 is shown, multiple-processor systems, and systems where the "processor" has multiple processing cores, may be equivalently implemented. The processor 40 couples to the bridge device 44 by way of a processor bus 46, and the memory 42 couples to the bridge device 44 by way of a memory bus 48. The memory 42 is any volatile or non-volatile memory device, or array of memory devices, such as random access memory (RAM) devices, dynamic RAM (DRAM) devices, static DRAM (SDRAM) devices, double-data-rate DRAM (DDR DRAM) devices, or magnetic RAM (MRAM) devices.

The bridge device 44 comprises a memory controller (not shown) that asserts control signals for reading and writing the memory 42, the reading and writing both by the processor 40 and by other devices coupled to the bridge device 44 (i.e., direct memory access (DMA)). The memory 42 is the working memory for the processor 40, which stores programs executed by the processor 40 and data structures used by programs executing on the processor 40. In some cases, programs held in the memory 42 are first copied from other devices (e.g., the hard drive 52 discussed below) prior to execution.

The bridge device 44 not only bridges the processor 40 to the memory 42, but also bridges the processor 40 and memory 42 to other devices. For example, the server 30 comprises a super input/output (I/O) controller 50. The super I/O controller 50 interfaces various I/O devices to the server computer system. In the server 30 of FIG. 2, the super I/O controller 50 enables coupling and use of a non-volatile memory device 52 (such as a hard drive (HD)), a pointing device or mouse 54, and a keyboard 56. The super I/O controller 50 may also enable other devices not specifically shown (e.g., compact disc read-only memory (CDROM) drives, universal serial bus (USB) ports), and is referred to as "super" because of the many I/O devices for which it enables use.

Still referring to FIG. 2, the bridge device 44 further bridges the processor 40 and memory 42 to a graphics adapter 58 and a network adapter 60. The graphics adapter 58 is any graphics adapter suitable for reading display memory and driving a monitor 62 with the graphics images represented in the display memory. In some embodiments, the graphics adapter 58 internally comprises a memory area to which graphics primitives are written by the processor 40 and/or by DMA between the memory 42 and the graphics adapter 58. The graphics adapter 58 couples to the bridge device by way of any suitable bus system, such as a peripheral components interconnect (PCI) bus or an advanced graphics port (AGP) bus. In some embodiments, the graphics adapter 58 is integral with the bridge device 44. In some cases (e.g., "blade" type servers), the graphics adapter and/or display device may be omitted.

The network adapter 60 enables the server 30 to communicate with other computer systems over a computer network. In some embodiments, the network adapter 60 provides access to a local area network (LAN) or wide area network (WAN) by way of a hardwired connection (e.g., Ethernet), while in other embodiments the network adapter 60 provides access to the LAN or WAN through a wireless networking protocol (e.g., IEEE 802.11(b), (g), (n)). In still other embodiments, the network adapter 60 provides access to the Internet through a wireless broadband connection, such as a cellular wireless broadband Internet connection. Thus, the client computer systems 32 (FIG. 1) may be locally coupled (i.e., within a few feet), or may be many miles from the server 30. Although FIG. 2 is discussed with reference to a server 30, the description applies equally to any computer system 32.

FIG. 3 illustrates a series of tasks performed so that a video stream may be displayed on a computer system. In particular, video may be stored on a non-volatile device, such as a digital versatile disc (DVD) 10. The video is stored on the DVD 10 in a binary format, such as according to eight-to-fourteen modulation (EFM). In other embodiments, the video is stored on other types of non-volatile memory, such as the hard drive 52 (FIG. 2) of the server 30. In some cases, the video on the illustrative hard drive 52 may have been previously copied from the DVD 10 to the hard drive 52 (as shown by dashed line 14). The video may be compressed or encoded according to any of a variety of video compression schemes, such as Moving Picture Experts Group (MPEG) MPEG-2, MPEG-4, Windows Media Video format (WMV), Real Media format (RM), Advanced Streaming Format (ASF), QuickTime format, or AVI format. Moreover, in some cases the compressed video may also be encrypted.

Regardless, if the video stream is encrypted, the encrypted video stream is decrypted, as illustrated by block 16. The decryption may be performed in a variety of ways. For example, in some embodiments the encrypted video stream is decrypted by software executing on the main processor of a computer system. In other embodiments, the decryption is performed by a hardware component of the computer system specifically designed to perform the decryption (i.e., an application-specific integrated circuit (ASIC)), and the hardware component may itself have an internal processor that executes software. In yet other embodiments, the decryption may be accomplished by a combination of software on the main processor and hardware components. Where the video is not encrypted, the decryption task may be omitted.

Still referring to FIG. 3, next the video stream (in a first digital format, such as MPEG) is decompressed or decoded, as illustrated in block 18. Here again, the decompression may be performed in a variety of ways. For example, in some embodiments the video stream is decompressed by a software compression/decompression (CODEC) system executed by the main processor of a computer system. In other embodiments, the decompression is performed by a hardware CODEC or a hardware decoder of the computer system, the hardware component specifically designed to perform the decompression (i.e., an application-specific integrated circuit (ASIC)), and the hardware decoder may itself have an internal processor that executes software. In yet other embodiments, the decompression may be accomplished by a combination of software on the main processor and a hardware decoder. Regardless of the precise implementation of the decompression, the decompression task accepts the video stream in the first digital format (e.g., MPEG) and creates an uncompressed video stream.

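The description notes that decompression may be performed by a software CODEC on the main processor, by a dedicated hardware decoder, or by a combination of the two. A minimal sketch of such a dispatch follows; every class and function name is invented for illustration, and the "decoding" is a trivial stand-in rather than a real CODEC.

```python
# Hedged sketch of the dispatch alluded to above: prefer a hardware decoder
# (ASIC) when present, and fall back to a software CODEC on the main processor.
# All names here are assumptions made for the example.

class SoftwareCodec:
    def decode(self, data):
        # Stand-in: treat each byte of the "compressed" stream as a raw sample.
        return [b for b in data]

class HardwareDecoder:
    def __init__(self, available):
        self.available = available

    def decode(self, data):
        if not self.available:
            raise RuntimeError("no hardware decoder present")
        return [b for b in data]

def decompress(data, hw, sw):
    try:
        return hw.decode(data)   # prefer the ASIC path when present
    except RuntimeError:
        return sw.decode(data)   # fall back to the main-processor CODEC

out = decompress(b"\x10\x20", HardwareDecoder(available=False), SoftwareCodec())
print(out)  # → [16, 32]
```

The same shape accommodates the "combination" case the description mentions, by letting the hardware path hand partially decoded data to the software path.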
In accordance with at least some embodiments, the compressed video stream is decompressed into the YUV color space. That is, the compressed video stream is converted into a series of YUV values, where each set of YUV values is applicable to a single point (e.g., pixel) on the display: the Y value is the luma component, and U and V are chroma components. In other embodiments, the compressed video stream is decompressed into the Y':Cb:Cr color space. That is, the compressed video stream is converted into a series of Y':Cb:Cr values, where each set of Y':Cb:Cr values is applicable to a single point (e.g., pixel) on the display: the Y' value is the luma component, and Cb and Cr are chroma components. Other color spaces (e.g., Y':Pb:Pr, or other packed systems based on red-green-blue (RGB)) may be equivalently used.

Next, the uncompressed video stream may be subjected to color space depth conversion, as shown in block 20 of FIG. 3. In particular, each set of values of the uncompressed video stream represents the luma and/or chroma of a particular point (e.g., pixel) on the screen, and each value may span a certain number of bits. However, the display device on which the uncompressed video stream is to be displayed may not have the same color space depth (i.e., number of bits) as the uncompressed video stream. As the name implies, color space depth conversion involves changing and/or adjusting the number of bits spanned by each value to match, or substantially match, the capabilities of the computer system on which the uncompressed video is to be displayed. For example, the Y', Cb, and Cr values in the MPEG standard each span as many as 32 bits, while the display device on which the uncompressed video is to be displayed may have only 8 bits of resolution. Thus, in some embodiments the various components of the uncompressed video stream are color space depth converted prior to display of the video. Where the color space depth of the uncompressed video stream is substantially the same as that of the computer system on which it is to be displayed, the color space depth conversion may be omitted.

Next, the uncompressed video stream may be scaled in size, as illustrated by block 22. In particular, the uncompressed video stream may have a particular size (aspect ratio) at which it was recorded and/or rendered. However, the size of the display device, and/or the size of the display area on the display screen to be used for the uncompressed video stream, may not match the size at which the uncompressed video stream was recorded and/or rendered. Thus, prior to actually displaying the uncompressed video, the video may need to be scaled in size to fit the intended display size. For example, each illustrative Y':Cb:Cr value may be scaled so as to be applied across multiple pixels. To reduce size, multiple illustrative Y':Cb:Cr values may be combined so as to be applied to a single pixel. Thus, in some embodiments the various components of the uncompressed video stream are scaled prior to display of the video. Where no scaling is needed, the scaling task may be omitted.

Finally, after the decryption (if any), decompression, color space depth conversion (if any), and scaling (if any) tasks, the video is displayed on a display device, as shown in block 24. For display of streaming video, the decryption, decompression, color space depth conversion, and scaling may operate on a frame-by-frame basis throughout the video streaming and display process.

In some related-art systems and/or methodologies, the central computing device (e.g., one or more high-end servers) performs all of the illustrative video processing tasks (e.g., decryption, decompression, color space depth conversion, and scaling), and the client machines are supplied video ready for display. This operating philosophy concentrates the software licenses and/or specialized hardware devices at the server.

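The color space depth conversion of block 20 and the size scaling of block 22 reduce, in the simplest case, to per-component arithmetic. The sketch below is a simplified assumption, not the patent's method: depth conversion keeps only the display's most significant bits, and size reduction averages two neighbouring values into one output pixel.

```python
# Illustrative per-component arithmetic for blocks 20 and 22 (assumed,
# simplified). A 32-bit component is truncated to the display's 8 bits,
# and two neighbouring values are combined into a single pixel.

def depth_convert(component_32bit, display_bits=8):
    """Keep only the display's most significant bits of a 32-bit component."""
    return component_32bit >> (32 - display_bits)

def downscale_pair(a, b):
    """Combine two neighbouring component values into one output pixel."""
    return (a + b) // 2

y_prime = 0xC0000000                 # 32-bit luma sample
print(depth_convert(y_prime))        # → 192, fits an 8-bit display
print(downscale_pair(200, 100))      # → 150, two pixels merged into one
```

Because both operations are cheap integer arithmetic, they suit a thin client; no CODEC license or specialized hardware is implied by either step.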
m ,,_可持有對一個特定軟體⑶脈的 固'^,部提供未壓縮之視訊給並未被這個特定軟體 c〇DEC認證的多個客戶端電腦系統。又如另-個範例,-個飼服器可實施—個專業硬體解碼器,並提供視訊給並不實 威硬體解碼器的多個客戶端f腦系統。然而,視訊處理為 松集運鼻的’且㈣服純行财的視訊會關蘭服器可 服務的使用者之數量及/或可執行的其他作業之數量。 曰在其他相關技藝线及/或方法論中巾央運算裝置僅 提供未壓縮之視訊串流,而客戶端機器執行所有的:示性視 訊處理步驟(如_、解壓縮、色毅間深度轉換與依比例 縮放)這種操作哲學將大量的運算貞載㈣㈣移開但 強迫規定各個客戶錢n必賴證及/或備有執行解壓縮所 必要的軟體(如軟體C0DEC)及/或硬體(如硬體解碼器), 且各個客戶端機器必須具有執行解密、轉換色彩深度與依比 例縮放所必要的運算能力。 大邛分的客戶端電腦系統’在具有相比於高階伺服器來 °尤較為跫限的運算能力時,會具有必要的運算能力以執行所 有或部份的視訊處理步驟,諸如色彩深度轉換及/或依比例 縮放等等。此外,色彩深度轉換與依比例縮放並不需要專有 的軟體應用及/或專業硬體。因此,藉由將部份的視訊處理 卸載到客戶端電腦系統,當維持保有專有軟體(如軟體 CODEC)之能力及/或於中央位置之硬體時,飼服器系統可 13 201029471 服務比所有的視訊處理步驟皆在伺服器階執行所能服務的 更多的客戶。此外,於伺服器端管理c〇DEC:提供資訊技術 師在某種程度上控制何種視訊可為客戶端取用。 回到第1圖,依據多種實施例,且針對在一或多個客戶 端電腦系統32上的視訊顯示,伺服器3〇執行一部分的視訊 處理,而各個客戶端32執行剩餘部份的視訊處理。具體上, 依據多種實施例’伺服器30執行在第3圖之虛線36之上的 視訊處理步驟(即解密與解壓縮),並將已解密與已解壓縮 的視訊串流送到客戶端32。客戶端32接著執行在第3圖之 虛線36之下視訊處理步驟(即色彩空間深度轉換與依比例 縮放尺寸),然後顯示此視訊串流。 將視訊處理作業用這種方式分割會限制並集中在伺服 器30上的昂貴的及/或需認証的處理。例如,有限數量的軟 體CODEC及/或硬體解碼器可駐於伺服器3〇中,而非在各 個客戶鳊電腦32中。此外,將一部分視訊處理職責對客戶 端32的分配使得個伺服器3〇能夠將視訊提供給更多的客戶 端32。色彩空間深度轉換及/或依比例縮放尺寸之動作可不 需要允許瘦客戶端與終端機具有有限的運算能力以在客戶 端32階執行這些作業的專有軟體及/或硬體。此外,在伺服 器30與客戶端32間分割視訊處理作業可對付一或多種品質 問題,諸如不穩定的、奇怪的或顛簸的視訊及/或不一致的 音訊。 第4圖繪示依據至少一些實施例的—個方法。首先,此 方法從區塊400 _。在區塊404,一第_電腦系統獲得第 14 201029471 一數位格式的-視訊串流。在區塊顿,由第一電腦系統解 壓縮此視訊串流’第m统創造第二數位格式的一已解 壓縮的視訊串流。因此,在區塊412,已解壓縮的視訊串流 被送到—第二電腦系統。在-些實施例中,第二電腦系統可 為無法解壓縮第-數位格式的,因而便要求在區塊中 由第電胳系統所做的對第一數位格式之解壓縮。可利 用一或多個傳輸協定來發送未壓縮的視訊串流,例如,可將 Φ 未壓縮的視訊串流分割成—争封包,並以傳輸控制協定/網 協疋(TCP-IP)封包送出。此外,若考慮安全性則可將 這些例示性的TCP_IP封包以一個安全協定來實施,以確保 有特疋客戶或客戶群可取用此視訊串流。第二電腦系統可 具有或不具有解壓縮視訊串流所需的軟體CODEC及/或硬 體解碼器。 仍參考第4圖,在區塊416中,未壓縮的視訊串流係由 第一電腦系統處理,包含轉換色彩空間深度轉換與依比例縮 春放。例如,可從第一電腦系統以32位元亮度及/或彩度成份 將未壓縮的視訊串流送出,然而,耦接至客戶端32之顯示 震置可能只有8位元解析度的能力。因此,可由客戶端32 在區塊416執行一個色彩空間深度轉換,以確保視訊符合此 顯示裝置之解析度。至於依比例縮放,則可在耦接至第二電 腦系統的顯示裝置之尺寸及/或長寬比與由第一電腦系統所 供應的視訊之尺寸及/或長寬比不同時,做這樣的縮放。在 區塊420中,由第二電腦系統以適當的解析度、尺寸及/或 15 201029471 長寬比來顯示未壓縮的視訊串流。此方法之後在區塊424 終止。 由於此所提供之說明,熟於此技者可以很容易地結合如 所說明之軟體與適當的一般用途或特定用途電腦硬體,來創 造依據多種實施例的電腦系統及/或電腦子部件,以創造用 以實施這些方法或多種實施例的電腦系統及/或電腦子部 件,及/或創造用以儲存軟體程式以實施多種實施例之方法 面的電腦可讀媒體。 
In addition, color depth conversion and scaling do not require proprietary software applications and/or special-purpose hardware. Thus, by offloading a portion of the video processing to the client computer systems, a server system can serve more clients than if all of the video processing steps were performed at the server level, while the ability to keep the proprietary software (e.g., software CODECs) and/or hardware at a central location is maintained. Moreover, managing the CODECs at the server side gives information technology staff a degree of control over which video the clients may access.

Returning to Figure 1, in accordance with various embodiments, and with respect to video displayed on one or more client computer systems 32, the server 30 performs a portion of the video processing and each client 32 performs the remaining portion. Specifically, in accordance with various embodiments, the server 30 performs the video processing steps above the dashed line 36 of Figure 3 (i.e., decryption and decompression) and sends the decrypted and decompressed video stream to the client 32. The client 32 then performs the video processing steps below the dashed line 36 of Figure 3 (i.e., color space depth conversion and scaling), and then displays the video stream.

Dividing the video processing tasks in this way confines the expensive and/or license-encumbered processing to the server 30. For example, a limited number of software CODECs and/or hardware decoders may reside on the server 30 rather than on each client computer 32. Moreover, allocating a portion of the video processing duties to the clients 32 enables a single server 30 to provide video to more clients 32. Because the color space depth conversion and/or scaling operations may not require proprietary software and/or hardware, thin clients and terminal machines with limited computational capability can perform these tasks at the client 32.
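A minimal sketch of this division of labor follows. The run-length "codec", the stage functions, and the string frames are all illustrative assumptions rather than anything specified by the patent; the point is only that the decompression (and any license it implies) stays on the server, while the client needs no codec at all:

```python
def rle_decompress(pairs):
    # Toy run-length "codec" standing in for a real CODEC; only the
    # server needs it (and any license that comes with it).
    return "".join(symbol * count for symbol, count in pairs)

def server_side(compressed_frames, decompress, decrypt=lambda f: f):
    # Server 30: the steps above dashed line 36, i.e. decryption (if any)
    # followed by decompression into raw frames.
    return [decompress(decrypt(frame)) for frame in compressed_frames]

def client_side(raw_frames, convert_depth, scale, display):
    # Client 32: the steps below dashed line 36, i.e. color depth
    # conversion, scaling, and display. No CODEC is required here.
    for frame in raw_frames:
        display(scale(convert_depth(frame)))

shown = []
raw = server_side([[("a", 3)], [("b", 2)]], decompress=rle_decompress)
client_side(raw, convert_depth=str.upper, scale=lambda f: f,
            display=shown.append)
print(shown)  # ['AAA', 'BB']
```

In a real deployment the list handed from `server_side` to `client_side` would instead travel over the network, for example as the TCP/IP packet stream described below.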
Furthermore, dividing the video processing tasks between the server 30 and the clients 32 may address one or more quality problems, such as unstable, odd, or jerky video and/or inconsistent audio.

Figure 4 illustrates a method in accordance with at least some embodiments. The method starts at block 400. At block 404, a first computer system obtains a video stream in a first digital format. At block 408, the video stream is decompressed by the first computer system, the first computer system creating an uncompressed video stream in a second digital format. Then, at block 412, the uncompressed video stream is sent to a second computer system. In some embodiments, the second computer system may be unable to decompress the first digital format, thereby necessitating the decompression of the first digital format performed by the first computer system. The uncompressed video stream may be sent using one or more transport protocols; for example, the uncompressed video stream may be divided into a series of packets and sent as Transmission Control Protocol / Internet Protocol (TCP/IP) packets. Moreover, if security is a concern, these illustrative TCP/IP packets may be carried within a secure protocol to ensure that only a particular client or group of clients can access the video stream. The second computer system may or may not have the software CODEC and/or hardware decoder needed to decompress the video stream.

Still referring to Figure 4, at block 416 the uncompressed video stream is processed by the second computer system, the processing including color space depth conversion and scaling. For example, the uncompressed video stream may be sent from the first computer system with 32-bit luminance and/or chrominance components, while the display device coupled to the client 32 may be capable of only 8-bit resolution. Thus, a color space depth conversion may be performed by the client 32 at block 416 to ensure that the video matches the capability of the display device.
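The block-416 depth conversion can be sketched as a simple requantization. The function below is an illustrative assumption, not an implementation from the patent: samples delivered at a higher bit depth than the display supports are shifted down to the display's 8-bit range.

```python
def convert_depth(samples, src_bits, dst_bits=8):
    """Requantize integer samples from src_bits to dst_bits of precision."""
    if src_bits == dst_bits:
        return list(samples)
    shift = src_bits - dst_bits          # assumes src_bits >= dst_bits
    return [s >> shift for s in samples]

# A 16-bit luminance ramp reduced to the 8-bit range of the display:
print(convert_depth([0, 32768, 65535], src_bits=16))  # [0, 128, 255]
```

Dropping the low-order bits this way is the cheapest possible conversion, which is exactly why a thin client with limited computational capability can be trusted with it.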
As for the scaling, such scaling may be performed when the size and/or aspect ratio of the display device coupled to the second computer system differ from the size and/or aspect ratio of the video supplied by the first computer system. At block 420, the uncompressed video stream is displayed by the second computer system at the appropriate resolution, size, and/or aspect ratio. The method then ends at block 424.

From the description provided herein, those skilled in the art can readily combine the software described with appropriate general-purpose or special-purpose computer hardware to create computer systems and/or computer subcomponents in accordance with the various embodiments, to create computer systems and/or computer subcomponents for carrying out the methods of the various embodiments, and/or to create computer-readable media for storing software programs that implement the method aspects of the various embodiments.

The above discussion is merely illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the appended claims embrace all such variations and modifications.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows illustrative video processing steps in accordance with an embodiment;
Figure 2 shows a system in accordance with an embodiment;
Figure 3 shows a computer system in accordance with an embodiment; and
Figure 4 shows a method in accordance with an embodiment.

DESCRIPTION OF REFERENCE NUMERALS

10... digital versatile disc (DVD)
14... dashed line
16-24, 400-424... blocks
30... server / server computer system
32... client / client computer system
34... computer network
36... dashed line
40... processor
42... memory / memory device
44... bridge device
46... processor bus
48... memory bus
50... super input/output (I/O) controller
52... hard disk / non-volatile memory device
54... pointing device / mouse
56... keyboard
58... graphics adapter
60... network adapter
62... monitor
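The client-side scaling described at blocks 416-420 amounts to two decisions: pick the largest target size that fits the display while preserving the source aspect ratio, then resample. The sketch below is an illustrative assumption, not an implementation from the patent; it uses nearest-neighbor resampling, the simplest possible filter, where a real client might use something better:

```python
def fit_size(src_w, src_h, disp_w, disp_h):
    # Largest width x height that fits the display while preserving the
    # source aspect ratio; integer arithmetic avoids float rounding.
    if disp_w * src_h <= disp_h * src_w:   # width is the limiting dimension
        return disp_w, max(1, disp_w * src_h // src_w)
    return max(1, disp_h * src_w // src_h), disp_h

def scale_nearest(frame, dst_w, dst_h):
    # frame is a list of rows, each row a list of pixel values.
    src_h, src_w = len(frame), len(frame[0])
    return [[frame[y * src_h // dst_h][x * src_w // dst_w]
             for x in range(dst_w)]
            for y in range(dst_h)]

# 1080p video shown on a 1024x768 display keeps its 16:9 shape:
print(fit_size(1920, 1080, 1024, 768))        # (1024, 576)
print(scale_nearest([[1, 2], [3, 4]], 4, 4))
```

Both functions are cheap, fixed-point operations, consistent with the description's point that thin clients and terminals can handle this step without proprietary software or special-purpose hardware.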


Claims (1)

VII. Claims:

1. A method, comprising:
   obtaining, at a first computer system, a video stream in a first digital format;
   decompressing the video stream from the first digital format into a second digital format to create an uncompressed video stream, the decompressing performed by the first computer system; then
   sending the uncompressed video stream to a second computer system;
   processing the uncompressed video stream by the second computer system, wherein the processing comprises converting a color space depth of the uncompressed video stream and scaling a size of the uncompressed video stream; and
   displaying the uncompressed video stream on a display device.

2. The method of claim 1, wherein the first digital format is at least one format selected from the group consisting of: a Moving Picture Experts Group (MPEG) format, the Windows Media Video (WMV) format, the Real Media (RM) format, the Advanced Streaming Format (ASF), the QuickTime format, and the AVI format.

3. The method of claim 1, wherein the decompressing comprises creating the uncompressed video stream in at least one format selected from the group consisting of: a YUV format; and a Y:Cr:Cb format.

4. The method of claim 1, wherein the sending comprises sending the uncompressed video stream as a series of TCP/IP packets.

5. The method of claim 1, wherein the decompressing further comprises decompressing by at least one selected from the group consisting of: software executing on a processor, the software also executing on an operating system of the first computer system; and a hardware decoder.

6. The method of claim 1, further comprising decrypting the digital video stream before decompressing the digital video stream.

7. A system, comprising:
   a server comprising:
      a processor;
      a memory coupled to the processor; and
      a decompression subsystem implemented by the server, the decompression subsystem configured to obtain a video stream in a first format and configured to decompress the video stream to produce an uncompressed video stream,
   the server configured to send the uncompressed video stream to a client computer over a network; and
   a client computer coupled to the server, the client computer comprising:
      a processor;
      a memory coupled to the processor; and
      a display device coupled to the processor,
   the client computer configured to receive the uncompressed video stream, to scale the uncompressed video stream, and to display the uncompressed video stream on the display device.

8. The system of claim 7, wherein the client computer is further configured to convert a color space depth of the uncompressed video stream.

9. The system of claim 7, wherein the decompression subsystem of the server is at least one selected from the group consisting of: a software compression/decompression system executing on the processor; and a hardware decoder.

10. The system of claim 7, further comprising a decryption system preceding the decompression system, wherein the video stream is decrypted before reaching the decompression system.
TW098133808A 2008-10-23 2009-10-06 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system TW201029471A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2008/080870 WO2010047706A1 (en) 2008-10-23 2008-10-23 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system

Publications (1)

Publication Number Publication Date
TW201029471A true TW201029471A (en) 2010-08-01

Family

ID=42119562

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098133808A TW201029471A (en) 2008-10-23 2009-10-06 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system

Country Status (2)

Country Link
TW (1) TW201029471A (en)
WO (1) WO2010047706A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013081624A1 (en) * 2011-12-02 2013-06-06 Hewlett-Packard Development Company, L.P. Video clone for a display matrix
CA2908701C (en) 2013-04-05 2016-07-19 Media Global Links Co., Ltd. Ip uncompressed video encoder and decoder

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5262875A (en) * 1992-04-30 1993-11-16 Instant Video Technologies, Inc. Audio/video file server including decompression/playback means
WO2002097584A2 (en) * 2001-05-31 2002-12-05 Hyperspace Communications, Inc. Adaptive video server
JP2007004301A (en) * 2005-06-21 2007-01-11 Sony Corp Computer, data processing method, program, and communication method

Also Published As

Publication number Publication date
WO2010047706A1 (en) 2010-04-29
