
HK1141893B - Video camera - Google Patents

Video camera

Info

Publication number
HK1141893B
Authority
HK
Hong Kong
Prior art keywords
image data
color
camera
green
sensing cells
Prior art date
Application number
HK10108352.3A
Other languages
Chinese (zh)
Other versions
HK1141893A1 (en)
Inventor
James Jannard
Thomas Graeme Nattress
Original Assignee
Red.Com, Llc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Red.Com, Llc.
Priority claimed from PCT/US2008/060126 (WO2008128112A1)
Publication of HK1141893A1
Publication of HK1141893B


Description

Video camera
Technical Field
The present invention relates to a digital camera, for example, a digital camera for capturing still or moving images, and more particularly, to a digital camera that compresses image data.
Background
Despite the availability of digital video cameras, most producers of motion pictures and some television broadcast media still rely on film cameras. Film provides very high resolution images that an editor can edit by conventional methods. More recently, however, such film is often scanned, digitized, and edited digitally.
Disclosure of Invention
Some currently available digital video cameras include high-resolution image sensors and thus output high-resolution video. However, the image processing and compression techniques widely used on these cameras are too lossy to be accepted by the high-end markets mentioned above: they eliminate too much raw image data. One aspect of at least one embodiment disclosed herein includes the realization that the video quality acceptable to these high-end markets (e.g., most of the motion picture market) can be satisfied by a camera that can capture and store raw or substantially raw video data having a resolution of at least about 2k and a frame rate of at least about 23 frames per second.
Thus, according to one embodiment, a camera may include a portable housing, and a lens assembly supported by the housing and configured to focus light. The light sensing device may be configured to convert the focused light to raw image data having a resolution of at least 2k at a frame rate of at least about 23 frames per second. The camera may also include a memory device and an image processing system configured to compress and store raw image data to the memory device at a frame rate of at least about 23 frames per second and a compression rate of at least 6:1, while remaining substantially visually lossless.
According to another embodiment, a method of recording live video with a camera may include directing light toward a light sensing device. The method may also include converting light received by the light sensing device into raw digital image data at a frame rate of at least about 23 frames per second, compressing the raw digital image data, and recording the raw image data to a storage device at a frame rate of at least about 23 frames per second.
According to yet another embodiment, a camera may include a lens assembly supported by a housing and configured to focus light and a light sensing device configured to convert the focused light into a signal representative of raw image data of the focused light. The camera may also include a memory device and means for compressing and recording the raw image data having a frame rate of at least about 23 frames per second.
According to yet another embodiment, a camera may include a portable housing having at least one handle configured to allow a user to manipulate an orientation of at least one degree of freedom of movement with respect to the housing during a video recording operation of the camera. A lens assembly may include at least one lens supported by the housing and configured to focus light onto a plane disposed within the housing. The light sensing device may be configured to convert the focused light into raw image data having a horizontal resolution of at least 2k and a frame rate of at least about 23 frames per second. The storage device may also be configured to store video image data. The image processing system may be configured to compress and store raw image data to the storage device at a frame rate of at least about 23 frames per second and a compression rate of at least 6:1, while remaining substantially visually lossless.
Another aspect of at least one embodiment disclosed herein includes the realization that, because the human eye is more sensitive to green wavelengths than to any other color, modifying the image data output by the image sensor based on the green image data can improve the compressibility of the data while also providing a higher quality video image. One such technique can include subtracting the magnitude of the detected green light from the magnitudes of the detected red and/or blue light prior to compressing the data. This can convert the red and/or blue image data into a more compressible form. For example, in known processes for converting gamma-corrected RGB data to Y'CbCr, the image is "decorrelated", leaving most of the image data in the Y' component (also known as "luma") so that the remaining chroma components are more compressible. However, the known techniques for converting to Y'CbCr cannot be applied directly to Bayer pattern data, because the color data spaces are not correlated in the same way: Bayer pattern data comprises twice as much green image data as blue or red image data. The green-subtraction method according to some embodiments disclosed herein can be similar to the Y'CbCr conversion described above in that most of the image data remains with the green image data, leaving the remaining data in a more compressible form.
Further, the process of subtracting the green image data may be reversed to retain all of the original data. Thus, systems and methods produced in conjunction with this technique may provide lossless or visually lossless and improved compressibility of video image data.
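The reversibility described above can be illustrated with a minimal Python sketch (illustrative only, and not part of the patent disclosure; the 8-bit sample values and helper names are hypothetical):

```python
def subtract_green(red, green):
    # Store the red (or blue) sample as an offset from a green sample.
    return red - green

def restore_red(diff, green):
    # Reverse the subtraction exactly, so no information is lost.
    return diff + green

# A gray region: red and green magnitudes are nearly equal,
# so the stored differences collapse to values near zero.
reds = [128, 130, 127, 129]
greens = [128, 128, 128, 128]

diffs = [subtract_green(r, g) for r, g in zip(reds, greens)]
print(diffs)  # [0, 2, -1, 1]

restored = [restore_red(d, g) for d, g in zip(diffs, greens)]
assert restored == reds  # the round trip is exactly lossless
```

Because the same green values are available on reconstruction, the subtraction loses nothing, which is what allows the technique to remain lossless while still improving compressibility.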
Thus, according to one embodiment, a camera may include a lens assembly supported by a housing and configured to focus light and a light sensing device configured to convert the focused light into raw signals representing image data of at least first, second, and third colors of the focused light. The image processing module may be configured to modify image data of at least one of the first and second colors based on image data of the third color. In addition, the camera may include a storage device and a compression device configured to compress the image data of the first, second, and third colors and store the compressed image data to the storage device.
According to another embodiment, a method of processing an image may be provided. The method can comprise the following steps: converting the image into first image data representing a first color, second image data representing a second color, and third image data representing a third color; modifying at least the first image data and the second image data based on the third image data; compressing the third image data and the modified first and second image data; and storing the compressed data.
According to yet another embodiment, a camera may include a lens assembly supported by a housing and configured to focus light. The light sensing device may be configured to convert the focused light into raw signals representing image data of at least first, second, and third colors of the focused light. The camera may further include: means for modifying image data of at least one of the first and second colors based on image data of the third color; a memory device; and a compression device configured to compress the image data of the first, second, and third colors and store the compressed image data to the storage device.
Drawings
FIG. 1 is a block diagram illustrating a system that may include hardware and/or may be configured to perform a method for processing video image data, according to one embodiment;
FIG. 2 is an alternative embodiment of the housing of the camera schematically shown in FIG. 1;
FIG. 3 is a schematic distribution of an image sensor with a Bayer pattern filter that may be used in the system shown in FIG. 1;
FIG. 4 is a schematic block diagram of an image processing module that may be used with the system shown in FIG. 1;
FIG. 5 is a schematic distribution of green image data from a green sensing element of the image sensor of FIG. 3;
FIG. 6 is a schematic distribution of the remaining green image data of FIG. 5 after an alternative process of deleting some of the original green image data;
FIG. 7 is a schematic distribution of the red, blue and green image data of FIG. 5 organized for processing in the image processing module of FIG. 1;
FIG. 8 is a flow diagram illustrating an image data conversion technique that may be used with the system shown in FIG. 1;
FIG. 8A is a flow chart illustrating a modification of the image data conversion technique that may also be used with the system shown in FIG. 1;
FIG. 9 is a schematic distribution of blue image data resulting from the image conversion flow of FIG. 8;
FIG. 10 is a schematic distribution of red image data resulting from the image conversion flow of FIG. 8;
FIG. 11 illustrates an exemplary optional transformation that may be applied to image data for gamma correction;
FIG. 12 is a flow chart of a control routine that may be used with the system of FIG. 1 to decompress and demosaic image data;
FIG. 12A is a flow chart showing a modification of the control routine of FIG. 12 that may also be used with the system shown in FIG. 1;
FIG. 13 is a schematic distribution of green image data that has been decompressed and demosaiced in accordance with the flowchart of FIG. 12;
FIG. 14 is a schematic distribution of half of the original green image data in FIG. 13 that has been decompressed and demosaiced according to the flowchart in FIG. 12;
FIG. 15 is a schematic distribution of blue image data that has been decompressed in accordance with the flow diagram of FIG. 12; and
fig. 16 is a schematic distribution of the blue image data of fig. 15 that has been demosaiced according to the flowchart of fig. 12.
Detailed Description
Fig. 1 is a schematic diagram of a camera having image sensing, processing, and compression modules, described in the context of a video camera for moving pictures. The embodiments disclosed herein are described in the context of a camera having a single sensing device with a Bayer pattern filter, because these embodiments are particularly beneficial in that context. However, the embodiments and inventions herein are also applicable to cameras having other types of image sensors (e.g., CMY Bayer and other non-Bayer patterns), cameras having other numbers of image sensors, cameras operating on different image format types, and cameras configured for still and/or moving images. Therefore, it is to be understood that the disclosed embodiments are exemplary but not limiting, and that the inventions disclosed herein are not limited to the disclosed exemplary embodiments.
With continued reference to fig. 1, the camera 10 may include a body or housing 12 configured to support a system 14 configured to detect, process, and optionally store and/or play video image data. For example, the system 14 may include optical hardware 16, an image sensor 18, an image processing module 20, a compression module 22, and a storage device 24. Optionally, the camera 10 may also include a monitor module 26, a play module 28, and a display 30.
FIG. 2 shows one non-limiting exemplary embodiment of the camera 10. As shown in fig. 2, the optical hardware 16 may be supported by the housing 12 in such a manner that its outer surface is exposed. In some embodiments, the system 14 is supported in the housing 12. For example, the image sensor 18, the image processing module 20, and the compression module 22 may be housed in the housing 12. The memory device 24 may be mounted in the housing 12. Additionally, in some embodiments, the memory device 24 may be mounted outside of the housing 12 and connected to the remainder of the system 14 by any type of known connector or cable. Additionally, the storage device 24 may be connected to the housing 12 with a flexible cable, allowing the storage device 24 to move somewhat independently of the housing 12. For example, with such a flexible cable connection, the memory device 24 may be worn on the user's belt, allowing the overall weight of the housing 12 to be reduced. Further, in some embodiments, the housing may include one or more storage devices 24 located internally and mounted externally thereto. In addition, the housing 12 may also support a monitor module 26, and a play module 28. Additionally, in some embodiments, the display 30 may be configured to be mounted on the exterior of the housing 12.
The optical hardware 16 may be in the form of a lens system having at least one lens configured to focus an incoming image onto the image sensor 18. The optical hardware 16, optionally, may be in the form of a multi-lens system provided with zoom, aperture and focus. Additionally, the optical hardware 16 may be in the form of a lens holder supported by the housing 12 and configured to accommodate many different types of lens systems, such as, but not limited to, the optical hardware 16 including a holder configured to accommodate lens systems of various sizes, including 50-100 millimeter (F2.8) zoom lenses, 18-50 millimeter (F2.8) zoom lenses, 300 millimeter (F2.8) lenses, 15 millimeter (F2.8) lenses, 24 millimeter (F1.9) lenses, 35 millimeter (F1.9) lenses, 50 millimeter (F1.9) lenses, 85 millimeter (F1.9) lenses, and/or any other lenses. As described above, the optical hardware 16 may be configured so that regardless of which lens is attached, the image may be focused onto the light-sensitive surface of the image sensor 18.
The image sensor 18 may be any type of video sensing device, including, for example, but not limited to, a CCD, a CMOS sensor, a vertically stacked CMOS device, or a multi-sensor array that uses a prism to divide light between the sensors. In some embodiments, the image sensor 18 may include a CMOS device having about 12 million photosites. However, other sized sensors may also be used. In some configurations, the camera 10 may be configured to output video at "2k" (e.g., 2048 × 1152 pixels), "4k" (e.g., 4096 × 2540 pixels), "4.5k" horizontal resolution, or greater. As used herein, in terms expressed in the xk format (such as 2k and 4k above), the number x refers to the approximate horizontal resolution. As such, a "4k" resolution corresponds to about 4000 or more horizontal pixels and a "2k" resolution corresponds to about 2000 or more pixels. Using existing commercially available hardware, the sensor may be as small as about 0.5 inches (8 mm), but it may also be about 1.0 inch or larger. Additionally, the image sensor 18 may be configured to provide variable resolution by selectively outputting only a predetermined portion of the sensor 18. For example, the sensor 18 and/or the image processing module may be configured to allow a user to identify the resolution of the image data output.
The camera 10 may also be configured to down-sample and subsequently process the output of the sensor 18 to yield video output at 2K, 1080p, 720p, or any other resolution. For example, the image data from the sensor 18 may be "window sampled" to reduce the size of the output image and allow for higher readout speeds. Additionally, the camera 10 may be configured to upsample the output of the sensor 18 to produce a higher resolution video output.
Referring to figs. 1-3, in some embodiments, the sensor 18 may include a Bayer pattern filter. As such, the sensor 18, by way of its chipset (not shown), outputs data representing the magnitudes of red, green, or blue light detected by the individual photosites of the image sensor 18. Fig. 3 schematically illustrates the Bayer pattern output of the sensor 18. In some embodiments, for example, as shown in fig. 3, the Bayer pattern filter has twice as many green cells as red cells and twice as many green cells as blue cells. The chipset of the image sensor 18 may be used to read the charge on the cells of the image sensor and thereby output a stream of values in a known RGB format.
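The cell proportions of a Bayer mosaic can be sketched with a small illustrative tile (a hypothetical 4×4 example for counting purposes, not the sensor's actual readout code):

```python
# One 4x4 tile of a standard Bayer mosaic: G/R rows alternate with B/G rows.
tile = [
    ["G", "R", "G", "R"],
    ["B", "G", "B", "G"],
    ["G", "R", "G", "R"],
    ["B", "G", "B", "G"],
]

flat = [cell for row in tile for cell in row]
counts = {color: flat.count(color) for color in "RGB"}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```

The tile confirms the proportions the text relies on: green cells make up 50% of the array, while red and blue cells make up 25% each.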
Referring next to FIG. 4, the image processing module 20 is optionally configured to format the data stream from the image sensor 18 in any known manner. In some embodiments, the image processing module 20 may be configured to divide the green, red and blue image data into three or four separate data sets. For example, the image processing module 20 may be configured to divide red data into one data unit, divide blue data into one blue data unit, and divide green data into one green data unit. For example, referring to fig. 4, the image processing module 20 may include a red data processing module 32, a blue data image processing module 34, and a first green image data processing module 36.
However, as described above, the Bayer pattern data shown in fig. 3 has twice as many green pixels as the other two colors. Fig. 5 shows a data cell in which blue and red data are removed, leaving only the original green image data.
In some embodiments, the camera 10 may be configured to delete or ignore some of the green image data. For example, in some embodiments, the image processing module 20 may be configured to delete 1/2 of the green image data so that the total amount of green image data is the same as the amounts of blue and red image data. For example, fig. 6 shows the remaining data after the image processing module 20 deletes 1/2 of the green image data. In the embodiment shown in fig. 6, the rows n-3, n-1, n+1, and n+3 have been deleted. This is merely one example of a format of green image data deletion; other formats and other amounts of green image data may also be deleted.
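The alternate-row deletion described above can be sketched as follows (an illustrative Python fragment assuming a simple row-major list of green samples; the patent does not prescribe this data layout):

```python
def delete_alternate_rows(green_rows, keep_even=True):
    """Keep every other row of green samples, halving the green data."""
    parity = 0 if keep_even else 1
    return [row for i, row in enumerate(green_rows) if i % 2 == parity]

# Four rows of green samples (values are illustrative).
green = [[10, 11], [20, 21], [30, 31], [40, 41]]
half = delete_alternate_rows(green)
print(half)  # [[10, 11], [30, 31]]
assert len(half) == len(green) // 2
```

After deletion, the amount of green data matches the amount of red and blue data, which is what allows the four data components to be processed by similarly sized modules, as described below.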
In some alternatives, the camera 10 may be configured to delete 1/2 of the green image data after the red and blue image data have been transformed based on the green image data. This alternative technique is described below, after the subtraction of green image data values from the other color image data has been described.
Alternatively, the image processing module 20 may be configured to selectively delete the green image data. For example, image processing module 20 may include a deletion analysis module (not shown) configured to selectively determine which green image data is deleted. For example, such a deletion module may be configured to determine whether deleting a format of rows from the green image data would result in aliasing artifacts (e.g., moire fringes) or other visually perceptible artifacts. The deletion module may be further configured to select a format of green image data for deletion such that there is less risk of such artifacts being created. For example, the deletion module may be configured to select the green image data deletion format of alternating vertical columns if it determines that the image captured by the image sensor 18 includes image features that appear as a plurality of parallel horizontal rows. The deletion format may reduce or eliminate artifacts, such as moire, that may be produced by a deletion format that deletes alternating lines of image data parallel to detected horizontal lines in an image.
However, this is merely one illustrative, non-limiting example of the types of image features and deletion formats that may be used by the deletion module. The deletion module may also be configured to detect other image features and delete using other image data deletion formats, such as, but not limited to, alternate rows, alternate diagonals, or other formats of deletion. In addition, the deletion module may be configured to delete portions of other image data (e.g., red and blue image data), or other image data that is determined by the type of sensor used.
Additionally, the camera 10 may be configured to insert a data field into the image data to indicate what image data was deleted. For example, but not by way of limitation, the camera 10 may be configured to insert a data field at the beginning of any video clip stored in the storage device 24 to indicate what data was deleted in each "frame" of the video clip. In some embodiments, the camera may be configured to insert a data field into each frame captured by the sensor 18 to indicate what image data was deleted. For example, in some embodiments in which the image processing module 20 is configured to delete 1/2 of the green image data according to one deletion format, the data field may be as small as a single bit indicating whether or not the image data was deleted. Because the image processing module 20 is configured to delete data in only one format, a single bit is sufficient to indicate what data was deleted.
In some embodiments, as described above, the image processing module 20 may be configured to selectively delete image data in more than one format. Thus, the image data deletion field may be larger, including a sufficient number of values to indicate which of these different image data deletion formats was used. This data field may be used by downstream components and/or processes to determine the spatial locations to which the remaining image data corresponds.
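One way such a deletion-format field might be sketched (hypothetical codes and helper names; the patent does not specify an encoding):

```python
# Hypothetical per-frame header: a small code records which deletion
# format (if any) was applied, so downstream demosaicing can map the
# remaining samples back to their spatial positions.
DELETION_FORMATS = {
    0: "none",            # all green data retained
    1: "alternate_rows",  # every other green row deleted
    2: "alternate_cols",  # every other green column deleted
}

def make_frame_header(format_code):
    """Pack the deletion-format code into a one-byte frame header."""
    assert format_code in DELETION_FORMATS
    return bytes([format_code])

def read_deletion_format(header):
    """Recover the deletion format from the header byte."""
    return DELETION_FORMATS[header[0]]

hdr = make_frame_header(1)
print(read_deletion_format(hdr))  # alternate_rows
```

A single byte comfortably covers a handful of deletion formats; as the text notes, a single bit suffices when only one format is ever used.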
In some embodiments, the image processing module may be configured to retain all of the raw green image data, such as the data shown in fig. 5. In these embodiments, the image processing module may include one or more green image data processing modules.
As described above, in a known Bayer pattern filter there are twice as many green cells as red cells and twice as many green cells as blue cells. In other words, the red cells comprise 25% of the total Bayer pattern array, the blue cells comprise 25% of the Bayer pattern array, and the green cells comprise 50% of the cells of the Bayer pattern array. As such, in some embodiments in which all of the green image data is retained, the image processing module 20 may include a second green data image processing module 38. As such, the first green data image processing module 36 may process half of the green cells and the second green image data processing module 38 may process the remaining green cells. However, the present invention may be used with other types of patterns, such as, but not limited to, CMY and RGBW.
Fig. 7 includes a schematic illustration of the red, blue and two green data components processed by modules 32, 34, 36 and 38 (fig. 4). This may provide further advantages because each of these modules is approximately the same size and configuration because they process approximately the same amount of data. In addition, image processing module 20 may selectively switch between a mode in which all of the green image data is processed (using modules 36 and 38) and a mode in which the green image data of 1/2 is deleted (where only one of modules 36 and 38 is utilized). However, other configurations may be used.
Additionally, in some embodiments, the image processing module 20 may include other modules and/or may be configured to perform other processes, such as, but not limited to, gamma correction processes, noise filtering processes, and the like.
Additionally, in some embodiments, the image processing module 20 may be configured to subtract the values of the green cells from the values of the blue cells and/or the red cells. As such, in some embodiments, when certain colors are detected by the image sensor 18, the corresponding red or blue cells may be reduced to zero. For example, many images include large areas of black, white, or gray, or colors shifted from gray toward red or blue. Thus, if a gray area is sensed by the corresponding pixels of the image sensor 18, the magnitudes of green, red, and blue will be approximately equal. Thus, if the green values are subtracted from the red and blue values, the red and blue values will drop to zero or near zero. In the subsequent compression process, pixels sensing black, white, or gray areas will therefore produce more zeros, and the resulting data will be more compressible. Additionally, subtracting green from one or both of the other colors may make the resulting image data more compressible for other reasons.
This technique, due to its relationship to the entropy of the original image data, helps achieve a more efficient compression ratio and still remains visually lossless. For example, the entropy of an image is related to the amount of randomness in the image. For example, subtracting image data of one color from image data of other colors may reduce randomness, thereby reducing entropy of image data of these colors, thus allowing data to be compressed with a higher compression rate and with less loss. Typically, an image is not a collection of random color values. Therefore, this subtraction technique can use the correlation of the cells to achieve better compression. The amount of compression will depend at least in part on the entropy of the original information in the image.
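The entropy argument can be checked numerically. The following is an illustrative sketch using Shannon entropy over sample histograms (the synthetic values are hypothetical, and real image statistics will differ):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Bits per sample of the empirical distribution of the samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A synthetic gray-ish region: red tracks green closely, as in the
# black/white/gray areas the text describes.
green = [100, 102, 101, 99, 100, 103, 98, 101]
red   = [101, 103, 100, 100, 101, 102, 99, 100]

raw_entropy = shannon_entropy(red)
diff_entropy = shannon_entropy([r - g for r, g in zip(red, green)])
print(raw_entropy, diff_entropy)
assert diff_entropy < raw_entropy  # differences cluster near zero
```

Because the red samples are correlated with the green samples, the differences take fewer distinct values than the raw red samples, so their empirical entropy is lower and an entropy coder can represent them with fewer bits.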
In some embodiments, the magnitude subtracted from a red or blue pixel may be a magnitude of a value output from a green pixel adjacent to the subtracted red or blue pixel. Further, in some embodiments, the green magnitude subtracted from the red or blue cells may be derived from the average of the surrounding green cells. This technique is described in more detail below. However, other techniques may be used.
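A hedged sketch of the surrounding-average variant (the neighbor set and integer averaging here are illustrative assumptions, not the patent's exact arithmetic):

```python
def subtract_green_average(cell_value, neighbor_greens):
    """Subtract the mean of the surrounding green cells from a red (or
    blue) cell. Using the same neighbor set on reconstruction makes the
    operation exactly reversible."""
    avg = sum(neighbor_greens) // len(neighbor_greens)
    return cell_value - avg, avg

# In a Bayer mosaic, a red cell has green neighbors above, below,
# left, and right (values here are illustrative).
diff, avg = subtract_green_average(130, [128, 126, 127, 131])
print(diff)               # 2
assert diff + avg == 130  # adding back the same average restores the cell
```

Averaging several neighbors smooths sensor noise in the subtracted estimate, at the cost of a little extra arithmetic per red or blue cell.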
Optionally, the image processing module 20 may also be configured to selectively subtract green image data from other colors. For example, image processing module 20 may be configured to determine whether subtracting green image data from a portion of the image data of any of the other colors would provide better compressibility. In this mode, the image processing module 20 may be configured to insert a marker in the image data to indicate what portion of the image data was modified (e.g., by subtracting the green image data) and which portion was not modified. With these flags, the downstream demosaicing/reconstruction component may selectively add the green image values back to the image data of other colors based on the state of these data flags.
Optionally, the image processing module 20 may further include an additional data reduction module (not shown) configured to round red and blue data toward zero. For example, if the red or blue data is near zero after subtracting the green magnitude (e.g., within 1 or 2 within an 8-bit value range of 0-255, or a proportionally larger magnitude for a higher-resolution system), it may be rounded to zero. For example, the sensor 18 may be a 12-bit sensor outputting red, blue, and green data in the value range 0-4095. Any rounding or filtering performed on the data by the rounding module may be adjusted to achieve the desired effect: rounding is performed to a lesser extent if lossless output is desired, and to a greater extent if some loss or lossy output is acceptable. Some rounding may be performed while still producing a visually lossless output. For example, in an 8-bit value range, red or blue data having an absolute value of up to 2 or 3 may be rounded to 0 and still provide a visually lossless output. Additionally, in a 12-bit value range, red or blue data having an absolute value of up to 10 to 20 may be rounded to 0 and still provide a visually lossless output.
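The threshold rounding above can be sketched in a few lines (an illustrative fragment; the thresholds shown are the ranges the text describes, not values fixed by the patent):

```python
def round_near_zero(value, threshold):
    """Snap small post-subtraction values to zero; larger values pass
    through unchanged. Thresholds of about 2-3 (8-bit) or 10-20
    (12-bit) are the ranges the text describes as still visually
    lossless."""
    return 0 if abs(value) <= threshold else value

# Post-subtraction red/blue differences (illustrative values).
samples = [-3, -1, 0, 2, 5, 40]
rounded = [round_near_zero(v, threshold=3) for v in samples]
print(rounded)  # [0, 0, 0, 0, 5, 40]
```

The longer runs of zeros produced this way are exactly what the downstream compression stage exploits to reduce the size of its output.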
In addition, the magnitude of the values that may be rounded to zero or other values, and still provide a visually lossless output, depends on the configuration of the system, including the optical hardware 16, the image sensor 18, the resolution of the image sensor, the color resolution (bits) of the image sensor 18, the type of filter, the anti-aliasing technique or other technique performed by the image processing module 20, the compression technique performed by the compression module 22, and/or other parameters or features of the camera 10.
As described above, in some embodiments, the camera 10 may be configured to delete 1/2 of the green image data after converting the red and blue image data based on the green image data. For example, but not limiting of, the processing module 20 may be configured to delete 1/2 the green image data after subtracting the average of the magnitudes of the surrounding green data values from the red and blue data values. This reduction in green data may reduce throughput requirements for associated hardware. In addition, the remaining green image data may be used to reconstruct the red and blue image data, as will be described in more detail below with reference to fig. 14 and 16.
As described above, the camera 10 may also include a compression module 22. The compression module 22 may be in the form of a single chip or may be implemented by software and another processor. For example, the compression module 22 may be in the form of a commercially available compression chip that performs compression techniques in accordance with the JPEG 2000 standard, or other compression techniques.
The compression module may be configured to perform any type of compression processing on the data from the image processing module 20. In some embodiments, compression module 22 performs compression techniques that utilize techniques performed by image processing module 20. For example, as described above, image processing module 20 may be configured to reduce the magnitude of the values of the red and blue data by subtracting the magnitude of the green image data, thereby producing more zeros, among other effects. In addition, the image processing module 20 may perform an operation on the original data using the entropy of the image data. As such, the compression techniques performed by the compression module 22 may be of the type: it benefits from the presence of larger strings of zeros to reduce the size of the compressed data that is output.
Further, the compression module 22 may be configured to compress the image data from the image processing module 20 to produce a visually lossless output. For example, the compression module may be configured to apply any known compression technique, such as, but not limited to, JPEG 2000, Motion JPEG, any DCT-based codec, any codec designed to compress RGB image data, H.264, MPEG-4, Huffman coding, or other techniques.
Depending on the type of compression technique used, various parameters of the compression technique may be set to provide a visually lossless output. For example, many of the compression techniques described above can be adjusted to different compression rates, where when decompressed, the resulting image is of better quality for low compression rates and of lower quality for high compression rates. As such, the compression module may be configured to compress the image data in a manner that provides a visually lossless output, or may be configured to allow a user to adjust various parameters to obtain a visually lossless output. For example, the compression module 22 may be configured to compress the image data at a compression rate of approximately 6:1, 7: 1, 8: 1, or greater. In some embodiments, compression module 22 may be configured to compress the image data to a 12:1 ratio or higher.
Additionally, the compression module 22 may be configured to allow a user to adjust the compression rate achieved by the compression module 22. For example, the camera 10 may include a user interface that allows a user to input commands that cause the compression module 22 to change the compression rate. As such, in some embodiments, the camera 10 may provide variable compression.
The term "visually lossless" as used herein is intended to include such outputs: when compared side-by-side with raw (uncompressed) image data on the same display device, one of ordinary skill in the art would be unable to determine which image is the original image with reasonable accuracy based solely on visual inspection of the images.
With continued reference to fig. 1, the camera 10 may also include a memory device 24. The storage device may be in the form of any type of digital storage such as, but not limited to, a hard disk, flash memory, or any other type of storage device. In some embodiments, the size of the memory device 24 is large enough to store image data from the compression module 22, corresponding to a 12 megapixel resolution, a 12-bit color resolution, and at least about 30 minutes of video at 60 frames per second. However, the memory device 24 may have any size.
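As a rough check on the storage figures above, the uncompressed data rate implied by 12-megapixel, 12-bit mosaic frames at 60 frames per second can be sketched as follows (a back-of-the-envelope illustration; the 6:1 figure used here is merely one of the example compression rates discussed in connection with the compression module, not a fixed camera parameter):

```python
# Back-of-the-envelope storage estimate for raw mosaic video:
# 12 megapixels per frame, 12 bits per photosite, 60 frames per second.
pixels_per_frame = 12_000_000
bits_per_sample = 12
fps = 60
minutes = 30

bits_per_second = pixels_per_frame * bits_per_sample * fps
uncompressed_gb = bits_per_second * minutes * 60 / 8 / 1e9

# An example 6:1 compression rate reduces the requirement proportionally.
compressed_gb = uncompressed_gb / 6

print(f"uncompressed: {uncompressed_gb:.0f} GB for {minutes} minutes")
print(f"at 6:1:       {compressed_gb:.0f} GB")
```

At roughly 1944 GB uncompressed for a half hour of such video, the arithmetic makes clear why compression on the order of 6:1 or more is applied before recording to the storage device.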
In some embodiments, the memory device 24 may be mounted outside of the housing 12. Further, in some embodiments, the storage device 24 may be connected to other components of the system 14 through standard communication ports, including, for example, but not limited to, IEEE 1394, USB 2.0, IDE, SATA, and the like. Further, in some embodiments, the storage device 24 may include a plurality of hard drives operating according to a RAID protocol. However, any type of memory device may be used.
With continued reference to fig. 1, as described above, in some embodiments, the system may include a monitor module 26 and a display device 30 configured to allow a user to view video images captured by the image sensor 18 during operation. In some embodiments, the image processing module 20 may include a subsampling system configured to output reduced resolution image data to the monitor module 26. For example, such a subsampling system may be configured to output video image data to support 2K, 1080p, 720p, or any other resolution. In some embodiments, the filter for demosaicing may be further adapted to perform downsampling filtering such that downsampling and filtering may be performed simultaneously. Monitor module 26 may be configured to perform any type of demosaicing process on data from image processing module 20. Thereafter, monitor module 26 may output the demosaiced image data to display 30.
The display 30 may be any type of monitoring device. For example, but not limiting of, the display 30 may be a 4-inch LCD panel supported by the housing 12. For example, in some embodiments, the display 30 may be connected to an infinitely adjustable mount configured to allow the display 30 to be adjusted to any position relative to the housing 12, so that a user may view the display 30 at any angle relative to the housing 12. In some embodiments, the display 30 may be connected to the monitor module by any type of video cable, for example, an RGB or YCC format video cable.
Alternatively, the play module 28 may be configured to receive data from the storage device 24, decompress and demosaic the image data, and then output the image data to the display 30. In some embodiments, the monitor module 26 and the play module 28 may be connected to a display through an intermediate display controller (not shown). As such, the display 30 may be connected to the display controller through a single connector. The display controller may be configured to transfer data from the monitor module 26 or the play module 28 to the display 30.
Fig. 8 includes a flow chart 50 illustrating the processing of image data by the camera 10. In some embodiments, the flow chart 50 may represent a control routine stored in a memory device (e.g., the memory device 24 or another memory device (not shown) in the camera 10). In addition, a central processing unit (CPU) (not shown) may be configured to execute the control routine. A method corresponding to the flowchart 50 is described below in the context of processing a single frame of video image data; as such, the technique is also applicable to the processing of a single still image. These processes may also be applied to the processing of continuous video at, for example, frame rates greater than 12 frames per second, such as 20, 23.976, 24, 30, 60, and 120 frames per second, or other frame rates between or above these.
With continued reference to FIG. 8, the control flow may begin at operation block 52. In operation block 52, the camera 10 may obtain sensor data. For example, referring to FIG. 1, an image sensor 18, which may include a Bayer sensor and a chip set, may output image data.
For example, and without limitation, referring to FIG. 3, the image sensor may comprise a CMOS device having a Bayer pattern filter on its light receiving face. In this way, the focused image from the optical hardware 16 is focused onto a Bayer pattern filter on the CMOS device of the image sensor 18. Fig. 3 shows an example of a Bayer pattern generated by arranging a Bayer pattern filter on a CMOS device.
In fig. 3, column m is the fourth column from the left edge of the Bayer pattern, and row n is the fourth row from the top edge of the pattern. The remaining rows and columns are labeled relative to column m and row n. However, this layout is chosen arbitrarily, for illustrative purposes only, and does not limit any of the embodiments and inventions disclosed herein.
As mentioned above, known Bayer pattern filters typically include twice as many green cells as blue and red cells. In the pattern of FIG. 3, the blue cells are present only in rows n-3, n-1, n+1, and n+3. The red cells are present only in rows n-2, n, n+2, and n+4. The green cells, however, are present in all rows and columns, with the red and blue cells interspersed between them.
Thus, in operation block 52, the red, blue and green image data output from the image sensor 18 may be received by the image processing module 20 and organized into data components of separate colors, such as those shown in FIG. 7. As shown in fig. 7, the image processing module 20 may separate the red, blue and green image data into four separate components as described above with reference to fig. 4. Fig. 7 shows two green components (green 1 and green 2), one blue component and one red component. However, this is merely one exemplary method of processing image data from the image sensor 18. In addition, as described above, the image processing module 20 may optionally randomly or selectively delete 1/2 of the green image data.
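The separation into four single-color components can be sketched as follows. This is a minimal illustration; the RGGB-style ordering assumed here (red/green on even rows, green/blue on odd rows) is chosen for concreteness and may differ from the actual sensor layout:

```python
def split_bayer(mosaic):
    """Split an RGGB Bayer mosaic (a list of rows) into four color components.

    Layout assumed here for illustration only (an actual sensor may differ):
        even rows: R G R G ...
        odd rows:  G B G B ...
    """
    red    = [row[0::2] for row in mosaic[0::2]]
    green1 = [row[1::2] for row in mosaic[0::2]]  # greens sharing rows with red
    green2 = [row[0::2] for row in mosaic[1::2]]  # greens sharing rows with blue
    blue   = [row[1::2] for row in mosaic[1::2]]
    return red, green1, green2, blue

# Tiny 4x4 example mosaic with sample values 0..15.
mosaic = [[0, 1, 2, 3],
          [4, 5, 6, 7],
          [8, 9, 10, 11],
          [12, 13, 14, 15]]
r, g1, g2, b = split_bayer(mosaic)
print(r)    # → [[0, 2], [8, 10]]
print(g1)   # → [[1, 3], [9, 11]]
```

Note that the two green components together hold half of all the samples, reflecting the 2:1 green-to-red (or green-to-blue) ratio of the Bayer pattern.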
After operation block 52, the flowchart 50 may proceed to operation block 54. In operation block 54, the image data may be further processed. For example, optionally, any or all of the resulting data may be further processed (e.g., the green 1 and green 2 components, the blue image data of fig. 9, and the red image data of fig. 10).
For example, the image data may be pre-emphasized or otherwise processed. In some embodiments, the image data may be processed to be more (mathematically) non-linear. Some compression algorithms benefit from such non-linearization of the data before the compression is performed. However, other techniques may also be used. For example, the image data may be processed with a linear curve, which provides substantially no emphasis.
In some embodiments, the operation block 54 may process the image data using a curve defined by the function y = x^0.5. In some embodiments, this curve may be used when the image data is, for example, but not limited to, floating-point data normalized to the range 0-1. In other embodiments, for example, when the image data is 12-bit data, the image may be processed with the curve y = (x/4095)^0.5. Additionally, other curves may be used to process the image data, such as y = (x + c)^g, where 0.01 < g < 1 and c is an offset, which may be 0 in some embodiments. Logarithmic curves may also be used, for example, curves of the form y = A*log(B*x + C), where A, B, and C are constants selected to provide the desired results. In addition, the curves and methods described above can be modified to provide a more linear region near black, similar to the techniques used in the well-known Rec. 709 gamma curve. When these processes are applied to the image data, the same process may be applied to all of the image data, or different processes may be applied to image data of different colors. However, these are merely exemplary curves that may be used to process the image data; other curves or transforms may also be used. In addition, these processing techniques may be applied using mathematical functions, such as those described above, or look-up tables (LUTs). Further, different processes, techniques, or transforms may be used for different types of image data, different ISO settings used in recording the image data, temperature (which may affect noise levels), and so on.
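The 12-bit curve y = (x/4095)^0.5 lends itself to the look-up-table approach mentioned above. A minimal sketch follows; the choice to scale the output back to the 12-bit integer range is an assumption of this illustration, not a camera specification:

```python
# Look-up table for the 12-bit pre-emphasis curve y = (x/4095)^0.5,
# scaled back to the 12-bit range so the output stays integer-valued.
LUT = [round(((x / 4095) ** 0.5) * 4095) for x in range(4096)]

def pre_emphasize(samples):
    """Apply the curve to a sequence of 12-bit raw samples via table look-up."""
    return [LUT[s] for s in samples]

# The curve expands shadow detail: small inputs map to much larger outputs,
# while the top of the range is compressed.
print(pre_emphasize([0, 64, 1024, 4095]))   # → [0, 512, 2048, 4095]
```

Because the table is monotonic, an inverse table can reverse the step exactly (up to the integer rounding chosen here), consistent with the reversibility of the processing described in this section.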
After operation block 54, the flowchart 50 may proceed to operation block 56. In operation block 56, the red and blue cells may be transformed. For example, as described above, green image data may be subtracted from each of the blue and red image data components. In some embodiments, a red or blue image data value may be transformed by subtracting the green image data value of at least one green cell adjacent to the red or blue cell. In some embodiments, the average of the data values of multiple adjacent green cells may be subtracted from the red or blue image data value. For example, but not limiting of, an average of 2, 3, 4, or more green image data values may be calculated and subtracted from a red or blue cell near those green cells.
For example, but not limiting of, referring to FIG. 3, a red cell Rm-2,n-2 is surrounded by four green cells Gm-2,n-3, Gm-1,n-2, Gm-3,n-2, and Gm-2,n-1. Thus, the red cell Rm-2,n-2 can be transformed by subtracting the average of the values of the surrounding green cells as follows:
(1) Rm,n = Rm,n - (Gm,n-1 + Gm+1,n + Gm,n+1 + Gm-1,n)/4
Similarly, the blue cells may be transformed by subtracting the average of the surrounding green cells as follows:
(2) Bm+1,n+1 = Bm+1,n+1 - (Gm+1,n + Gm+2,n+1 + Gm+1,n+2 + Gm,n+1)/4
FIG. 9 shows the resulting blue data component after the original raw blue data Bm-1,n-1 has been transformed, with the new value labeled B'm-1,n-1 (only one value in the component is shown filled in; the same technique may be applied to all of the blue cells). Similarly, FIG. 10 shows the red data component after transformation, in which the transformed red cell Rm-2,n-2 is labeled R'm-2,n-2. In this state, the image data is still considered "raw" data. For example, the mathematical processes performed on the data are fully reversible, so that all of the original values can be obtained by reversing those processes.
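A minimal sketch of the transform of equation (1) applied to a single red cell follows. Boundary handling and the full-frame loop are omitted, and the `mosaic[row][col]` indexing convention is an assumption of this illustration:

```python
def transform_red(mosaic, m, n):
    """Apply equation (1): subtract the average of the four green cells
    adjacent to the red cell at column m, row n. The step is reversible:
    adding the same average back recovers the original value exactly."""
    g_avg = (mosaic[n - 1][m] + mosaic[n][m + 1] +
             mosaic[n + 1][m] + mosaic[n][m - 1]) / 4
    return mosaic[n][m] - g_avg

# Toy neighborhood: red value 100 at the center, greens 90, 94, 96, 92 around it.
patch = [[0, 90, 0],
         [92, 100, 94],
         [0, 96, 0]]
delta = transform_red(patch, m=1, n=1)
print(delta)   # 100 - (90 + 94 + 96 + 92)/4 = 100 - 93 → 7.0
```

In smooth image regions the red and green values are strongly correlated, so the differences cluster near zero, which is precisely what the downstream compression exploits.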
With continued reference to fig. 8, following operation block 56, the flowchart 50 may proceed to operation block 58. In operation block 58, the resulting data (which is raw or substantially raw) may be further compressed using any known compression algorithm. For example, compression module 22 (fig. 1) may be configured to execute such a compression algorithm. After compression, the compressed raw data may be stored in the storage device 24 (FIG. 1).
Fig. 8A shows a modification of the flow chart 50, marked with reference number 50'. Some of the steps described above with reference to flowchart 50 may be similar or identical to corresponding steps of flowchart 50', and are therefore labeled with the same reference numbers.
As shown in fig. 8A, in some embodiments, the flowchart 50' optionally omits the operation block 54. In some embodiments, the flowchart 50' may also include an operation block 57, in which a look-up table may be applied to the image data. For example, an optional look-up table, represented by the curve of FIG. 11, may be used to improve further compression. In some embodiments, the look-up table of FIG. 11 is applied to the green cells only. In other embodiments, a look-up table may also be used for the red and blue cells. The same look-up table may be used for the three different colors, or each color may have its own look-up table. In addition, look-up tables other than that represented by the curve of FIG. 11 may also be applied.
By processing the image data in the manner described above with reference to fig. 8 and 8A, it has been found that the image data from the image sensor 18 can be compressed at a compression ratio of 6:1 or greater and still remain visually lossless. In addition, despite the transformation of the image data (e.g., subtraction of the green image data), all of the raw image data is still available to the end user. For example, by reversing some of the processes, all or substantially all of the raw data may be extracted and further processed, filtered, and/or demosaiced using any method desired by the user.
For example, referring to FIG. 12, data stored in the storage device 24 may be decompressed and demosaiced. Alternatively, the camera 10 may be configured to perform the method illustrated by the flow chart 60. For example, but not limiting of, the play module 28 may be configured to perform the method illustrated by the flow chart 60. However, the user may also transfer data from the memory device 24 to a separate workstation and apply any or all of the steps and/or operations of the flow chart 60.
With continued reference to FIG. 12, the flowchart 60 may begin at operation block 62, where data from the memory device 24 is decompressed. For example, the decompression of the data in operation block 62 may be the inverse of the compression algorithm performed in operation block 58 (FIG. 8). Following operation block 62, the flowchart 60 may proceed to operation block 64.
In operation block 64, the processing performed in operation block 54 (FIG. 8) may be reversed. For example, the inverse of the curve of FIG. 11, or the inverse of any of the other functions described above with reference to operation block 54 of FIGS. 8 and 8A, may be applied to the image data. After operation block 64, the flowchart 60 may proceed to operation block 66.
In operation block 66, the green cells may be demosaiced. For example, as described above, all values from the data components green 1 and/or green 2 (FIG. 7) may be stored in the memory device 24. For example, referring to fig. 5, the green image data from the data components green 1, green 2 may be arranged in a raw Bayer pattern applied in the image sensor 18. The green data may then be further demosaiced by any known technique, such as linear interpolation, bilinear, and so forth.
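The bilinear interpolation mentioned above can be sketched as follows, estimating a missing green value from its four horizontal and vertical neighbors (interior positions only; boundary handling is omitted in this illustration):

```python
def demosaic_green_at(green, row, col):
    """Estimate a missing green value at (row, col) by bilinear interpolation:
    the average of the four horizontally/vertically adjacent green cells.
    `green` is a full-resolution grid with None at red/blue positions."""
    neighbors = [green[row - 1][col], green[row + 1][col],
                 green[row][col - 1], green[row][col + 1]]
    return sum(neighbors) / len(neighbors)

# Greens lie on a checkerboard; the center position (a red or blue site)
# has no measured green value and must be interpolated.
g = [[None, 10, None],
     [12, None, 14],
     [None, 16, None]]
print(demosaic_green_at(g, 1, 1))   # (10 + 16 + 12 + 14) / 4 → 13.0
```

More sophisticated demosaicing techniques weight or select neighbors, for example along detected edges, but the averaging above illustrates the basic operation referenced in this section.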
Fig. 13 shows an exemplary layout of the green image data after all of the original green image data has been demosaiced. Green image cells labeled Gx represent original (decompressed) image data, while cells labeled DGx represent cells derived from the original data by the demosaicing process. This notation is used for the demosaicing processes of the other colors below. Fig. 14 shows an exemplary image data layout of demosaiced green image data for the case where 1/2 of the original green image data was deleted.
With continued reference to fig. 12, the flowchart 60 may proceed to operation block 68 after operation block 66. In operation block 68, the demosaiced green image data may be further processed. For example, but not limiting of, noise removal techniques may be applied to green image data. However, any other image processing technique, for example, an antialiasing technique, may also be applied to the green image data. Following operation block 68, the flowchart 60 may proceed to operation block 70.
In operation block 70, the red and blue image data may be demosaiced. For example, first, the blue image data of fig. 9 may be rearranged according to the original Bayer pattern (fig. 15). The surrounding cells, as shown in fig. 16, may be demosaiced from the existing blue image data using any known demosaicing technique, including linear interpolation, bilinear, etc. As a result of the demosaicing step, there will be blue image data for each pixel, as shown in fig. 16. However, the blue image data is demosaiced based on the modified blue image data of fig. 9, i.e., the blue image data value from which the green image data value is subtracted.
Operation block 70 may also include a demosaicing process for red image data. For example, the red image data from fig. 10 may be rearranged in the original Bayer pattern and further demosaiced by any known demosaicing method, such as linear interpolation, bilinear, and so forth.
Following operation block 70, the flow diagram may proceed to operation block 72. In operation block 72, the demosaiced red and blue image data may be reconstructed from the demosaiced green image data.
In some embodiments, each of the red and blue image data cells may be reconstructed by adding the green value from the co-located green image cell (the green image cell in the same column "m" and row "n"). For example, after demosaicing, the blue image data includes a blue cell value DBm-2,n-2. Because the original Bayer pattern of fig. 3 does not include a blue cell at this position, the blue value DBm-2,n-2 was derived by the demosaicing process described above, based on, for example, the blue values in cells Bm-3,n-3, Bm-1,n-3, Bm-3,n-1, and Bm-1,n-1, or on blue values in other blue image cells, or by any other technique. As noted above, those values were modified in operation block 56 (fig. 8) and thus do not correspond to the raw blue image data detected by the image sensor 18. More specifically, an average green value was subtracted from each of those values. Thus, the resulting blue image data DBm-2,n-2 also represents blue data from which green image data has been subtracted. Accordingly, in one embodiment, the demosaiced green image data of cell DGm-2,n-2 may be added to the blue image value DBm-2,n-2, thereby producing a reconstructed blue image data value.
In some embodiments, the blue and/or red image data may optionally be reconstructed first, before demosaicing. For example, the transformed blue image data B'm-1,n-1 may first be reconstructed by adding back the average of the surrounding green cells. This results in obtaining, or recalculating, the original blue image data Bm-1,n-1. This process may be performed for all of the blue image data. The blue image data may then be further demosaiced by any known demosaicing technique. The red image data may also be processed in the same or a similar manner.
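The reconstruct-first path can be sketched as follows. Because the very same green averages that were subtracted before compression are added back, the round trip is exact; this is a simplified element-wise illustration, not the camera's actual implementation:

```python
def reconstruct(transformed, green_estimates):
    """Add co-located green estimates back onto transformed blue (or red)
    values, reversing the subtraction performed before compression."""
    return [t + g for t, g in zip(transformed, green_estimates)]

green_averages = [93, 88, 91]          # averages subtracted at capture time
original_blue = [80, 75, 79]
stored = [b - g for b, g in zip(original_blue, green_averages)]  # forward step

restored = reconstruct(stored, green_averages)
print(restored)   # → [80, 75, 79]
```

When demosaiced green values are used instead of the original averages (the path of operation block 72), the added-back values are estimates, so the two orderings of reconstruction and demosaicing can yield slightly different results.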
Fig. 12A shows a modification of the flow chart 60, marked with reference number 60'. Some of the steps described above with reference to flowchart 60 are similar or identical to corresponding steps of flowchart 60' and are therefore labeled with the same reference numbers.
As shown in fig. 12A, the flow chart 60 'may include an operation block 68' following the operation block 62. In operation block 68', a noise removal technique may be performed on the image data. For example, but not limiting of, noise removal techniques may be applied to green image data. However, any other image processing technique, such as an anti-aliasing technique, may also be applied to the green image data. Following operation block 68 ', the flow diagram may proceed to operation block 70'.
In operation block 70', the image data may be demosaiced. The green, red and blue image data may be demosaiced in two steps, as described above with reference to operation blocks 66 and 70. However, in the present flow chart 60', the demosaicing of image data of all three colors is embodied in a single step, although the same demosaicing techniques described above may be applied to this demosaicing process. Following operation block 70', the flowchart may proceed to operation block 72 and operation block 64, where red and blue image data may be reconstructed in operation block 72 and an inverse lookup table may be applied in operation block 64.
After decompressing and processing the image data according to either of the flowcharts 60 or 60', or any other suitable method, the image data may be further processed as demosaiced image data.
Some further advantages may be realized by demosaicing the green image data before reconstructing the red and blue image data. For example, as described above, the human eye is more sensitive to green light. Demosaicing and processing the green image data first optimizes those more perceptually significant green image values. The subsequent reconstruction of the red and blue image data then benefits from the processing applied to the green image data.
In addition, the Bayer pattern has twice as many green cells as red and blue cells. Thus, in embodiments where all of the green data is retained, the green components contain twice as much image data as the red or blue image data components. As such, demosaicing techniques, filters, and other image processing operations produce better demosaiced, sharpened, or otherwise filtered images. Using these demosaiced green values to reconstruct and demosaic the red and blue image data passes the benefits associated with the higher-resolution raw green data on to the processing, reconstruction, and demosaicing of the red and blue cells. In this way, the resulting image is further enhanced.

Claims (19)

1. A camera, comprising:
a portable housing;
a lens assembly supported by the housing and configured to focus light;
a light-sensing device configured to convert the focused light to image data of an original mosaic of at least 2k resolution at a frame rate of at least about 23 frames per second, wherein the image data of the original mosaic represents at least a first color, a second color, and a third color of the focused light;
a memory device; and
an image processing system configured to compress and store the image data of the original mosaic in the storage device such that the compressed image data of the original mosaic remains substantially visually lossless, and
Wherein the image processing system comprises an image processing module configured to calculate an average of the image data of the third color from at least four sensing cells surrounding and adjacent to the sensing cells of the first color to obtain a first average, calculate an average of the image data of the third color from at least four sensing cells surrounding and adjacent to the sensing cells of the second color to obtain a second average, modify the image data by subtracting the first average from the value of the image data from the sensing cells of the first color and subtracting the second average from the value of the image data from the sensing cells of the second color, and after the subtraction, the image processing system is configured to compress the modified image data of the original mosaic at a rate of at least about 23 frames per second, and stores the image data of the compressed original mosaic.
2. The camera of claim 1, wherein the third color is green.
3. The camera of claim 1, wherein the image processing system is configured to delete about half of the image data representing the third color.
4. The camera of claim 3, wherein the photosensitive device comprises a first set of sensing cells configured to detect the first color, a second set of sensing cells configured to detect the second color, and a third set of sensing cells configured to detect the third color, the third set of sensing cells comprising twice as many sensing cells as the second set of sensing cells.
5. The camera of claim 1, wherein the storage device is disposed in the housing.
6. The camera of claim 1, wherein the storage device is supported outside of the housing.
7. The camera of claim 1, wherein the memory device is connected to the housing by a flexible cable.
8. The camera of claim 1, wherein the image processing system comprises a compression chip that performs compression of the image data of the raw mosaic within the camera.
9. The camera of claim 1, wherein the light sensitive device comprises a Bayer sensor.
10. A method of recording live video using a camera, the method comprising:
directing light toward a photosensitive device;
converting light received by said light sensitive device into digital image data of a raw mosaic of at least 2k resolution at a rate of at least about 23 frames per second, wherein said digital image data of said raw mosaic represents at least a first color, a second color, and a third color;
calculating an average value of the image data of the third color from at least four sensing cells in the vicinity of the sensing cells of the first color to obtain a first average value, calculating an average value of the image data of the third color from at least four sensing cells in the vicinity of the sensing cells of the second color to obtain a second average value, modifying the image data by subtracting the first average value from a value of the image data from the sensing cells of the first color and subtracting the second average value from a value of the image data from the sensing cells of the second color;
compressing the digital image data of the modified original mosaic; and
the digital image data of the compressed raw mosaic is recorded to a storage device at a rate of at least about 23 frames per second at a resolution of at least 2k,
wherein the digital image data of the compressed raw mosaic remains substantially visually lossless.
11. The method of claim 10, wherein the step of compressing comprises compressing the digital image data of the modified original mosaic at an effective compression rate of at least 6:1.
12. The method of claim 10, wherein the step of compressing comprises compressing the digital image data of the modified original mosaic at an effective compression rate of at least about 12:1.
13. The method of claim 10, wherein said recording step comprises recording digital image data of said compressed raw mosaics to said storage device at a rate of at least about 23.976 frames per second.
14. The method of claim 10, the step of compressing comprising compressing the digital image data of the modified original mosaic in a visually lossless manner.
15. The method of claim 10, wherein the step of recording comprises storing compressed data to a camera, the camera including the photosensitive device for the step of converting.
16. The method of claim 10, further comprising the step of deleting half of the third image data representing the third color.
17. The method of claim 16, wherein the third image data represents green image data.
18. The method of claim 10, further comprising decompressing the digital image data of the compressed raw mosaic.
19. The method of claim 18, further comprising demosaicing the decompressed digital image data of the original mosaic.
HK10108352.3A 2007-04-11 2008-04-11 Video camera HK1141893B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US91119607P 2007-04-11 2007-04-11
US60/911,196 2007-04-11
US1740607P 2007-12-28 2007-12-28
US61/017,406 2007-12-28
PCT/US2008/060126 WO2008128112A1 (en) 2007-04-11 2008-04-11 Video camera

Publications (2)

Publication Number Publication Date
HK1141893A1 HK1141893A1 (en) 2010-11-19
HK1141893B true HK1141893B (en) 2015-08-21


Similar Documents

Publication Publication Date Title
KR101478380B1 (en) Video camera
US8878952B2 (en) Video camera
AU2016213747B2 (en) Video camera
AU2012216606B2 (en) Video camera
HK1141893B (en) Video camera
US20200005434A1 (en) Video capture devices and methods